00:00:00.000 Started by upstream project "spdk-dpdk-per-patch" build number 225
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.029 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.030 The recommended git tool is: git
00:00:00.030 using credential 00000000-0000-0000-0000-000000000002
00:00:00.033 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.058 Fetching changes from the remote Git repository
00:00:00.060 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.118 Using shallow fetch with depth 1
00:00:00.118 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.118 > git --version # timeout=10
00:00:00.173 > git --version # 'git version 2.39.2'
00:00:00.173 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.174 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.174 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:05.507 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:05.517 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:05.530 Checking out Revision d55dd09e9e6d4661df5d1073790609767cbcb60c (FETCH_HEAD)
00:00:05.530 > git config core.sparsecheckout # timeout=10
00:00:05.542 > git read-tree -mu HEAD # timeout=10
00:00:05.560 > git checkout -f d55dd09e9e6d4661df5d1073790609767cbcb60c # timeout=5
00:00:05.578 Commit message: "ansible/roles/custom_facts: Add subsystem info to VMDs' nvmes"
00:00:05.579 > git rev-list --no-walk d55dd09e9e6d4661df5d1073790609767cbcb60c # timeout=10
00:00:05.683 [Pipeline] Start of Pipeline
00:00:05.694 [Pipeline] library
00:00:05.695 Loading library shm_lib@master
00:00:05.695 Library shm_lib@master is cached. Copying from home.
00:00:05.709 [Pipeline] node
00:00:05.716 Running on GP12 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:05.717 [Pipeline] {
00:00:05.728 [Pipeline] catchError
00:00:05.730 [Pipeline] {
00:00:05.741 [Pipeline] wrap
00:00:05.749 [Pipeline] {
00:00:05.757 [Pipeline] stage
00:00:05.759 [Pipeline] { (Prologue)
00:00:05.912 [Pipeline] sh
00:00:06.218 + logger -p user.info -t JENKINS-CI
00:00:06.235 [Pipeline] echo
00:00:06.237 Node: GP12
00:00:06.245 [Pipeline] sh
00:00:06.535 [Pipeline] setCustomBuildProperty
00:00:06.550 [Pipeline] echo
00:00:06.552 Cleanup processes
00:00:06.557 [Pipeline] sh
00:00:06.837 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.837 961048 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.848 [Pipeline] sh
00:00:07.123 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:07.123 ++ grep -v 'sudo pgrep'
00:00:07.123 ++ awk '{print $1}'
00:00:07.123 + sudo kill -9
00:00:07.123 + true
00:00:07.137 [Pipeline] cleanWs
00:00:07.146 [WS-CLEANUP] Deleting project workspace...
00:00:07.146 [WS-CLEANUP] Deferred wipeout is used...
00:00:07.152 [WS-CLEANUP] done
00:00:07.157 [Pipeline] setCustomBuildProperty
00:00:07.169 [Pipeline] sh
00:00:07.442 + sudo git config --global --replace-all safe.directory '*'
00:00:07.522 [Pipeline] nodesByLabel
00:00:07.523 Found a total of 1 nodes with the 'sorcerer' label
00:00:07.533 [Pipeline] httpRequest
00:00:07.537 HttpMethod: GET
00:00:07.537 URL: http://10.211.164.101/packages/jbp_d55dd09e9e6d4661df5d1073790609767cbcb60c.tar.gz
00:00:07.542 Sending request to url: http://10.211.164.101/packages/jbp_d55dd09e9e6d4661df5d1073790609767cbcb60c.tar.gz
00:00:07.543 Response Code: HTTP/1.1 200 OK
00:00:07.544 Success: Status code 200 is in the accepted range: 200,404
00:00:07.544 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_d55dd09e9e6d4661df5d1073790609767cbcb60c.tar.gz
00:00:08.098 [Pipeline] sh
00:00:08.379 + tar --no-same-owner -xf jbp_d55dd09e9e6d4661df5d1073790609767cbcb60c.tar.gz
00:00:08.398 [Pipeline] httpRequest
00:00:08.404 HttpMethod: GET
00:00:08.404 URL: http://10.211.164.101/packages/spdk_1b4773b8f2e2f701efeb92fba48b0e952ce56001.tar.gz
00:00:08.405 Sending request to url: http://10.211.164.101/packages/spdk_1b4773b8f2e2f701efeb92fba48b0e952ce56001.tar.gz
00:00:08.406 Response Code: HTTP/1.1 200 OK
00:00:08.407 Success: Status code 200 is in the accepted range: 200,404
00:00:08.407 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_1b4773b8f2e2f701efeb92fba48b0e952ce56001.tar.gz
00:00:22.208 [Pipeline] sh
00:00:22.489 + tar --no-same-owner -xf spdk_1b4773b8f2e2f701efeb92fba48b0e952ce56001.tar.gz
00:00:25.032 [Pipeline] sh
00:00:25.313 + git -C spdk log --oneline -n5
00:00:25.313 1b4773b8f dpdk/crypto: increase RTE_CRYPTO_MAX_DEVS to fit QAT SYM and ASYM VFs
00:00:25.313 bf8dcb56e rpc: add validation for timeout value
00:00:25.313 6d8618afc lib/nvmf: Add get ANA state API
00:00:25.313 60d80d591 bdev/nvme: Fix pending resets to move to next ctrlr
00:00:25.313 2e6e9553a raid: update base_info->blockcnt
00:00:25.326 [Pipeline] sh
00:00:25.607 + git -C spdk/dpdk fetch https://review.spdk.io/gerrit/spdk/dpdk refs/changes/89/22689/1
00:00:26.542 From https://review.spdk.io/gerrit/spdk/dpdk
00:00:26.542 * branch refs/changes/89/22689/1 -> FETCH_HEAD
00:00:26.556 [Pipeline] sh
00:00:26.833 + git -C spdk/dpdk checkout FETCH_HEAD
00:00:27.767 Previous HEAD position was afe4186365 pmdinfogen: avoid empty string in ELFSymbol()
00:00:27.767 HEAD is now at b5bfcf3f75 isal: compile compress_isal PMD without system-wide libisal
00:00:27.776 [Pipeline] }
00:00:27.793 [Pipeline] // stage
00:00:27.805 [Pipeline] stage
00:00:27.806 [Pipeline] { (Prepare)
00:00:27.823 [Pipeline] writeFile
00:00:27.842 [Pipeline] sh
00:00:28.119 + logger -p user.info -t JENKINS-CI
00:00:28.131 [Pipeline] sh
00:00:28.408 + logger -p user.info -t JENKINS-CI
00:00:28.419 [Pipeline] sh
00:00:28.696 + cat autorun-spdk.conf
00:00:28.696 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:28.696 SPDK_TEST_NVMF=1
00:00:28.696 SPDK_TEST_NVME_CLI=1
00:00:28.696 SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:28.696 SPDK_TEST_NVMF_NICS=e810
00:00:28.696 SPDK_TEST_VFIOUSER=1
00:00:28.696 SPDK_RUN_UBSAN=1
00:00:28.696 NET_TYPE=phy
00:00:28.702 RUN_NIGHTLY=
00:00:28.708 [Pipeline] readFile
00:00:28.735 [Pipeline] withEnv
00:00:28.737 [Pipeline] {
00:00:28.753 [Pipeline] sh
00:00:29.033 + set -ex
00:00:29.033 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:00:29.033 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:00:29.033 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:29.033 ++ SPDK_TEST_NVMF=1
00:00:29.033 ++ SPDK_TEST_NVME_CLI=1
00:00:29.033 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:29.033 ++ SPDK_TEST_NVMF_NICS=e810
00:00:29.033 ++ SPDK_TEST_VFIOUSER=1
00:00:29.033 ++ SPDK_RUN_UBSAN=1
00:00:29.033 ++ NET_TYPE=phy
00:00:29.033 ++ RUN_NIGHTLY=
00:00:29.033 + case $SPDK_TEST_NVMF_NICS in
00:00:29.033 + DRIVERS=ice
00:00:29.033 + [[ tcp == \r\d\m\a ]]
00:00:29.033 + [[ -n ice ]]
00:00:29.033 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:00:29.033 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:00:33.228 rmmod: ERROR: Module irdma is not currently loaded
00:00:33.228 rmmod: ERROR: Module i40iw is not currently loaded
00:00:33.228 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:00:33.228 + true
00:00:33.228 + for D in $DRIVERS
00:00:33.228 + sudo modprobe ice
00:00:33.228 + exit 0
00:00:33.237 [Pipeline] }
00:00:33.255 [Pipeline] // withEnv
00:00:33.260 [Pipeline] }
00:00:33.275 [Pipeline] // stage
00:00:33.283 [Pipeline] catchError
00:00:33.285 [Pipeline] {
00:00:33.295 [Pipeline] timeout
00:00:33.295 Timeout set to expire in 40 min
00:00:33.297 [Pipeline] {
00:00:33.312 [Pipeline] stage
00:00:33.314 [Pipeline] { (Tests)
00:00:33.331 [Pipeline] sh
00:00:33.611 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:33.611 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:33.611 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:33.611 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:00:33.611 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:33.611 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:00:33.611 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:00:33.611 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:00:33.611 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:00:33.611 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:00:33.611 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:33.611 + source /etc/os-release
00:00:33.611 ++ NAME='Fedora Linux'
00:00:33.611 ++ VERSION='38 (Cloud Edition)'
00:00:33.611 ++ ID=fedora
00:00:33.611 ++ VERSION_ID=38
00:00:33.611 ++ VERSION_CODENAME=
00:00:33.611 ++ PLATFORM_ID=platform:f38
00:00:33.611 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:00:33.611 ++ ANSI_COLOR='0;38;2;60;110;180'
00:00:33.611 ++ LOGO=fedora-logo-icon
00:00:33.611 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:00:33.611 ++ HOME_URL=https://fedoraproject.org/
00:00:33.611 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:00:33.611 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:00:33.611 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:00:33.611 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:00:33.611 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:00:33.611 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:00:33.611 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:00:33.611 ++ SUPPORT_END=2024-05-14
00:00:33.611 ++ VARIANT='Cloud Edition'
00:00:33.611 ++ VARIANT_ID=cloud
00:00:33.611 + uname -a
00:00:33.611 Linux spdk-gp-12 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:00:33.611 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:00:34.986 Hugepages
00:00:34.986 node hugesize free / total
00:00:34.986 node0 1048576kB 0 / 0
00:00:34.986 node0 2048kB 0 / 0
00:00:34.986 node1 1048576kB 0 / 0
00:00:34.986 node1 2048kB 0 / 0
00:00:34.986
00:00:34.986 Type BDF Vendor Device NUMA Driver Device Block devices
00:00:34.986 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - -
00:00:34.986 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - -
00:00:34.986 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - -
00:00:34.986 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - -
00:00:34.986 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - -
00:00:34.986 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - -
00:00:34.986 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - -
00:00:34.986 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - -
00:00:34.986 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - -
00:00:34.986 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - -
00:00:34.986 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - -
00:00:34.986 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - -
00:00:34.986 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - -
00:00:34.986 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - -
00:00:34.986 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - -
00:00:34.986 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - -
00:00:34.986 NVMe 0000:81:00.0 8086 0a54 1 nvme nvme0 nvme0n1
00:00:34.986 + rm -f /tmp/spdk-ld-path
00:00:34.986 + source autorun-spdk.conf
00:00:34.986 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:34.986 ++ SPDK_TEST_NVMF=1
00:00:34.986 ++ SPDK_TEST_NVME_CLI=1
00:00:34.986 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:34.986 ++ SPDK_TEST_NVMF_NICS=e810
00:00:34.986 ++ SPDK_TEST_VFIOUSER=1
00:00:34.986 ++ SPDK_RUN_UBSAN=1
00:00:34.986 ++ NET_TYPE=phy
00:00:34.986 ++ RUN_NIGHTLY=
00:00:34.986 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:00:34.986 + [[ -n '' ]]
00:00:34.986 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:34.986 + for M in /var/spdk/build-*-manifest.txt
00:00:34.986 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:00:34.986 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:00:34.986 + for M in /var/spdk/build-*-manifest.txt
00:00:34.986 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:00:34.986 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:00:34.987 ++ uname
00:00:34.987 + [[ Linux == \L\i\n\u\x ]]
00:00:34.987 + sudo dmesg -T
00:00:34.987 + sudo dmesg --clear
00:00:34.987 + dmesg_pid=961862
00:00:34.987 + [[ Fedora Linux == FreeBSD ]]
00:00:34.987 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:34.987 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:34.987 + sudo dmesg -Tw
00:00:34.987 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:00:34.987 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:00:34.987 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:00:34.987 + [[ -x /usr/src/fio-static/fio ]]
00:00:34.987 + export FIO_BIN=/usr/src/fio-static/fio
00:00:34.987 + FIO_BIN=/usr/src/fio-static/fio
00:00:34.987 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:00:34.987 + [[ ! -v VFIO_QEMU_BIN ]]
00:00:34.987 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:00:34.987 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:00:34.987 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:00:34.987 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:00:34.987 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:00:34.987 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:00:34.987 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:00:34.987 Test configuration:
00:00:34.987 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:34.987 SPDK_TEST_NVMF=1
00:00:34.987 SPDK_TEST_NVME_CLI=1
00:00:34.987 SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:34.987 SPDK_TEST_NVMF_NICS=e810
00:00:34.987 SPDK_TEST_VFIOUSER=1
00:00:34.987 SPDK_RUN_UBSAN=1
00:00:34.987 NET_TYPE=phy
00:00:34.987 RUN_NIGHTLY=
12:28:33 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
12:28:33 -- scripts/common.sh@502 -- $ [[ -e /bin/wpdk_common.sh ]]
12:28:33 -- scripts/common.sh@510 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
12:28:33 -- scripts/common.sh@511 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
12:28:33 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
12:28:33 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
12:28:33 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
12:28:33 -- paths/export.sh@5 -- $ export PATH
12:28:33 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
12:28:33 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
12:28:33 -- common/autobuild_common.sh@435 -- $ date +%s
12:28:33 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1713263313.XXXXXX
12:28:33 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1713263313.lt2kmD
12:28:33 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]]
12:28:33 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']'
12:28:33 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
12:28:33 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
12:28:33 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
12:28:33 -- common/autobuild_common.sh@451 -- $ get_config_params
12:28:33 -- common/autotest_common.sh@385 -- $ xtrace_disable
12:28:33 -- common/autotest_common.sh@10 -- $ set +x
12:28:33 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
12:28:33 -- common/autobuild_common.sh@453 -- $ start_monitor_resources
12:28:33 -- pm/common@17 -- $ local monitor
12:28:33 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
12:28:33 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=961896
12:28:33 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
12:28:33 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=961898
12:28:33 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
12:28:33 -- pm/common@21 -- $ date +%s
12:28:33 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=961900
12:28:33 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
12:28:33 -- pm/common@21 -- $ date +%s
12:28:33 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=961903
12:28:33 -- pm/common@21 -- $ date +%s
12:28:33 -- pm/common@26 -- $ sleep 1
12:28:33 -- pm/common@21 -- $ date +%s
12:28:33 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1713263313
12:28:33 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1713263313
12:28:33 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1713263313
12:28:33 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1713263313
00:00:34.987 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1713263313_collect-vmstat.pm.log
00:00:34.987 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1713263313_collect-bmc-pm.bmc.pm.log
00:00:34.987 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1713263313_collect-cpu-load.pm.log
00:00:34.987 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1713263313_collect-cpu-temp.pm.log
00:00:36.357 12:28:34 -- common/autobuild_common.sh@454 -- $ trap stop_monitor_resources EXIT
00:00:36.357 12:28:34 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:00:36.357 12:28:34 -- spdk/autobuild.sh@12 -- $ umask 022
00:00:36.357 12:28:34 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:36.357 12:28:34 -- spdk/autobuild.sh@16 -- $ date -u
00:00:36.357 Tue Apr 16 10:28:34 AM UTC 2024
00:00:36.357 12:28:34 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:00:36.357 v24.05-pre-399-g1b4773b8f
00:00:36.357 12:28:35 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:00:36.357 12:28:35 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:00:36.357 12:28:35 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:00:36.357 12:28:35 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']'
00:00:36.357 12:28:35 -- common/autotest_common.sh@1093 -- $ xtrace_disable
00:00:36.357 12:28:35 -- common/autotest_common.sh@10 -- $ set +x
00:00:36.358 ************************************
00:00:36.358 START TEST ubsan
00:00:36.358 ************************************
00:00:36.358 12:28:35 -- common/autotest_common.sh@1111 -- $ echo 'using ubsan'
00:00:36.358 using ubsan
00:00:36.358
00:00:36.358 real 0m0.000s
00:00:36.358 user 0m0.000s
00:00:36.358 sys 0m0.000s
00:00:36.358 12:28:35 -- common/autotest_common.sh@1112 -- $ xtrace_disable
00:00:36.358 12:28:35 -- common/autotest_common.sh@10 -- $ set +x
00:00:36.358 ************************************
00:00:36.358 END TEST ubsan
00:00:36.358 ************************************
00:00:36.358 12:28:35 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:00:36.358 12:28:35 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:00:36.358 12:28:35 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:00:36.358 12:28:35 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:00:36.358 12:28:35 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:00:36.358 12:28:35 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:00:36.358 12:28:35 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:00:36.358 12:28:35 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:00:36.358 12:28:35 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:00:36.358 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:00:36.358 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:00:36.616 Using 'verbs' RDMA provider
00:00:47.146 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:00:57.134 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:00:57.134 Creating mk/config.mk...done.
00:00:57.134 Creating mk/cc.flags.mk...done.
00:00:57.134 Type 'make' to build.
12:28:55 -- spdk/autobuild.sh@69 -- $ run_test make make -j48
12:28:55 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']'
12:28:55 -- common/autotest_common.sh@1093 -- $ xtrace_disable
12:28:55 -- common/autotest_common.sh@10 -- $ set +x
00:00:57.134 ************************************
00:00:57.134 START TEST make
00:00:57.134 ************************************
12:28:55 -- common/autotest_common.sh@1111 -- $ make -j48
00:00:57.134 make[1]: Nothing to be done for 'all'.
00:00:58.522 The Meson build system
00:00:58.522 Version: 1.3.1
00:00:58.522 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:00:58.522 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:00:58.522 Build type: native build
00:00:58.522 Project name: libvfio-user
00:00:58.522 Project version: 0.0.1
00:00:58.522 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:00:58.522 C linker for the host machine: cc ld.bfd 2.39-16
00:00:58.522 Host machine cpu family: x86_64
00:00:58.522 Host machine cpu: x86_64
00:00:58.522 Run-time dependency threads found: YES
00:00:58.522 Library dl found: YES
00:00:58.522 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:00:58.522 Run-time dependency json-c found: YES 0.17
00:00:58.523 Run-time dependency cmocka found: YES 1.1.7
00:00:58.523 Program pytest-3 found: NO
00:00:58.523 Program flake8 found: NO
00:00:58.523 Program misspell-fixer found: NO
00:00:58.523 Program restructuredtext-lint found: NO
00:00:58.523 Program valgrind found: YES (/usr/bin/valgrind)
00:00:58.523 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:00:58.523 Compiler for C supports arguments -Wmissing-declarations: YES
00:00:58.523 Compiler for C supports arguments -Wwrite-strings: YES
00:00:58.523 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:00:58.523 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:00:58.523 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:00:58.523 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:00:58.523 Build targets in project: 8
00:00:58.523 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:00:58.523 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:00:58.523
00:00:58.523 libvfio-user 0.0.1
00:00:58.523
00:00:58.523 User defined options
00:00:58.523 buildtype : debug
00:00:58.523 default_library: shared
00:00:58.523 libdir : /usr/local/lib
00:00:58.523
00:00:58.523 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:00:59.469 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:00:59.469 [1/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:00:59.469 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:00:59.469 [3/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:00:59.469 [4/37] Compiling C object samples/lspci.p/lspci.c.o
00:00:59.734 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:00:59.734 [6/37] Compiling C object samples/null.p/null.c.o
00:00:59.734 [7/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:00:59.734 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:00:59.734 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:00:59.734 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:00:59.734 [11/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:00:59.734 [12/37] Compiling C object test/unit_tests.p/mocks.c.o
00:00:59.734 [13/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:00:59.734 [14/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:00:59.734 [15/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:00:59.734 [16/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:00:59.734 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:00:59.734 [18/37] Compiling C object samples/client.p/client.c.o
00:00:59.734 [19/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:00:59.734 [20/37] Compiling C object samples/server.p/server.c.o
00:00:59.734 [21/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:00:59.734 [22/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:00:59.734 [23/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:00:59.734 [24/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:00:59.734 [25/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:00:59.734 [26/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:00:59.734 [27/37] Linking target samples/client
00:00:59.999 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:00:59.999 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:00:59.999 [30/37] Linking target lib/libvfio-user.so.0.0.1
00:00:59.999 [31/37] Linking target test/unit_tests
00:01:00.258 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:01:00.258 [33/37] Linking target samples/server
00:01:00.258 [34/37] Linking target samples/null
00:01:00.258 [35/37] Linking target samples/gpio-pci-idio-16
00:01:00.258 [36/37] Linking target samples/lspci
00:01:00.258 [37/37] Linking target samples/shadow_ioeventfd_server
00:01:00.258 INFO: autodetecting backend as ninja
00:01:00.258 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:00.519 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:01.098 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:01.098 ninja: no work to do.
00:01:06.394 The Meson build system
00:01:06.394 Version: 1.3.1
00:01:06.394 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:01:06.394 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:01:06.394 Build type: native build
00:01:06.394 Program cat found: YES (/usr/bin/cat)
00:01:06.394 Project name: DPDK
00:01:06.394 Project version: 24.03.0
00:01:06.394 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:01:06.394 C linker for the host machine: cc ld.bfd 2.39-16
00:01:06.394 Host machine cpu family: x86_64
00:01:06.394 Host machine cpu: x86_64
00:01:06.394 Message: ## Building in Developer Mode ##
00:01:06.394 Program pkg-config found: YES (/usr/bin/pkg-config)
00:01:06.394 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:01:06.394 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:01:06.394 Program python3 found: YES (/usr/bin/python3)
00:01:06.394 Program cat found: YES (/usr/bin/cat)
00:01:06.394 Compiler for C supports arguments -march=native: YES
00:01:06.394 Checking for size of "void *" : 8
00:01:06.394 Checking for size of "void *" : 8 (cached)
00:01:06.394 Compiler for C supports link arguments -Wl,--undefined-version: NO
00:01:06.394 Library m found: YES
00:01:06.394 Library numa found: YES
00:01:06.394 Has header "numaif.h" : YES
00:01:06.394 Library fdt found: NO
00:01:06.394 Library execinfo found: NO
00:01:06.394 Has header "execinfo.h" : YES
00:01:06.394 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:01:06.394 Run-time dependency libarchive found: NO (tried pkgconfig)
00:01:06.394 Run-time dependency libbsd found: NO (tried pkgconfig)
00:01:06.394 Run-time dependency jansson found: NO (tried pkgconfig)
00:01:06.394 Run-time dependency openssl found: YES 3.0.9
00:01:06.394 Run-time dependency libpcap found: YES 1.10.4
00:01:06.394 Has header "pcap.h" with dependency libpcap: YES
00:01:06.394 Compiler for C supports arguments -Wcast-qual: YES
00:01:06.394 Compiler for C supports arguments -Wdeprecated: YES
00:01:06.394 Compiler for C supports arguments -Wformat: YES
00:01:06.394 Compiler for C supports arguments -Wformat-nonliteral: NO
00:01:06.394 Compiler for C supports arguments -Wformat-security: NO
00:01:06.394 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:06.394 Compiler for C supports arguments -Wmissing-prototypes: YES
00:01:06.394 Compiler for C supports arguments -Wnested-externs: YES
00:01:06.394 Compiler for C supports arguments -Wold-style-definition: YES
00:01:06.394 Compiler for C supports arguments -Wpointer-arith: YES
00:01:06.394 Compiler for C supports arguments -Wsign-compare: YES
00:01:06.394 Compiler for C supports arguments -Wstrict-prototypes: YES
00:01:06.394 Compiler for C supports arguments -Wundef: YES
00:01:06.394 Compiler for C supports arguments -Wwrite-strings: YES
00:01:06.394 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:01:06.394 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:01:06.394 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:06.394 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:01:06.394 Program objdump found: YES (/usr/bin/objdump)
00:01:06.394 Compiler for C supports arguments -mavx512f: YES
00:01:06.394 Checking if "AVX512 checking" compiles: YES
00:01:06.394 Fetching value of define "__SSE4_2__" : 1
00:01:06.394 Fetching value of define "__AES__" : 1
00:01:06.394 Fetching value of define "__AVX__" : 1
00:01:06.394 Fetching value of define "__AVX2__" : (undefined)
00:01:06.394 Fetching value of define "__AVX512BW__" : (undefined)
00:01:06.394 Fetching value of define "__AVX512CD__" : (undefined)
00:01:06.394 Fetching value of define "__AVX512DQ__" : (undefined)
00:01:06.394 Fetching value of define "__AVX512F__" : (undefined)
00:01:06.394 Fetching value of define "__AVX512VL__" : (undefined)
00:01:06.394 Fetching value of define "__PCLMUL__" : 1
00:01:06.394 Fetching value of define "__RDRND__" : 1
00:01:06.394 Fetching value of define "__RDSEED__" : (undefined)
00:01:06.394 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:01:06.394 Fetching value of define "__znver1__" : (undefined)
00:01:06.394 Fetching value of define "__znver2__" : (undefined)
00:01:06.394 Fetching value of define "__znver3__" : (undefined)
00:01:06.394 Fetching value of define "__znver4__" : (undefined)
00:01:06.394 Compiler for C supports arguments -Wno-format-truncation: YES
00:01:06.394 Message: lib/log: Defining dependency "log"
00:01:06.394 Message: lib/kvargs: Defining dependency "kvargs"
00:01:06.394 Message: lib/telemetry: Defining dependency "telemetry"
00:01:06.394 Checking for function "getentropy" : NO
00:01:06.394 Message: lib/eal: Defining dependency "eal"
00:01:06.394 Message: lib/ring: Defining dependency "ring"
00:01:06.394 Message: lib/rcu: Defining dependency "rcu"
00:01:06.394 Message: lib/mempool: Defining dependency "mempool"
00:01:06.394 Message: lib/mbuf: Defining dependency "mbuf"
00:01:06.394 Fetching value of define "__PCLMUL__" : 1 (cached)
00:01:06.394 Fetching value of define "__AVX512F__" : (undefined) (cached)
00:01:06.394 Compiler for C supports arguments -mpclmul: YES
00:01:06.394 Compiler for C supports arguments -maes: YES
00:01:06.394 Compiler for C supports arguments -mavx512f: YES (cached)
00:01:06.394 Compiler for C supports arguments -mavx512bw: YES
00:01:06.394 Compiler for C supports arguments -mavx512dq: YES
00:01:06.394 Compiler for C supports arguments -mavx512vl: YES
00:01:06.394 Compiler for C supports arguments -mvpclmulqdq: YES
00:01:06.394 Compiler for C supports arguments -mavx2: YES
00:01:06.394 Compiler for C supports arguments -mavx: YES
00:01:06.394 Message: lib/net: Defining dependency "net"
00:01:06.394 Message: lib/meter: Defining dependency "meter"
00:01:06.394 Message: lib/ethdev: Defining dependency "ethdev"
00:01:06.394 Message: lib/pci: Defining dependency "pci"
00:01:06.394 Message: lib/cmdline: Defining dependency "cmdline"
00:01:06.394 Message: lib/hash: Defining dependency "hash"
00:01:06.394 Message: lib/timer: Defining dependency "timer"
00:01:06.394 Message: lib/compressdev: Defining dependency "compressdev"
00:01:06.394 Message: lib/cryptodev: Defining dependency "cryptodev"
00:01:06.394 Message: lib/dmadev: Defining dependency "dmadev"
00:01:06.394 Compiler for C supports arguments -Wno-cast-qual: YES
00:01:06.394 Message: lib/power: Defining dependency "power"
00:01:06.394 Message: lib/reorder: Defining dependency "reorder"
00:01:06.394 Message: lib/security: Defining dependency "security"
00:01:06.394 Has header "linux/userfaultfd.h" : YES
00:01:06.394 Has header "linux/vduse.h" : YES
00:01:06.394 Message: lib/vhost: Defining dependency "vhost"
00:01:06.394 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:01:06.394 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:01:06.394 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:01:06.394 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:01:06.394 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:01:06.394 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:01:06.394 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:01:06.394 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:01:06.394 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:01:06.394 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:01:06.394 Program doxygen found: YES (/usr/bin/doxygen)
00:01:06.394 Configuring doxy-api-html.conf using configuration
00:01:06.394 Configuring doxy-api-man.conf using configuration
00:01:06.394 Program mandb found: YES (/usr/bin/mandb)
00:01:06.394 Program sphinx-build found: NO
00:01:06.394 Configuring rte_build_config.h using configuration
00:01:06.394 Message:
00:01:06.394 =================
00:01:06.394 Applications Enabled
00:01:06.394 =================
00:01:06.395
00:01:06.395 apps:
00:01:06.395
00:01:06.395
00:01:06.395 Message:
00:01:06.395 =================
00:01:06.395 Libraries Enabled
00:01:06.395 =================
00:01:06.395
00:01:06.395 libs:
00:01:06.395 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:01:06.395 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:01:06.395 cryptodev, dmadev, power, reorder, security, vhost,
00:01:06.395
00:01:06.395 Message:
00:01:06.395 ===============
00:01:06.395 Drivers Enabled
00:01:06.395 ===============
00:01:06.395
00:01:06.395 common:
00:01:06.395
00:01:06.395 bus:
00:01:06.395 pci, vdev,
00:01:06.395 mempool:
00:01:06.395 ring,
00:01:06.395 dma:
00:01:06.395
00:01:06.395 net:
00:01:06.395
00:01:06.395 crypto:
00:01:06.395
00:01:06.395 compress:
00:01:06.395
00:01:06.395 vdpa:
00:01:06.395
00:01:06.395
00:01:06.395 Message:
00:01:06.395 =================
00:01:06.395 Content Skipped
00:01:06.395 =================
00:01:06.395
00:01:06.395 apps:
00:01:06.395 dumpcap: explicitly disabled via build config
00:01:06.395 graph: explicitly disabled via build config
00:01:06.395 pdump: explicitly disabled via build config
00:01:06.395 proc-info: explicitly disabled via build config
00:01:06.395 test-acl: explicitly disabled via build config
00:01:06.395 test-bbdev: explicitly disabled via build config
00:01:06.395 test-cmdline: explicitly disabled via build config
00:01:06.395 test-compress-perf: explicitly disabled via build config
00:01:06.395 test-crypto-perf: explicitly disabled via build config
00:01:06.395 test-dma-perf: explicitly disabled via build config
00:01:06.395 test-eventdev: explicitly disabled via build config
00:01:06.395 test-fib: explicitly disabled via build config
00:01:06.395 test-flow-perf: explicitly disabled via build config
00:01:06.395 test-gpudev: explicitly disabled via build config
00:01:06.395 test-mldev: explicitly disabled via build config
00:01:06.395 test-pipeline: explicitly disabled via build config
00:01:06.395 test-pmd: explicitly disabled via build config
00:01:06.395 test-regex: explicitly disabled via build config
00:01:06.395 test-sad: explicitly disabled via build config
00:01:06.395 test-security-perf: explicitly disabled via build config
00:01:06.395
00:01:06.395 libs:
00:01:06.395 argparse: explicitly disabled via build config
00:01:06.395 metrics: explicitly disabled via build config
00:01:06.395 acl: explicitly disabled via build config
00:01:06.395 bbdev: explicitly disabled via build config
00:01:06.395 bitratestats: explicitly disabled via build config
00:01:06.395 bpf: explicitly disabled via build config
00:01:06.395 cfgfile: explicitly disabled via build config
00:01:06.395 distributor: explicitly disabled via build config
00:01:06.395 efd: explicitly disabled via build config
00:01:06.395 eventdev: explicitly disabled via build config
00:01:06.395 dispatcher: explicitly disabled via build config
00:01:06.395 gpudev: explicitly disabled via build config
00:01:06.395 gro: explicitly disabled via build config
00:01:06.395 gso: explicitly disabled via build config
00:01:06.395 ip_frag: explicitly disabled via build config
00:01:06.395 jobstats: explicitly disabled via build config
00:01:06.395 latencystats: explicitly disabled via build config
00:01:06.395 lpm: explicitly disabled via build config
00:01:06.395 member: explicitly disabled via build config
00:01:06.395 pcapng: explicitly disabled via build config
00:01:06.395 rawdev: explicitly disabled via build config
00:01:06.395 regexdev: explicitly disabled via build config
00:01:06.395 mldev: explicitly disabled via build config
00:01:06.395 rib: explicitly disabled via build config
00:01:06.395 sched: explicitly disabled via build config
00:01:06.395 stack: explicitly disabled via build config
00:01:06.395 ipsec: explicitly disabled via build config
00:01:06.395 pdcp: explicitly disabled via build config
00:01:06.395 fib: explicitly disabled via build config
00:01:06.395 port: explicitly disabled via build config
00:01:06.395 pdump: explicitly disabled via build config
00:01:06.395 table: explicitly disabled via build config
00:01:06.395 pipeline: explicitly disabled via build config
00:01:06.395 graph: explicitly disabled via build config
00:01:06.395 node: explicitly disabled via build config
00:01:06.395
00:01:06.395 drivers:
00:01:06.395 common/cpt: not in enabled drivers build config
00:01:06.395 common/dpaax: not in enabled drivers build config
00:01:06.395 common/iavf: not in enabled drivers build config
00:01:06.395 common/idpf: not in enabled drivers build config
00:01:06.395 common/ionic: not in enabled drivers build config
00:01:06.395 common/mvep: not in enabled drivers build config
00:01:06.395 common/octeontx: not in enabled drivers build config
00:01:06.395 bus/auxiliary: not in enabled drivers build config
00:01:06.395 bus/cdx: not in enabled drivers build config
00:01:06.395 bus/dpaa: not in enabled drivers build config
00:01:06.395 bus/fslmc: not in enabled drivers build config
00:01:06.395 bus/ifpga: not in enabled drivers build config
00:01:06.395 bus/platform: not in enabled drivers build config
00:01:06.395 bus/uacce: not in enabled drivers build config
00:01:06.395 bus/vmbus: not in enabled drivers build config
00:01:06.395 common/cnxk: not in enabled drivers build config
00:01:06.395 common/mlx5: not in enabled drivers build config
00:01:06.395 common/nfp: not in enabled drivers build config
00:01:06.395 common/nitrox: not in enabled drivers build config
00:01:06.395 common/qat: not in enabled drivers build config
00:01:06.395 common/sfc_efx: not in enabled drivers build config
00:01:06.395 mempool/bucket: not in enabled drivers build config
00:01:06.395 mempool/cnxk: not in enabled drivers build config
00:01:06.395 mempool/dpaa: not in enabled drivers build config
00:01:06.395 mempool/dpaa2: not in enabled drivers build config
00:01:06.395 mempool/octeontx: not in enabled drivers build config
00:01:06.395 mempool/stack: not in enabled drivers build config
00:01:06.395 dma/cnxk: not in enabled drivers build config
00:01:06.395 dma/dpaa: not in enabled drivers build config
00:01:06.395 dma/dpaa2: not in enabled drivers build config
00:01:06.395 dma/hisilicon: not in enabled drivers build config
00:01:06.395 dma/idxd: not in enabled drivers build config
00:01:06.395 dma/ioat: not in enabled drivers build config
00:01:06.395 dma/skeleton: not in enabled drivers build config
00:01:06.395 net/af_packet: not in enabled drivers build config
00:01:06.395 net/af_xdp: not in enabled drivers build config
00:01:06.395 net/ark: not in enabled drivers build config
00:01:06.395 net/atlantic: not in enabled drivers build config
00:01:06.395 net/avp: not in enabled drivers build config
00:01:06.395 net/axgbe: not in enabled drivers build config
00:01:06.395 net/bnx2x: not in enabled drivers build config
00:01:06.395 net/bnxt: not in enabled drivers build config
00:01:06.395 net/bonding: not in enabled drivers build config
00:01:06.395 net/cnxk: not in enabled drivers build config
00:01:06.395 net/cpfl: not in enabled drivers build config
00:01:06.395 net/cxgbe: not in enabled drivers build config
00:01:06.395 net/dpaa: not in enabled drivers build config
00:01:06.395 net/dpaa2: not in enabled drivers build config
00:01:06.395 net/e1000: not in enabled drivers build config
00:01:06.395 net/ena: not in enabled drivers build config
00:01:06.395 net/enetc: not in enabled drivers build config
00:01:06.395 net/enetfec: not in enabled drivers build config
00:01:06.395 net/enic: not in enabled drivers build config
00:01:06.395 net/failsafe: not in enabled drivers build config
00:01:06.395 net/fm10k: not in enabled drivers build config
00:01:06.395 net/gve: not in enabled drivers build config
00:01:06.395 net/hinic: not in enabled drivers build config
00:01:06.395 net/hns3: not in enabled drivers build config
00:01:06.395 net/i40e: not in enabled drivers build config
00:01:06.395 net/iavf: not in enabled drivers build config
00:01:06.395 net/ice: not in enabled drivers build config
00:01:06.395 net/idpf: not in enabled drivers build config
00:01:06.395 net/igc: not in enabled drivers build config
00:01:06.395 net/ionic: not in enabled drivers build config
00:01:06.395 net/ipn3ke: not in enabled drivers build config
00:01:06.395 net/ixgbe: not in enabled drivers build config
00:01:06.395 net/mana: not in enabled drivers build config
00:01:06.395 net/memif: not in enabled drivers build config
00:01:06.395 net/mlx4: not in enabled drivers build config
00:01:06.395 net/mlx5: not in enabled drivers build config
00:01:06.395 net/mvneta: not in enabled drivers build config
00:01:06.395 net/mvpp2: not in enabled drivers build config
00:01:06.395 net/netvsc: not in enabled drivers build config
00:01:06.395 net/nfb: not in enabled drivers build config
00:01:06.395 net/nfp: not in enabled drivers build config
00:01:06.396 net/ngbe: not in enabled drivers build config
00:01:06.396 net/null: not in enabled drivers build config
00:01:06.396 net/octeontx: not in enabled drivers build config
00:01:06.396 net/octeon_ep: not in enabled drivers build config
00:01:06.396 net/pcap: not in enabled drivers build config
00:01:06.396 net/pfe: not in enabled drivers build config
00:01:06.396 net/qede: not in enabled drivers build config
00:01:06.396 net/ring: not in enabled drivers build config
00:01:06.396 net/sfc: not in enabled drivers build config
00:01:06.396 net/softnic: not in enabled drivers build config
00:01:06.396 net/tap: not in enabled drivers build config
00:01:06.396 net/thunderx: not in enabled drivers build config
00:01:06.396 net/txgbe: not in enabled drivers build config
00:01:06.396 net/vdev_netvsc: not in enabled drivers build config
00:01:06.396 net/vhost: not in enabled drivers build config
00:01:06.396 net/virtio: not in enabled drivers build config
00:01:06.396 net/vmxnet3: not in enabled drivers build config
00:01:06.396 raw/*: missing internal dependency, "rawdev"
00:01:06.396 crypto/armv8: not in enabled drivers build config
00:01:06.396 crypto/bcmfs: not in enabled drivers build config
00:01:06.396 crypto/caam_jr: not in enabled drivers build config
00:01:06.396 crypto/ccp: not in enabled drivers build config
00:01:06.396 crypto/cnxk: not in enabled drivers build config
00:01:06.396 crypto/dpaa_sec: not in enabled drivers build config
00:01:06.396 crypto/dpaa2_sec: not in enabled drivers build config
00:01:06.396 crypto/ipsec_mb: not in enabled drivers build config
00:01:06.396 crypto/mlx5: not in enabled drivers build config
00:01:06.396 crypto/mvsam: not in enabled drivers build config
00:01:06.396 crypto/nitrox: not in enabled drivers build config
00:01:06.396 crypto/null: not in enabled drivers build config
00:01:06.396 crypto/octeontx: not in enabled drivers build config
00:01:06.396 crypto/openssl: not in enabled drivers build config
00:01:06.396 crypto/scheduler: not in enabled drivers build config
00:01:06.396 crypto/uadk: not in enabled drivers build config
00:01:06.396 crypto/virtio: not in enabled drivers build config
00:01:06.396 compress/isal: not in enabled drivers build config
00:01:06.396 compress/mlx5: not in enabled drivers build config
00:01:06.396 compress/nitrox: not in enabled drivers build config
00:01:06.396 compress/octeontx: not in enabled drivers build config
00:01:06.396 compress/zlib: not in enabled drivers build config
00:01:06.396 regex/*: missing internal dependency, "regexdev"
00:01:06.396 ml/*: missing internal dependency, "mldev"
00:01:06.396 vdpa/ifc: not in enabled drivers build config
00:01:06.396 vdpa/mlx5: not in enabled drivers build config
00:01:06.396 vdpa/nfp: not in enabled drivers build config
00:01:06.396 vdpa/sfc: not in enabled drivers build config
00:01:06.396 event/*: missing internal dependency, "eventdev"
00:01:06.396 baseband/*: missing internal dependency, "bbdev"
00:01:06.396 gpu/*: missing internal dependency, "gpudev"
00:01:06.396
00:01:06.396
00:01:06.396 Build targets in project: 85
00:01:06.396
00:01:06.396 DPDK 24.03.0
00:01:06.396
00:01:06.396 User defined options
00:01:06.396 buildtype : debug
00:01:06.396 default_library : shared
00:01:06.396 libdir : lib
00:01:06.396 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:01:06.396 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:01:06.396 c_link_args :
00:01:06.396 cpu_instruction_set: native
00:01:06.396 disable_apps : test-sad,graph,test-regex,dumpcap,test-eventdev,test-compress-perf,pdump,test-security-perf,test-pmd,test-flow-perf,test-pipeline,test-crypto-perf,test-gpudev,test-cmdline,test-dma-perf,proc-info,test-bbdev,test-acl,test,test-mldev,test-fib
00:01:06.396 disable_libs : sched,port,dispatcher,graph,rawdev,pdcp,bitratestats,ipsec,pcapng,pdump,gso,cfgfile,gpudev,ip_frag,node,distributor,mldev,lpm,acl,bpf,latencystats,eventdev,regexdev,gro,stack,fib,pipeline,bbdev,table,metrics,member,jobstats,efd,rib,argparse
00:01:06.396 enable_docs : false
00:01:06.396 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:01:06.396 enable_kmods : false
00:01:06.396 tests : false
00:01:06.396
00:01:06.396 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:06.396 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp'
00:01:06.396 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:01:06.396 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:01:06.396 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:01:06.396 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:01:06.396 [5/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:01:06.396 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:01:06.396 [7/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:01:06.396 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:01:06.396 [9/268] Linking static target lib/librte_kvargs.a
00:01:06.396 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:01:06.396 [11/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:01:06.396 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:01:06.396 [13/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:01:06.396 [14/268] Compiling C object lib/librte_log.a.p/log_log.c.o
00:01:06.396 [15/268] Linking static target lib/librte_log.a
00:01:06.660 [16/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:01:07.240 [17/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:01:07.240 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:01:07.240 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:01:07.240 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:01:07.240 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:01:07.240 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:01:07.240 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:01:07.240 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:01:07.240 [25/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:01:07.240 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:01:07.240 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:01:07.240 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:01:07.240 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:01:07.240 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:01:07.240 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:01:07.240 [32/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:01:07.240 [33/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:01:07.240 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:01:07.240 [35/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:01:07.240 [36/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:01:07.240 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:01:07.240 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:01:07.240 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:01:07.240 [40/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:01:07.240 [41/268] Linking static target lib/librte_telemetry.a
00:01:07.240 [42/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:01:07.240 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:01:07.240 [44/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:01:07.240 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:01:07.240 [46/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:01:07.240 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:01:07.240 [48/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:01:07.240 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:01:07.240 [50/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:01:07.240 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:01:07.240 [52/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:01:07.240 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:01:07.508 [54/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:01:07.508 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:01:07.508 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:01:07.508 [57/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:01:07.508 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:01:07.508 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:01:07.508 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:01:07.508 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:01:07.508 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:01:07.508 [63/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:01:07.772 [64/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:01:07.772 [65/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:01:07.772 [66/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:01:07.772 [67/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:01:07.772 [68/268] Linking static target lib/librte_pci.a
00:01:07.772 [69/268] Linking target lib/librte_log.so.24.1
00:01:08.039 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:01:08.039 [71/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:01:08.039 [72/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:01:08.039 [73/268] Linking static target lib/net/libnet_crc_avx512_lib.a
00:01:08.039 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:01:08.039 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:01:08.039 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:01:08.039 [77/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:01:08.297 [78/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:01:08.297 [79/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:01:08.297 [80/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:01:08.297 [81/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols
00:01:08.297 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:01:08.297 [83/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:01:08.297 [84/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:01:08.297 [85/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:01:08.297 [86/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:01:08.297 [87/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:01:08.297 [88/268] Linking static target lib/librte_ring.a
00:01:08.297 [89/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:01:08.297 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:01:08.297 [91/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:01:08.297 [92/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:01:08.297 [93/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:01:08.297 [94/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:01:08.297 [95/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:01:08.297 [96/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:01:08.297 [97/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:01:08.297 [98/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:01:08.297 [99/268] Linking target lib/librte_kvargs.so.24.1
00:01:08.297 [100/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:01:08.297 [101/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:01:08.297 [102/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:01:08.297 [103/268] Linking static target lib/librte_meter.a
00:01:08.297 [104/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:01:08.298 [105/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:01:08.298 [106/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:01:08.566 [107/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:01:08.566 [108/268] Linking target lib/librte_telemetry.so.24.1
00:01:08.566 [109/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:01:08.566 [110/268] Linking static target lib/librte_mempool.a
00:01:08.566 [111/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:01:08.566 [112/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:01:08.566 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:01:08.566 [114/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:01:08.566 [115/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:01:08.566 [116/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:01:08.566 [117/268] Linking static target lib/librte_rcu.a
00:01:08.566 [118/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:01:08.566 [119/268] Linking static target lib/librte_eal.a
00:01:08.566 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:01:08.566 [121/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:01:08.566 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:01:08.566 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:01:08.566 [124/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:01:08.566 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:01:08.566 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:01:08.566 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:01:08.828 [128/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols
00:01:08.828 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:01:08.828 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:01:08.828 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:01:08.828 [132/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols
00:01:08.828 [133/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:01:08.828 [134/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:01:08.828 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:01:08.828 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o
00:01:08.828 [137/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:01:08.828 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:01:09.089 [139/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:01:09.089 [140/268] Linking static target lib/librte_net.a
00:01:09.089 [141/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:01:09.089 [142/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:01:09.089 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:01:09.089 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:01:09.089 [145/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:01:09.089 [146/268] Linking static target lib/librte_cmdline.a
00:01:09.089 [147/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o
00:01:09.089 [148/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:01:09.348 [149/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:01:09.348 [150/268] Compiling C object
lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:09.348 [151/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:09.348 [152/268] Linking static target lib/librte_timer.a 00:01:09.348 [153/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:09.348 [154/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:09.348 [155/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:09.348 [156/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:09.348 [157/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:09.348 [158/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:09.348 [159/268] Linking static target lib/librte_dmadev.a 00:01:09.608 [160/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:09.608 [161/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:09.608 [162/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:09.608 [163/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:09.608 [164/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:09.608 [165/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:09.608 [166/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:09.608 [167/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:09.608 [168/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:09.608 [169/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:09.608 [170/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:09.866 [171/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:09.866 [172/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:09.866 [173/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:09.866 [174/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:09.866 [175/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:09.866 [176/268] Linking static target lib/librte_hash.a 00:01:09.866 [177/268] Linking static target lib/librte_compressdev.a 00:01:09.866 [178/268] Linking static target lib/librte_power.a 00:01:09.866 [179/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:09.866 [180/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:09.866 [181/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:09.866 [182/268] Linking static target lib/librte_mbuf.a 00:01:09.866 [183/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:09.866 [184/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:09.866 [185/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:09.866 [186/268] Linking static target lib/librte_reorder.a 00:01:09.866 [187/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:09.866 [188/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:09.866 [189/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:10.123 [190/268] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:10.123 [191/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:10.123 [192/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:10.123 [193/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:10.124 [194/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:10.124 [195/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:10.124 [196/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:10.124 [197/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:10.124 [198/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:10.124 [199/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:10.382 [200/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:10.382 [201/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:10.382 [202/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:10.382 [203/268] Linking static target drivers/librte_bus_vdev.a 00:01:10.382 [204/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:10.382 [205/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:10.382 [206/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:10.382 [207/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:10.382 [208/268] Linking static target drivers/librte_bus_pci.a 00:01:10.382 [209/268] Linking static target lib/librte_security.a 00:01:10.382 [210/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:10.382 [211/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:10.382 [212/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:10.382 [213/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:10.382 [214/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:10.382 [215/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:10.382 [216/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:10.640 [217/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:10.640 [218/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:10.640 [219/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:10.640 [220/268] Linking static target drivers/librte_mempool_ring.a 00:01:10.640 [221/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:10.640 [222/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:10.640 [223/268] Linking static target lib/librte_cryptodev.a 00:01:10.640 [224/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:10.640 [225/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:10.640 [226/268] Linking static target lib/librte_ethdev.a 00:01:12.012 [227/268] Generating lib/cryptodev.sym_chk 
with a custom command (wrapped by meson to capture output) 00:01:12.946 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:14.876 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:14.876 [230/268] Linking target lib/librte_eal.so.24.1 00:01:14.876 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:14.876 [232/268] Linking target lib/librte_ring.so.24.1 00:01:14.876 [233/268] Linking target lib/librte_pci.so.24.1 00:01:14.876 [234/268] Linking target lib/librte_dmadev.so.24.1 00:01:14.876 [235/268] Linking target lib/librte_timer.so.24.1 00:01:14.876 [236/268] Linking target lib/librte_meter.so.24.1 00:01:14.876 [237/268] Linking target drivers/librte_bus_vdev.so.24.1 00:01:14.876 [238/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:15.134 [239/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:15.134 [240/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:15.134 [241/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:15.134 [242/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:15.134 [243/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:15.134 [244/268] Linking target lib/librte_rcu.so.24.1 00:01:15.134 [245/268] Linking target lib/librte_mempool.so.24.1 00:01:15.134 [246/268] Linking target drivers/librte_bus_pci.so.24.1 00:01:15.392 [247/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:01:15.392 [248/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:01:15.392 [249/268] Linking target drivers/librte_mempool_ring.so.24.1 00:01:15.392 [250/268] Linking target lib/librte_mbuf.so.24.1 00:01:15.392 [251/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:01:15.392 [252/268] Linking target lib/librte_reorder.so.24.1 00:01:15.392 [253/268] Linking target lib/librte_net.so.24.1 00:01:15.392 [254/268] Linking target lib/librte_compressdev.so.24.1 00:01:15.392 [255/268] Linking target lib/librte_cryptodev.so.24.1 00:01:15.650 [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:01:15.650 [257/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:01:15.651 [258/268] Linking target lib/librte_security.so.24.1 00:01:15.651 [259/268] Linking target lib/librte_hash.so.24.1 00:01:15.651 [260/268] Linking target lib/librte_cmdline.so.24.1 00:01:15.651 [261/268] Linking target lib/librte_ethdev.so.24.1 00:01:15.651 [262/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:01:15.909 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:01:15.909 [264/268] Linking target lib/librte_power.so.24.1 00:01:18.435 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:18.435 [266/268] Linking static target lib/librte_vhost.a 00:01:19.369 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:19.369 [268/268] Linking target lib/librte_vhost.so.24.1 00:01:19.369 INFO: autodetecting backend as ninja 00:01:19.369 INFO: calculating backend command to run: /usr/local/bin/ninja -C 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 48 00:01:20.304 CC lib/log/log.o 00:01:20.304 CC lib/log/log_flags.o 00:01:20.304 CC lib/log/log_deprecated.o 00:01:20.304 CC lib/ut/ut.o 00:01:20.304 CC lib/ut_mock/mock.o 00:01:20.304 LIB libspdk_ut_mock.a 00:01:20.304 LIB libspdk_log.a 00:01:20.304 SO libspdk_ut_mock.so.6.0 00:01:20.304 LIB libspdk_ut.a 00:01:20.304 SO libspdk_log.so.7.0 00:01:20.304 SO libspdk_ut.so.2.0 00:01:20.304 SYMLINK libspdk_ut_mock.so 00:01:20.563 SYMLINK libspdk_ut.so 00:01:20.563 SYMLINK libspdk_log.so 00:01:20.563 CC lib/ioat/ioat.o 00:01:20.563 CXX lib/trace_parser/trace.o 00:01:20.563 CC lib/dma/dma.o 00:01:20.563 CC lib/util/base64.o 00:01:20.563 CC lib/util/bit_array.o 00:01:20.563 CC lib/util/cpuset.o 00:01:20.563 CC lib/util/crc16.o 00:01:20.563 CC lib/util/crc32.o 00:01:20.563 CC lib/util/crc32c.o 00:01:20.563 CC lib/util/crc32_ieee.o 00:01:20.563 CC lib/util/crc64.o 00:01:20.563 CC lib/util/dif.o 00:01:20.563 CC lib/util/fd.o 00:01:20.563 CC lib/util/file.o 00:01:20.563 CC lib/util/hexlify.o 00:01:20.563 CC lib/util/iov.o 00:01:20.563 CC lib/util/math.o 00:01:20.563 CC lib/util/pipe.o 00:01:20.563 CC lib/util/strerror_tls.o 00:01:20.563 CC lib/util/string.o 00:01:20.563 CC lib/util/uuid.o 00:01:20.563 CC lib/util/fd_group.o 00:01:20.563 CC lib/util/zipf.o 00:01:20.563 CC lib/util/xor.o 00:01:20.821 CC lib/vfio_user/host/vfio_user_pci.o 00:01:20.821 CC lib/vfio_user/host/vfio_user.o 00:01:20.821 LIB libspdk_dma.a 00:01:20.821 SO libspdk_dma.so.4.0 00:01:20.821 SYMLINK libspdk_dma.so 00:01:20.821 LIB libspdk_ioat.a 00:01:20.821 SO libspdk_ioat.so.7.0 00:01:21.079 SYMLINK libspdk_ioat.so 00:01:21.079 LIB libspdk_vfio_user.a 00:01:21.079 SO libspdk_vfio_user.so.5.0 00:01:21.079 SYMLINK libspdk_vfio_user.so 00:01:21.079 LIB libspdk_util.a 00:01:21.337 SO libspdk_util.so.9.0 00:01:21.337 SYMLINK libspdk_util.so 00:01:21.595 CC lib/rdma/common.o 00:01:21.595 CC lib/vmd/vmd.o 00:01:21.595 CC lib/json/json_parse.o 00:01:21.595 CC lib/idxd/idxd.o 00:01:21.595 CC lib/env_dpdk/env.o 00:01:21.595 CC lib/json/json_util.o 00:01:21.595 CC lib/conf/conf.o 00:01:21.595 CC lib/rdma/rdma_verbs.o 00:01:21.595 CC lib/vmd/led.o 00:01:21.595 CC lib/idxd/idxd_user.o 00:01:21.595 CC lib/json/json_write.o 00:01:21.595 CC lib/env_dpdk/memory.o 00:01:21.595 CC lib/env_dpdk/pci.o 00:01:21.595 CC lib/env_dpdk/init.o 00:01:21.595 CC lib/env_dpdk/threads.o 00:01:21.595 CC lib/env_dpdk/pci_ioat.o 00:01:21.595 CC lib/env_dpdk/pci_virtio.o 00:01:21.595 CC lib/env_dpdk/pci_vmd.o 00:01:21.595 CC lib/env_dpdk/pci_idxd.o 00:01:21.595 CC lib/env_dpdk/pci_event.o 00:01:21.595 CC lib/env_dpdk/sigbus_handler.o 00:01:21.595 CC lib/env_dpdk/pci_dpdk.o 00:01:21.595 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:21.595 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:21.853 LIB libspdk_trace_parser.a 00:01:21.853 SO libspdk_trace_parser.so.5.0 00:01:21.853 LIB libspdk_conf.a 00:01:21.853 SO libspdk_conf.so.6.0 00:01:21.853 SYMLINK libspdk_trace_parser.so 00:01:21.853 LIB libspdk_rdma.a 00:01:21.853 LIB libspdk_json.a 00:01:21.853 SYMLINK libspdk_conf.so 00:01:21.853 SO libspdk_rdma.so.6.0 00:01:21.853 SO libspdk_json.so.6.0 00:01:21.853 SYMLINK libspdk_rdma.so 00:01:21.853 SYMLINK libspdk_json.so 00:01:22.112 CC lib/jsonrpc/jsonrpc_server.o 00:01:22.112 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:22.112 CC lib/jsonrpc/jsonrpc_client.o 00:01:22.112 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:22.112 LIB libspdk_idxd.a 00:01:22.112 SO libspdk_idxd.so.12.0 00:01:22.112 SYMLINK libspdk_idxd.so 00:01:22.112 
LIB libspdk_vmd.a 00:01:22.370 SO libspdk_vmd.so.6.0 00:01:22.370 SYMLINK libspdk_vmd.so 00:01:22.370 LIB libspdk_jsonrpc.a 00:01:22.370 SO libspdk_jsonrpc.so.6.0 00:01:22.370 SYMLINK libspdk_jsonrpc.so 00:01:22.628 CC lib/rpc/rpc.o 00:01:22.887 LIB libspdk_rpc.a 00:01:22.887 SO libspdk_rpc.so.6.0 00:01:22.887 SYMLINK libspdk_rpc.so 00:01:23.145 CC lib/keyring/keyring.o 00:01:23.145 CC lib/trace/trace.o 00:01:23.145 CC lib/keyring/keyring_rpc.o 00:01:23.145 CC lib/trace/trace_flags.o 00:01:23.145 CC lib/trace/trace_rpc.o 00:01:23.145 CC lib/notify/notify.o 00:01:23.145 CC lib/notify/notify_rpc.o 00:01:23.145 LIB libspdk_notify.a 00:01:23.404 SO libspdk_notify.so.6.0 00:01:23.404 LIB libspdk_keyring.a 00:01:23.404 LIB libspdk_trace.a 00:01:23.404 SYMLINK libspdk_notify.so 00:01:23.404 SO libspdk_keyring.so.1.0 00:01:23.404 SO libspdk_trace.so.10.0 00:01:23.404 SYMLINK libspdk_keyring.so 00:01:23.404 SYMLINK libspdk_trace.so 00:01:23.662 LIB libspdk_env_dpdk.a 00:01:23.662 CC lib/sock/sock.o 00:01:23.662 CC lib/sock/sock_rpc.o 00:01:23.662 CC lib/thread/thread.o 00:01:23.662 CC lib/thread/iobuf.o 00:01:23.662 SO libspdk_env_dpdk.so.14.0 00:01:23.921 SYMLINK libspdk_env_dpdk.so 00:01:23.921 LIB libspdk_sock.a 00:01:23.921 SO libspdk_sock.so.9.0 00:01:24.180 SYMLINK libspdk_sock.so 00:01:24.180 CC lib/nvme/nvme_ctrlr_cmd.o 00:01:24.180 CC lib/nvme/nvme_ctrlr.o 00:01:24.180 CC lib/nvme/nvme_fabric.o 00:01:24.180 CC lib/nvme/nvme_ns_cmd.o 00:01:24.180 CC lib/nvme/nvme_ns.o 00:01:24.180 CC lib/nvme/nvme_pcie_common.o 00:01:24.180 CC lib/nvme/nvme_pcie.o 00:01:24.180 CC lib/nvme/nvme_qpair.o 00:01:24.180 CC lib/nvme/nvme.o 00:01:24.180 CC lib/nvme/nvme_quirks.o 00:01:24.180 CC lib/nvme/nvme_transport.o 00:01:24.180 CC lib/nvme/nvme_discovery.o 00:01:24.180 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:01:24.180 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:01:24.180 CC lib/nvme/nvme_tcp.o 00:01:24.180 CC lib/nvme/nvme_opal.o 00:01:24.180 CC lib/nvme/nvme_io_msg.o 00:01:24.180 CC lib/nvme/nvme_poll_group.o 00:01:24.180 CC lib/nvme/nvme_zns.o 00:01:24.180 CC lib/nvme/nvme_stubs.o 00:01:24.180 CC lib/nvme/nvme_auth.o 00:01:24.180 CC lib/nvme/nvme_cuse.o 00:01:24.180 CC lib/nvme/nvme_vfio_user.o 00:01:24.180 CC lib/nvme/nvme_rdma.o 00:01:25.116 LIB libspdk_thread.a 00:01:25.116 SO libspdk_thread.so.10.0 00:01:25.374 SYMLINK libspdk_thread.so 00:01:25.374 CC lib/blob/blobstore.o 00:01:25.374 CC lib/init/json_config.o 00:01:25.374 CC lib/virtio/virtio.o 00:01:25.374 CC lib/accel/accel.o 00:01:25.374 CC lib/vfu_tgt/tgt_endpoint.o 00:01:25.374 CC lib/blob/request.o 00:01:25.374 CC lib/init/subsystem.o 00:01:25.374 CC lib/virtio/virtio_vhost_user.o 00:01:25.374 CC lib/vfu_tgt/tgt_rpc.o 00:01:25.374 CC lib/blob/zeroes.o 00:01:25.374 CC lib/accel/accel_rpc.o 00:01:25.374 CC lib/init/subsystem_rpc.o 00:01:25.374 CC lib/virtio/virtio_vfio_user.o 00:01:25.374 CC lib/blob/blob_bs_dev.o 00:01:25.374 CC lib/init/rpc.o 00:01:25.374 CC lib/accel/accel_sw.o 00:01:25.374 CC lib/virtio/virtio_pci.o 00:01:25.643 LIB libspdk_init.a 00:01:25.643 SO libspdk_init.so.5.0 00:01:25.905 LIB libspdk_virtio.a 00:01:25.905 LIB libspdk_vfu_tgt.a 00:01:25.905 SYMLINK libspdk_init.so 00:01:25.905 SO libspdk_vfu_tgt.so.3.0 00:01:25.905 SO libspdk_virtio.so.7.0 00:01:25.905 SYMLINK libspdk_vfu_tgt.so 00:01:25.905 SYMLINK libspdk_virtio.so 00:01:25.905 CC lib/event/app.o 00:01:25.905 CC lib/event/reactor.o 00:01:25.905 CC lib/event/log_rpc.o 00:01:25.905 CC lib/event/app_rpc.o 00:01:25.905 CC lib/event/scheduler_static.o 00:01:26.471 LIB libspdk_event.a 
00:01:26.471 SO libspdk_event.so.13.0 00:01:26.471 SYMLINK libspdk_event.so 00:01:26.471 LIB libspdk_accel.a 00:01:26.471 SO libspdk_accel.so.15.0 00:01:26.471 LIB libspdk_nvme.a 00:01:26.471 SYMLINK libspdk_accel.so 00:01:26.728 SO libspdk_nvme.so.13.0 00:01:26.728 CC lib/bdev/bdev.o 00:01:26.728 CC lib/bdev/bdev_rpc.o 00:01:26.728 CC lib/bdev/bdev_zone.o 00:01:26.728 CC lib/bdev/part.o 00:01:26.728 CC lib/bdev/scsi_nvme.o 00:01:26.986 SYMLINK libspdk_nvme.so 00:01:28.360 LIB libspdk_blob.a 00:01:28.360 SO libspdk_blob.so.11.0 00:01:28.360 SYMLINK libspdk_blob.so 00:01:28.619 CC lib/lvol/lvol.o 00:01:28.619 CC lib/blobfs/blobfs.o 00:01:28.619 CC lib/blobfs/tree.o 00:01:29.185 LIB libspdk_bdev.a 00:01:29.443 SO libspdk_bdev.so.15.0 00:01:29.443 LIB libspdk_blobfs.a 00:01:29.443 SO libspdk_blobfs.so.10.0 00:01:29.443 LIB libspdk_lvol.a 00:01:29.443 SYMLINK libspdk_bdev.so 00:01:29.443 SO libspdk_lvol.so.10.0 00:01:29.443 SYMLINK libspdk_blobfs.so 00:01:29.443 SYMLINK libspdk_lvol.so 00:01:29.710 CC lib/nbd/nbd.o 00:01:29.710 CC lib/nbd/nbd_rpc.o 00:01:29.710 CC lib/ftl/ftl_core.o 00:01:29.710 CC lib/ublk/ublk.o 00:01:29.710 CC lib/nvmf/ctrlr.o 00:01:29.710 CC lib/scsi/dev.o 00:01:29.710 CC lib/ftl/ftl_init.o 00:01:29.710 CC lib/scsi/lun.o 00:01:29.710 CC lib/nvmf/ctrlr_discovery.o 00:01:29.710 CC lib/ublk/ublk_rpc.o 00:01:29.710 CC lib/scsi/port.o 00:01:29.710 CC lib/ftl/ftl_layout.o 00:01:29.710 CC lib/nvmf/ctrlr_bdev.o 00:01:29.710 CC lib/scsi/scsi.o 00:01:29.710 CC lib/nvmf/subsystem.o 00:01:29.710 CC lib/ftl/ftl_debug.o 00:01:29.710 CC lib/nvmf/nvmf.o 00:01:29.710 CC lib/scsi/scsi_bdev.o 00:01:29.710 CC lib/ftl/ftl_io.o 00:01:29.710 CC lib/scsi/scsi_pr.o 00:01:29.710 CC lib/nvmf/nvmf_rpc.o 00:01:29.710 CC lib/ftl/ftl_sb.o 00:01:29.710 CC lib/scsi/scsi_rpc.o 00:01:29.710 CC lib/ftl/ftl_l2p.o 00:01:29.710 CC lib/ftl/ftl_l2p_flat.o 00:01:29.710 CC lib/nvmf/transport.o 00:01:29.710 CC lib/scsi/task.o 00:01:29.710 CC lib/nvmf/tcp.o 00:01:29.710 CC lib/ftl/ftl_nv_cache.o 00:01:29.710 CC lib/nvmf/vfio_user.o 00:01:29.710 CC lib/ftl/ftl_band.o 00:01:29.710 CC lib/nvmf/rdma.o 00:01:29.710 CC lib/ftl/ftl_band_ops.o 00:01:29.710 CC lib/ftl/ftl_writer.o 00:01:29.710 CC lib/ftl/ftl_rq.o 00:01:29.710 CC lib/ftl/ftl_l2p_cache.o 00:01:29.710 CC lib/ftl/ftl_reloc.o 00:01:29.710 CC lib/ftl/ftl_p2l.o 00:01:29.710 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:01:29.710 CC lib/ftl/mngt/ftl_mngt.o 00:01:29.710 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:01:29.710 CC lib/ftl/mngt/ftl_mngt_startup.o 00:01:29.710 CC lib/ftl/mngt/ftl_mngt_md.o 00:01:29.710 CC lib/ftl/mngt/ftl_mngt_misc.o 00:01:29.710 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:01:29.710 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:01:29.710 CC lib/ftl/mngt/ftl_mngt_band.o 00:01:29.710 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:01:29.969 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:01:29.969 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:01:29.969 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:01:29.969 CC lib/ftl/utils/ftl_conf.o 00:01:29.969 CC lib/ftl/utils/ftl_md.o 00:01:29.970 CC lib/ftl/utils/ftl_mempool.o 00:01:29.970 CC lib/ftl/utils/ftl_bitmap.o 00:01:29.970 CC lib/ftl/utils/ftl_property.o 00:01:29.970 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:01:29.970 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:01:29.970 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:01:29.970 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:01:29.970 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:01:29.970 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:01:29.970 CC lib/ftl/upgrade/ftl_sb_v3.o 00:01:29.970 CC lib/ftl/upgrade/ftl_sb_v5.o 00:01:29.970 CC 
lib/ftl/nvc/ftl_nvc_dev.o 00:01:29.970 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:01:29.970 CC lib/ftl/base/ftl_base_dev.o 00:01:30.231 CC lib/ftl/base/ftl_base_bdev.o 00:01:30.231 CC lib/ftl/ftl_trace.o 00:01:30.231 LIB libspdk_nbd.a 00:01:30.489 SO libspdk_nbd.so.7.0 00:01:30.489 SYMLINK libspdk_nbd.so 00:01:30.489 LIB libspdk_scsi.a 00:01:30.489 SO libspdk_scsi.so.9.0 00:01:30.489 SYMLINK libspdk_scsi.so 00:01:30.747 LIB libspdk_ublk.a 00:01:30.747 SO libspdk_ublk.so.3.0 00:01:30.747 SYMLINK libspdk_ublk.so 00:01:30.747 CC lib/iscsi/conn.o 00:01:30.747 CC lib/vhost/vhost.o 00:01:30.747 CC lib/iscsi/init_grp.o 00:01:30.747 CC lib/vhost/vhost_rpc.o 00:01:30.747 CC lib/iscsi/iscsi.o 00:01:30.747 CC lib/vhost/vhost_scsi.o 00:01:30.747 CC lib/vhost/vhost_blk.o 00:01:30.747 CC lib/iscsi/md5.o 00:01:30.747 CC lib/vhost/rte_vhost_user.o 00:01:30.747 CC lib/iscsi/param.o 00:01:30.747 CC lib/iscsi/portal_grp.o 00:01:30.747 CC lib/iscsi/tgt_node.o 00:01:30.747 CC lib/iscsi/iscsi_subsystem.o 00:01:30.747 CC lib/iscsi/iscsi_rpc.o 00:01:30.747 CC lib/iscsi/task.o 00:01:31.006 LIB libspdk_ftl.a 00:01:31.006 SO libspdk_ftl.so.9.0 00:01:31.572 SYMLINK libspdk_ftl.so 00:01:32.140 LIB libspdk_vhost.a 00:01:32.140 SO libspdk_vhost.so.8.0 00:01:32.140 LIB libspdk_nvmf.a 00:01:32.140 SYMLINK libspdk_vhost.so 00:01:32.140 LIB libspdk_iscsi.a 00:01:32.140 SO libspdk_nvmf.so.18.0 00:01:32.140 SO libspdk_iscsi.so.8.0 00:01:32.398 SYMLINK libspdk_iscsi.so 00:01:32.398 SYMLINK libspdk_nvmf.so 00:01:32.656 CC module/env_dpdk/env_dpdk_rpc.o 00:01:32.656 CC module/vfu_device/vfu_virtio.o 00:01:32.656 CC module/vfu_device/vfu_virtio_blk.o 00:01:32.656 CC module/vfu_device/vfu_virtio_scsi.o 00:01:32.657 CC module/vfu_device/vfu_virtio_rpc.o 00:01:32.657 CC module/accel/error/accel_error.o 00:01:32.657 CC module/accel/ioat/accel_ioat.o 00:01:32.657 CC module/blob/bdev/blob_bdev.o 00:01:32.657 CC module/accel/iaa/accel_iaa.o 00:01:32.657 CC module/scheduler/dynamic/scheduler_dynamic.o 00:01:32.657 CC module/accel/error/accel_error_rpc.o 00:01:32.657 CC module/accel/ioat/accel_ioat_rpc.o 00:01:32.657 CC module/sock/posix/posix.o 00:01:32.657 CC module/accel/iaa/accel_iaa_rpc.o 00:01:32.657 CC module/keyring/file/keyring.o 00:01:32.657 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:01:32.657 CC module/accel/dsa/accel_dsa.o 00:01:32.657 CC module/keyring/file/keyring_rpc.o 00:01:32.657 CC module/accel/dsa/accel_dsa_rpc.o 00:01:32.657 CC module/scheduler/gscheduler/gscheduler.o 00:01:32.914 LIB libspdk_env_dpdk_rpc.a 00:01:32.914 SO libspdk_env_dpdk_rpc.so.6.0 00:01:32.915 SYMLINK libspdk_env_dpdk_rpc.so 00:01:32.915 LIB libspdk_keyring_file.a 00:01:32.915 LIB libspdk_scheduler_gscheduler.a 00:01:32.915 LIB libspdk_scheduler_dpdk_governor.a 00:01:32.915 SO libspdk_scheduler_gscheduler.so.4.0 00:01:32.915 SO libspdk_keyring_file.so.1.0 00:01:32.915 LIB libspdk_accel_error.a 00:01:32.915 SO libspdk_scheduler_dpdk_governor.so.4.0 00:01:32.915 LIB libspdk_accel_ioat.a 00:01:32.915 LIB libspdk_scheduler_dynamic.a 00:01:32.915 LIB libspdk_accel_iaa.a 00:01:32.915 SO libspdk_accel_error.so.2.0 00:01:32.915 SO libspdk_scheduler_dynamic.so.4.0 00:01:32.915 SO libspdk_accel_ioat.so.6.0 00:01:32.915 SYMLINK libspdk_scheduler_gscheduler.so 00:01:32.915 SYMLINK libspdk_scheduler_dpdk_governor.so 00:01:32.915 SYMLINK libspdk_keyring_file.so 00:01:32.915 SO libspdk_accel_iaa.so.3.0 00:01:32.915 LIB libspdk_accel_dsa.a 00:01:32.915 LIB libspdk_blob_bdev.a 00:01:32.915 SYMLINK libspdk_accel_error.so 00:01:32.915 SYMLINK 
libspdk_scheduler_dynamic.so 00:01:32.915 SO libspdk_accel_dsa.so.5.0 00:01:32.915 SYMLINK libspdk_accel_ioat.so 00:01:33.173 SO libspdk_blob_bdev.so.11.0 00:01:33.173 SYMLINK libspdk_accel_iaa.so 00:01:33.173 SYMLINK libspdk_accel_dsa.so 00:01:33.173 SYMLINK libspdk_blob_bdev.so 00:01:33.433 LIB libspdk_vfu_device.a 00:01:33.433 SO libspdk_vfu_device.so.3.0 00:01:33.433 CC module/bdev/delay/vbdev_delay.o 00:01:33.433 CC module/bdev/delay/vbdev_delay_rpc.o 00:01:33.433 CC module/bdev/nvme/bdev_nvme.o 00:01:33.433 CC module/bdev/nvme/bdev_nvme_rpc.o 00:01:33.433 CC module/bdev/ftl/bdev_ftl.o 00:01:33.433 CC module/bdev/gpt/gpt.o 00:01:33.433 CC module/bdev/nvme/nvme_rpc.o 00:01:33.433 CC module/bdev/error/vbdev_error.o 00:01:33.433 CC module/bdev/ftl/bdev_ftl_rpc.o 00:01:33.433 CC module/bdev/nvme/bdev_mdns_client.o 00:01:33.433 CC module/bdev/error/vbdev_error_rpc.o 00:01:33.433 CC module/bdev/lvol/vbdev_lvol.o 00:01:33.433 CC module/bdev/iscsi/bdev_iscsi.o 00:01:33.433 CC module/bdev/gpt/vbdev_gpt.o 00:01:33.433 CC module/bdev/virtio/bdev_virtio_scsi.o 00:01:33.433 CC module/bdev/raid/bdev_raid.o 00:01:33.433 CC module/bdev/split/vbdev_split.o 00:01:33.433 CC module/bdev/nvme/vbdev_opal.o 00:01:33.433 CC module/bdev/virtio/bdev_virtio_blk.o 00:01:33.433 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:01:33.433 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:01:33.433 CC module/bdev/nvme/vbdev_opal_rpc.o 00:01:33.433 CC module/bdev/split/vbdev_split_rpc.o 00:01:33.433 CC module/bdev/virtio/bdev_virtio_rpc.o 00:01:33.433 CC module/bdev/raid/bdev_raid_rpc.o 00:01:33.433 CC module/bdev/malloc/bdev_malloc.o 00:01:33.433 CC module/bdev/aio/bdev_aio.o 00:01:33.433 CC module/bdev/zone_block/vbdev_zone_block.o 00:01:33.433 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:01:33.433 CC module/bdev/passthru/vbdev_passthru.o 00:01:33.433 CC module/blobfs/bdev/blobfs_bdev.o 00:01:33.433 CC module/bdev/raid/bdev_raid_sb.o 00:01:33.433 CC module/bdev/malloc/bdev_malloc_rpc.o 00:01:33.433 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:01:33.433 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:01:33.433 CC module/bdev/aio/bdev_aio_rpc.o 00:01:33.433 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:01:33.433 CC module/bdev/raid/raid0.o 00:01:33.433 CC module/bdev/raid/raid1.o 00:01:33.433 CC module/bdev/raid/concat.o 00:01:33.433 CC module/bdev/null/bdev_null.o 00:01:33.433 CC module/bdev/null/bdev_null_rpc.o 00:01:33.433 SYMLINK libspdk_vfu_device.so 00:01:33.692 LIB libspdk_sock_posix.a 00:01:33.692 SO libspdk_sock_posix.so.6.0 00:01:33.692 LIB libspdk_bdev_gpt.a 00:01:33.692 LIB libspdk_blobfs_bdev.a 00:01:33.692 LIB libspdk_bdev_split.a 00:01:33.692 SO libspdk_bdev_gpt.so.6.0 00:01:33.692 SO libspdk_blobfs_bdev.so.6.0 00:01:33.692 SO libspdk_bdev_split.so.6.0 00:01:33.692 LIB libspdk_bdev_ftl.a 00:01:33.692 SYMLINK libspdk_sock_posix.so 00:01:33.692 SO libspdk_bdev_ftl.so.6.0 00:01:33.692 SYMLINK libspdk_bdev_gpt.so 00:01:33.951 SYMLINK libspdk_blobfs_bdev.so 00:01:33.951 SYMLINK libspdk_bdev_split.so 00:01:33.951 LIB libspdk_bdev_null.a 00:01:33.951 LIB libspdk_bdev_error.a 00:01:33.951 LIB libspdk_bdev_passthru.a 00:01:33.951 SYMLINK libspdk_bdev_ftl.so 00:01:33.951 SO libspdk_bdev_null.so.6.0 00:01:33.951 LIB libspdk_bdev_zone_block.a 00:01:33.951 SO libspdk_bdev_error.so.6.0 00:01:33.951 SO libspdk_bdev_passthru.so.6.0 00:01:33.951 LIB libspdk_bdev_iscsi.a 00:01:33.951 LIB libspdk_bdev_aio.a 00:01:33.951 SO libspdk_bdev_zone_block.so.6.0 00:01:33.951 LIB libspdk_bdev_malloc.a 00:01:33.951 SO libspdk_bdev_iscsi.so.6.0 
00:01:33.951 SYMLINK libspdk_bdev_null.so 00:01:33.951 SO libspdk_bdev_aio.so.6.0 00:01:33.951 SYMLINK libspdk_bdev_error.so 00:01:33.951 SYMLINK libspdk_bdev_passthru.so 00:01:33.951 LIB libspdk_bdev_delay.a 00:01:33.951 SO libspdk_bdev_malloc.so.6.0 00:01:33.951 SYMLINK libspdk_bdev_zone_block.so 00:01:33.951 SO libspdk_bdev_delay.so.6.0 00:01:33.951 SYMLINK libspdk_bdev_iscsi.so 00:01:33.951 SYMLINK libspdk_bdev_aio.so 00:01:33.951 SYMLINK libspdk_bdev_malloc.so 00:01:33.951 SYMLINK libspdk_bdev_delay.so 00:01:33.951 LIB libspdk_bdev_lvol.a 00:01:34.209 LIB libspdk_bdev_virtio.a 00:01:34.209 SO libspdk_bdev_lvol.so.6.0 00:01:34.209 SO libspdk_bdev_virtio.so.6.0 00:01:34.209 SYMLINK libspdk_bdev_lvol.so 00:01:34.209 SYMLINK libspdk_bdev_virtio.so 00:01:34.467 LIB libspdk_bdev_raid.a 00:01:34.467 SO libspdk_bdev_raid.so.6.0 00:01:34.726 SYMLINK libspdk_bdev_raid.so 00:01:35.662 LIB libspdk_bdev_nvme.a 00:01:35.662 SO libspdk_bdev_nvme.so.7.0 00:01:35.920 SYMLINK libspdk_bdev_nvme.so 00:01:36.178 CC module/event/subsystems/sock/sock.o 00:01:36.178 CC module/event/subsystems/iobuf/iobuf.o 00:01:36.178 CC module/event/subsystems/vmd/vmd.o 00:01:36.178 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:01:36.178 CC module/event/subsystems/vmd/vmd_rpc.o 00:01:36.178 CC module/event/subsystems/scheduler/scheduler.o 00:01:36.178 CC module/event/subsystems/keyring/keyring.o 00:01:36.178 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:01:36.178 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:01:36.436 LIB libspdk_event_sock.a 00:01:36.436 LIB libspdk_event_vhost_blk.a 00:01:36.436 LIB libspdk_event_keyring.a 00:01:36.436 LIB libspdk_event_scheduler.a 00:01:36.436 LIB libspdk_event_vmd.a 00:01:36.436 LIB libspdk_event_vfu_tgt.a 00:01:36.436 SO libspdk_event_keyring.so.1.0 00:01:36.436 SO libspdk_event_sock.so.5.0 00:01:36.436 SO libspdk_event_vhost_blk.so.3.0 00:01:36.436 SO libspdk_event_scheduler.so.4.0 00:01:36.436 LIB libspdk_event_iobuf.a 00:01:36.436 SO libspdk_event_vmd.so.6.0 00:01:36.436 SO libspdk_event_vfu_tgt.so.3.0 00:01:36.436 SO libspdk_event_iobuf.so.3.0 00:01:36.436 SYMLINK libspdk_event_sock.so 00:01:36.436 SYMLINK libspdk_event_keyring.so 00:01:36.436 SYMLINK libspdk_event_vhost_blk.so 00:01:36.436 SYMLINK libspdk_event_scheduler.so 00:01:36.436 SYMLINK libspdk_event_vfu_tgt.so 00:01:36.436 SYMLINK libspdk_event_vmd.so 00:01:36.436 SYMLINK libspdk_event_iobuf.so 00:01:36.695 CC module/event/subsystems/accel/accel.o 00:01:36.695 LIB libspdk_event_accel.a 00:01:36.695 SO libspdk_event_accel.so.6.0 00:01:36.695 SYMLINK libspdk_event_accel.so 00:01:36.953 CC module/event/subsystems/bdev/bdev.o 00:01:37.211 LIB libspdk_event_bdev.a 00:01:37.211 SO libspdk_event_bdev.so.6.0 00:01:37.211 SYMLINK libspdk_event_bdev.so 00:01:37.469 CC module/event/subsystems/scsi/scsi.o 00:01:37.469 CC module/event/subsystems/ublk/ublk.o 00:01:37.469 CC module/event/subsystems/nbd/nbd.o 00:01:37.469 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:01:37.469 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:01:37.469 LIB libspdk_event_nbd.a 00:01:37.469 LIB libspdk_event_ublk.a 00:01:37.469 LIB libspdk_event_scsi.a 00:01:37.469 SO libspdk_event_ublk.so.3.0 00:01:37.469 SO libspdk_event_nbd.so.6.0 00:01:37.469 SO libspdk_event_scsi.so.6.0 00:01:37.727 SYMLINK libspdk_event_nbd.so 00:01:37.727 SYMLINK libspdk_event_ublk.so 00:01:37.727 LIB libspdk_event_nvmf.a 00:01:37.727 SYMLINK libspdk_event_scsi.so 00:01:37.727 SO libspdk_event_nvmf.so.6.0 00:01:37.727 SYMLINK libspdk_event_nvmf.so 00:01:37.727 CC 
module/event/subsystems/iscsi/iscsi.o 00:01:37.727 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:01:37.985 LIB libspdk_event_vhost_scsi.a 00:01:37.985 LIB libspdk_event_iscsi.a 00:01:37.985 SO libspdk_event_vhost_scsi.so.3.0 00:01:37.985 SO libspdk_event_iscsi.so.6.0 00:01:37.985 SYMLINK libspdk_event_vhost_scsi.so 00:01:37.985 SYMLINK libspdk_event_iscsi.so 00:01:38.244 SO libspdk.so.6.0 00:01:38.244 SYMLINK libspdk.so 00:01:38.508 CXX app/trace/trace.o 00:01:38.508 CC app/spdk_nvme_identify/identify.o 00:01:38.508 CC app/spdk_nvme_discover/discovery_aer.o 00:01:38.508 CC app/spdk_top/spdk_top.o 00:01:38.508 CC app/spdk_nvme_perf/perf.o 00:01:38.508 TEST_HEADER include/spdk/accel.h 00:01:38.508 TEST_HEADER include/spdk/accel_module.h 00:01:38.508 CC test/rpc_client/rpc_client_test.o 00:01:38.508 CC app/trace_record/trace_record.o 00:01:38.508 CC app/spdk_lspci/spdk_lspci.o 00:01:38.508 TEST_HEADER include/spdk/assert.h 00:01:38.508 TEST_HEADER include/spdk/barrier.h 00:01:38.508 TEST_HEADER include/spdk/base64.h 00:01:38.508 TEST_HEADER include/spdk/bdev.h 00:01:38.508 TEST_HEADER include/spdk/bdev_module.h 00:01:38.508 TEST_HEADER include/spdk/bdev_zone.h 00:01:38.508 TEST_HEADER include/spdk/bit_array.h 00:01:38.508 TEST_HEADER include/spdk/bit_pool.h 00:01:38.508 TEST_HEADER include/spdk/blob_bdev.h 00:01:38.508 TEST_HEADER include/spdk/blobfs_bdev.h 00:01:38.508 TEST_HEADER include/spdk/blobfs.h 00:01:38.508 TEST_HEADER include/spdk/blob.h 00:01:38.508 TEST_HEADER include/spdk/conf.h 00:01:38.508 TEST_HEADER include/spdk/config.h 00:01:38.508 TEST_HEADER include/spdk/cpuset.h 00:01:38.509 TEST_HEADER include/spdk/crc16.h 00:01:38.509 TEST_HEADER include/spdk/crc32.h 00:01:38.509 TEST_HEADER include/spdk/crc64.h 00:01:38.509 CC examples/interrupt_tgt/interrupt_tgt.o 00:01:38.509 TEST_HEADER include/spdk/dif.h 00:01:38.509 TEST_HEADER include/spdk/dma.h 00:01:38.509 CC app/nvmf_tgt/nvmf_main.o 00:01:38.509 CC app/spdk_dd/spdk_dd.o 00:01:38.509 TEST_HEADER include/spdk/endian.h 00:01:38.509 TEST_HEADER include/spdk/env_dpdk.h 00:01:38.509 TEST_HEADER include/spdk/env.h 00:01:38.509 TEST_HEADER include/spdk/event.h 00:01:38.509 CC app/vhost/vhost.o 00:01:38.509 TEST_HEADER include/spdk/fd_group.h 00:01:38.509 TEST_HEADER include/spdk/fd.h 00:01:38.509 TEST_HEADER include/spdk/file.h 00:01:38.509 CC app/iscsi_tgt/iscsi_tgt.o 00:01:38.509 TEST_HEADER include/spdk/ftl.h 00:01:38.509 TEST_HEADER include/spdk/gpt_spec.h 00:01:38.509 TEST_HEADER include/spdk/hexlify.h 00:01:38.509 TEST_HEADER include/spdk/histogram_data.h 00:01:38.509 TEST_HEADER include/spdk/idxd.h 00:01:38.509 TEST_HEADER include/spdk/idxd_spec.h 00:01:38.509 CC test/app/histogram_perf/histogram_perf.o 00:01:38.509 TEST_HEADER include/spdk/init.h 00:01:38.509 CC test/app/jsoncat/jsoncat.o 00:01:38.509 TEST_HEADER include/spdk/ioat.h 00:01:38.509 CC test/app/stub/stub.o 00:01:38.509 CC examples/ioat/perf/perf.o 00:01:38.509 CC app/spdk_tgt/spdk_tgt.o 00:01:38.509 TEST_HEADER include/spdk/ioat_spec.h 00:01:38.509 TEST_HEADER include/spdk/iscsi_spec.h 00:01:38.509 CC test/event/event_perf/event_perf.o 00:01:38.509 CC examples/accel/perf/accel_perf.o 00:01:38.509 CC examples/nvme/hotplug/hotplug.o 00:01:38.509 CC examples/vmd/led/led.o 00:01:38.509 CC examples/nvme/nvme_manage/nvme_manage.o 00:01:38.509 CC examples/nvme/reconnect/reconnect.o 00:01:38.509 CC examples/vmd/lsvmd/lsvmd.o 00:01:38.509 CC examples/nvme/hello_world/hello_world.o 00:01:38.509 CC examples/sock/hello_world/hello_sock.o 00:01:38.509 CC 
test/thread/poller_perf/poller_perf.o 00:01:38.509 TEST_HEADER include/spdk/json.h 00:01:38.509 TEST_HEADER include/spdk/jsonrpc.h 00:01:38.509 CC examples/nvme/arbitration/arbitration.o 00:01:38.509 TEST_HEADER include/spdk/keyring.h 00:01:38.509 TEST_HEADER include/spdk/keyring_module.h 00:01:38.509 CC examples/nvme/cmb_copy/cmb_copy.o 00:01:38.509 TEST_HEADER include/spdk/likely.h 00:01:38.509 CC test/nvme/aer/aer.o 00:01:38.509 CC examples/idxd/perf/perf.o 00:01:38.509 CC app/fio/nvme/fio_plugin.o 00:01:38.509 TEST_HEADER include/spdk/log.h 00:01:38.509 TEST_HEADER include/spdk/lvol.h 00:01:38.509 CC examples/util/zipf/zipf.o 00:01:38.509 TEST_HEADER include/spdk/memory.h 00:01:38.509 TEST_HEADER include/spdk/mmio.h 00:01:38.509 TEST_HEADER include/spdk/nbd.h 00:01:38.509 TEST_HEADER include/spdk/notify.h 00:01:38.509 TEST_HEADER include/spdk/nvme.h 00:01:38.509 TEST_HEADER include/spdk/nvme_intel.h 00:01:38.509 TEST_HEADER include/spdk/nvme_ocssd.h 00:01:38.509 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:01:38.509 TEST_HEADER include/spdk/nvme_spec.h 00:01:38.509 TEST_HEADER include/spdk/nvme_zns.h 00:01:38.509 CC test/accel/dif/dif.o 00:01:38.509 CC test/blobfs/mkfs/mkfs.o 00:01:38.509 CC examples/blob/hello_world/hello_blob.o 00:01:38.509 TEST_HEADER include/spdk/nvmf_cmd.h 00:01:38.509 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:01:38.509 TEST_HEADER include/spdk/nvmf.h 00:01:38.509 CC test/bdev/bdevio/bdevio.o 00:01:38.509 TEST_HEADER include/spdk/nvmf_spec.h 00:01:38.509 CC examples/bdev/hello_world/hello_bdev.o 00:01:38.509 TEST_HEADER include/spdk/nvmf_transport.h 00:01:38.509 CC examples/bdev/bdevperf/bdevperf.o 00:01:38.509 CC test/dma/test_dma/test_dma.o 00:01:38.509 TEST_HEADER include/spdk/opal.h 00:01:38.509 CC examples/thread/thread/thread_ex.o 00:01:38.509 TEST_HEADER include/spdk/opal_spec.h 00:01:38.771 TEST_HEADER include/spdk/pci_ids.h 00:01:38.771 CC test/app/bdev_svc/bdev_svc.o 00:01:38.771 TEST_HEADER include/spdk/pipe.h 00:01:38.771 CC examples/nvmf/nvmf/nvmf.o 00:01:38.771 TEST_HEADER include/spdk/queue.h 00:01:38.771 TEST_HEADER include/spdk/reduce.h 00:01:38.771 TEST_HEADER include/spdk/rpc.h 00:01:38.771 TEST_HEADER include/spdk/scheduler.h 00:01:38.771 TEST_HEADER include/spdk/scsi.h 00:01:38.771 TEST_HEADER include/spdk/scsi_spec.h 00:01:38.771 TEST_HEADER include/spdk/sock.h 00:01:38.771 TEST_HEADER include/spdk/stdinc.h 00:01:38.771 TEST_HEADER include/spdk/string.h 00:01:38.771 TEST_HEADER include/spdk/thread.h 00:01:38.771 CC test/lvol/esnap/esnap.o 00:01:38.771 TEST_HEADER include/spdk/trace.h 00:01:38.771 TEST_HEADER include/spdk/trace_parser.h 00:01:38.771 CC test/env/mem_callbacks/mem_callbacks.o 00:01:38.771 TEST_HEADER include/spdk/tree.h 00:01:38.771 TEST_HEADER include/spdk/ublk.h 00:01:38.771 LINK spdk_lspci 00:01:38.771 TEST_HEADER include/spdk/util.h 00:01:38.771 TEST_HEADER include/spdk/uuid.h 00:01:38.771 TEST_HEADER include/spdk/version.h 00:01:38.771 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:01:38.771 TEST_HEADER include/spdk/vfio_user_pci.h 00:01:38.771 TEST_HEADER include/spdk/vfio_user_spec.h 00:01:38.771 TEST_HEADER include/spdk/vhost.h 00:01:38.771 TEST_HEADER include/spdk/vmd.h 00:01:38.771 TEST_HEADER include/spdk/xor.h 00:01:38.771 TEST_HEADER include/spdk/zipf.h 00:01:38.771 CXX test/cpp_headers/accel.o 00:01:38.771 LINK rpc_client_test 00:01:38.771 LINK spdk_nvme_discover 00:01:38.771 LINK jsoncat 00:01:38.771 LINK lsvmd 00:01:38.771 LINK interrupt_tgt 00:01:38.771 LINK histogram_perf 00:01:38.771 LINK poller_perf 00:01:38.771 
LINK led 00:01:38.771 LINK event_perf 00:01:38.771 LINK nvmf_tgt 00:01:38.771 LINK vhost 00:01:38.772 LINK zipf 00:01:38.772 LINK spdk_trace_record 00:01:39.036 LINK stub 00:01:39.036 LINK iscsi_tgt 00:01:39.036 LINK cmb_copy 00:01:39.036 LINK ioat_perf 00:01:39.036 LINK spdk_tgt 00:01:39.036 LINK hello_world 00:01:39.036 LINK hotplug 00:01:39.036 LINK bdev_svc 00:01:39.036 CXX test/cpp_headers/accel_module.o 00:01:39.036 LINK mkfs 00:01:39.036 LINK hello_sock 00:01:39.036 LINK hello_blob 00:01:39.036 LINK hello_bdev 00:01:39.036 LINK aer 00:01:39.303 LINK thread 00:01:39.303 LINK spdk_dd 00:01:39.303 LINK arbitration 00:01:39.303 LINK idxd_perf 00:01:39.303 LINK nvmf 00:01:39.303 LINK reconnect 00:01:39.303 LINK spdk_trace 00:01:39.303 CC examples/ioat/verify/verify.o 00:01:39.304 CXX test/cpp_headers/assert.o 00:01:39.304 CXX test/cpp_headers/barrier.o 00:01:39.304 CXX test/cpp_headers/base64.o 00:01:39.304 CC test/env/vtophys/vtophys.o 00:01:39.304 CC examples/nvme/abort/abort.o 00:01:39.304 CC test/event/reactor_perf/reactor_perf.o 00:01:39.304 CC test/event/reactor/reactor.o 00:01:39.304 CC test/nvme/reset/reset.o 00:01:39.304 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:01:39.304 LINK dif 00:01:39.304 CC test/nvme/sgl/sgl.o 00:01:39.304 CXX test/cpp_headers/bdev.o 00:01:39.304 CC test/nvme/e2edp/nvme_dp.o 00:01:39.304 LINK test_dma 00:01:39.304 LINK bdevio 00:01:39.304 CC app/fio/bdev/fio_plugin.o 00:01:39.304 CC test/nvme/overhead/overhead.o 00:01:39.304 LINK accel_perf 00:01:39.304 CC examples/blob/cli/blobcli.o 00:01:39.304 CC test/event/app_repeat/app_repeat.o 00:01:39.569 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:01:39.569 CC test/nvme/err_injection/err_injection.o 00:01:39.569 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:01:39.569 LINK nvme_manage 00:01:39.569 CXX test/cpp_headers/bdev_module.o 00:01:39.569 CXX test/cpp_headers/bdev_zone.o 00:01:39.569 CC test/env/memory/memory_ut.o 00:01:39.569 LINK nvme_fuzz 00:01:39.569 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:01:39.569 CXX test/cpp_headers/bit_array.o 00:01:39.569 CXX test/cpp_headers/bit_pool.o 00:01:39.569 CC test/env/pci/pci_ut.o 00:01:39.569 CC test/nvme/startup/startup.o 00:01:39.569 CC test/event/scheduler/scheduler.o 00:01:39.569 LINK vtophys 00:01:39.569 CXX test/cpp_headers/blob_bdev.o 00:01:39.569 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:01:39.569 LINK reactor 00:01:39.569 LINK reactor_perf 00:01:39.569 CXX test/cpp_headers/blobfs_bdev.o 00:01:39.569 LINK spdk_nvme 00:01:39.569 CC test/nvme/reserve/reserve.o 00:01:39.831 LINK verify 00:01:39.831 LINK app_repeat 00:01:39.831 CC test/nvme/simple_copy/simple_copy.o 00:01:39.831 CXX test/cpp_headers/blobfs.o 00:01:39.831 CC test/nvme/connect_stress/connect_stress.o 00:01:39.831 CXX test/cpp_headers/blob.o 00:01:39.831 CXX test/cpp_headers/conf.o 00:01:39.831 CC test/nvme/boot_partition/boot_partition.o 00:01:39.831 LINK env_dpdk_post_init 00:01:39.831 LINK reset 00:01:39.831 CXX test/cpp_headers/config.o 00:01:39.831 CC test/nvme/fused_ordering/fused_ordering.o 00:01:39.831 CC test/nvme/compliance/nvme_compliance.o 00:01:39.831 CXX test/cpp_headers/cpuset.o 00:01:39.831 CXX test/cpp_headers/crc16.o 00:01:39.831 LINK err_injection 00:01:39.831 CXX test/cpp_headers/crc32.o 00:01:39.831 LINK sgl 00:01:39.831 CC test/nvme/doorbell_aers/doorbell_aers.o 00:01:39.831 LINK nvme_dp 00:01:39.831 LINK mem_callbacks 00:01:39.831 CXX test/cpp_headers/crc64.o 00:01:39.831 CXX test/cpp_headers/dif.o 00:01:39.831 CXX test/cpp_headers/dma.o 00:01:39.831 
CXX test/cpp_headers/endian.o 00:01:39.831 CXX test/cpp_headers/env_dpdk.o 00:01:39.831 CC test/nvme/fdp/fdp.o 00:01:39.831 CXX test/cpp_headers/env.o 00:01:39.831 LINK overhead 00:01:39.831 LINK startup 00:01:39.831 CXX test/cpp_headers/event.o 00:01:40.104 LINK pmr_persistence 00:01:40.104 LINK spdk_nvme_perf 00:01:40.104 LINK spdk_nvme_identify 00:01:40.104 CC test/nvme/cuse/cuse.o 00:01:40.104 CXX test/cpp_headers/fd_group.o 00:01:40.104 CXX test/cpp_headers/fd.o 00:01:40.104 LINK abort 00:01:40.104 LINK spdk_top 00:01:40.104 LINK scheduler 00:01:40.104 CXX test/cpp_headers/file.o 00:01:40.104 CXX test/cpp_headers/ftl.o 00:01:40.104 LINK bdevperf 00:01:40.104 LINK reserve 00:01:40.104 CXX test/cpp_headers/gpt_spec.o 00:01:40.104 CXX test/cpp_headers/hexlify.o 00:01:40.104 CXX test/cpp_headers/histogram_data.o 00:01:40.104 CXX test/cpp_headers/idxd.o 00:01:40.104 LINK boot_partition 00:01:40.104 LINK connect_stress 00:01:40.104 CXX test/cpp_headers/idxd_spec.o 00:01:40.104 CXX test/cpp_headers/init.o 00:01:40.104 CXX test/cpp_headers/ioat.o 00:01:40.104 CXX test/cpp_headers/ioat_spec.o 00:01:40.104 LINK simple_copy 00:01:40.104 CXX test/cpp_headers/iscsi_spec.o 00:01:40.104 CXX test/cpp_headers/json.o 00:01:40.375 CXX test/cpp_headers/jsonrpc.o 00:01:40.375 CXX test/cpp_headers/keyring.o 00:01:40.375 LINK fused_ordering 00:01:40.375 CXX test/cpp_headers/keyring_module.o 00:01:40.375 CXX test/cpp_headers/likely.o 00:01:40.375 CXX test/cpp_headers/log.o 00:01:40.375 LINK doorbell_aers 00:01:40.375 LINK vhost_fuzz 00:01:40.375 CXX test/cpp_headers/lvol.o 00:01:40.375 CXX test/cpp_headers/memory.o 00:01:40.375 CXX test/cpp_headers/mmio.o 00:01:40.375 LINK spdk_bdev 00:01:40.375 CXX test/cpp_headers/nbd.o 00:01:40.375 CXX test/cpp_headers/notify.o 00:01:40.375 CXX test/cpp_headers/nvme.o 00:01:40.375 LINK pci_ut 00:01:40.375 CXX test/cpp_headers/nvme_intel.o 00:01:40.375 CXX test/cpp_headers/nvme_ocssd.o 00:01:40.375 CXX test/cpp_headers/nvme_ocssd_spec.o 00:01:40.375 CXX test/cpp_headers/nvme_spec.o 00:01:40.375 LINK blobcli 00:01:40.375 CXX test/cpp_headers/nvme_zns.o 00:01:40.375 CXX test/cpp_headers/nvmf_cmd.o 00:01:40.375 CXX test/cpp_headers/nvmf_fc_spec.o 00:01:40.375 CXX test/cpp_headers/nvmf.o 00:01:40.375 CXX test/cpp_headers/nvmf_spec.o 00:01:40.375 CXX test/cpp_headers/nvmf_transport.o 00:01:40.375 CXX test/cpp_headers/opal.o 00:01:40.375 CXX test/cpp_headers/opal_spec.o 00:01:40.375 CXX test/cpp_headers/pci_ids.o 00:01:40.375 LINK nvme_compliance 00:01:40.637 CXX test/cpp_headers/pipe.o 00:01:40.637 CXX test/cpp_headers/queue.o 00:01:40.637 CXX test/cpp_headers/reduce.o 00:01:40.637 CXX test/cpp_headers/rpc.o 00:01:40.637 CXX test/cpp_headers/scheduler.o 00:01:40.637 CXX test/cpp_headers/scsi.o 00:01:40.637 CXX test/cpp_headers/scsi_spec.o 00:01:40.637 CXX test/cpp_headers/sock.o 00:01:40.637 CXX test/cpp_headers/stdinc.o 00:01:40.637 LINK fdp 00:01:40.637 CXX test/cpp_headers/thread.o 00:01:40.637 CXX test/cpp_headers/string.o 00:01:40.637 CXX test/cpp_headers/trace.o 00:01:40.637 CXX test/cpp_headers/trace_parser.o 00:01:40.637 CXX test/cpp_headers/tree.o 00:01:40.637 CXX test/cpp_headers/ublk.o 00:01:40.637 CXX test/cpp_headers/util.o 00:01:40.637 CXX test/cpp_headers/uuid.o 00:01:40.637 CXX test/cpp_headers/version.o 00:01:40.637 CXX test/cpp_headers/vfio_user_pci.o 00:01:40.637 CXX test/cpp_headers/vfio_user_spec.o 00:01:40.637 CXX test/cpp_headers/vhost.o 00:01:40.637 CXX test/cpp_headers/vmd.o 00:01:40.637 CXX test/cpp_headers/xor.o 00:01:40.637 CXX 
test/cpp_headers/zipf.o 00:01:41.210 LINK memory_ut 00:01:41.474 LINK cuse 00:01:41.732 LINK iscsi_fuzz 00:01:44.260 LINK esnap 00:01:44.519 00:01:44.519 real 0m47.732s 00:01:44.519 user 10m6.828s 00:01:44.519 sys 2m30.254s 00:01:44.519 12:29:43 -- common/autotest_common.sh@1112 -- $ xtrace_disable 00:01:44.519 12:29:43 -- common/autotest_common.sh@10 -- $ set +x 00:01:44.519 ************************************ 00:01:44.519 END TEST make 00:01:44.519 ************************************ 00:01:44.519 12:29:43 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:01:44.519 12:29:43 -- pm/common@30 -- $ signal_monitor_resources TERM 00:01:44.519 12:29:43 -- pm/common@41 -- $ local monitor pid pids signal=TERM 00:01:44.519 12:29:43 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:44.519 12:29:43 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:01:44.519 12:29:43 -- pm/common@45 -- $ pid=961911 00:01:44.519 12:29:43 -- pm/common@52 -- $ sudo kill -TERM 961911 00:01:44.519 12:29:43 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:44.519 12:29:43 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:01:44.519 12:29:43 -- pm/common@45 -- $ pid=961912 00:01:44.519 12:29:43 -- pm/common@52 -- $ sudo kill -TERM 961912 00:01:44.519 12:29:43 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:44.519 12:29:43 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:01:44.519 12:29:43 -- pm/common@45 -- $ pid=961914 00:01:44.519 12:29:43 -- pm/common@52 -- $ sudo kill -TERM 961914 00:01:44.519 12:29:43 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:44.519 12:29:43 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:01:44.519 12:29:43 -- pm/common@45 -- $ pid=961913 00:01:44.519 12:29:43 -- pm/common@52 -- $ sudo kill -TERM 961913 00:01:44.778 12:29:43 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:01:44.778 12:29:43 -- nvmf/common.sh@7 -- # uname -s 00:01:44.778 12:29:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:01:44.778 12:29:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:01:44.778 12:29:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:01:44.778 12:29:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:01:44.778 12:29:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:01:44.778 12:29:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:01:44.778 12:29:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:01:44.778 12:29:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:01:44.778 12:29:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:01:44.778 12:29:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:01:44.778 12:29:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:01:44.778 12:29:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:01:44.778 12:29:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:01:44.778 12:29:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:01:44.778 12:29:43 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:01:44.778 12:29:43 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:01:44.778 
12:29:43 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:44.778 12:29:43 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:01:44.778 12:29:43 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:44.778 12:29:43 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:44.778 12:29:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:44.778 12:29:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:44.778 12:29:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:44.778 12:29:43 -- paths/export.sh@5 -- # export PATH 00:01:44.778 12:29:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:44.778 12:29:43 -- nvmf/common.sh@47 -- # : 0 00:01:44.778 12:29:43 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:01:44.778 12:29:43 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:01:44.778 12:29:43 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:01:44.778 12:29:43 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:01:44.778 12:29:43 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:01:44.778 12:29:43 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:01:44.778 12:29:43 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:01:44.778 12:29:43 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:01:44.778 12:29:43 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:01:44.778 12:29:43 -- spdk/autotest.sh@32 -- # uname -s 00:01:44.778 12:29:43 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:01:44.778 12:29:43 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:01:44.778 12:29:43 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:01:44.778 12:29:43 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:01:44.778 12:29:43 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:01:44.778 12:29:43 -- spdk/autotest.sh@44 -- # modprobe nbd 00:01:44.778 12:29:43 -- spdk/autotest.sh@46 -- # type -P udevadm 00:01:44.778 12:29:43 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:01:44.778 12:29:43 -- spdk/autotest.sh@48 -- # udevadm_pid=1017089 00:01:44.778 12:29:43 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:01:44.778 12:29:43 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:01:44.778 12:29:43 -- pm/common@17 -- # local monitor 00:01:44.778 12:29:43 -- 
pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:44.778 12:29:43 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=1017090 00:01:44.778 12:29:43 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:44.778 12:29:43 -- pm/common@21 -- # date +%s 00:01:44.778 12:29:43 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=1017093 00:01:44.778 12:29:43 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:44.778 12:29:43 -- pm/common@21 -- # date +%s 00:01:44.778 12:29:43 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=1017097 00:01:44.778 12:29:43 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:44.778 12:29:43 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=1017100 00:01:44.778 12:29:43 -- pm/common@21 -- # date +%s 00:01:44.778 12:29:43 -- pm/common@26 -- # sleep 1 00:01:44.778 12:29:43 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1713263383 00:01:44.778 12:29:43 -- pm/common@21 -- # date +%s 00:01:44.778 12:29:43 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1713263383 00:01:44.779 12:29:43 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1713263383 00:01:44.779 12:29:43 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1713263383 00:01:44.779 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1713263383_collect-vmstat.pm.log 00:01:44.779 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1713263383_collect-bmc-pm.bmc.pm.log 00:01:44.779 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1713263383_collect-cpu-load.pm.log 00:01:44.779 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1713263383_collect-cpu-temp.pm.log 00:01:45.713 12:29:44 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:01:45.713 12:29:44 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:01:45.713 12:29:44 -- common/autotest_common.sh@710 -- # xtrace_disable 00:01:45.713 12:29:44 -- common/autotest_common.sh@10 -- # set +x 00:01:45.713 12:29:44 -- spdk/autotest.sh@59 -- # create_test_list 00:01:45.713 12:29:44 -- common/autotest_common.sh@734 -- # xtrace_disable 00:01:45.713 12:29:44 -- common/autotest_common.sh@10 -- # set +x 00:01:45.713 12:29:44 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:01:45.713 12:29:44 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:45.713 12:29:44 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:45.713 12:29:44 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:45.713 12:29:44 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 
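The core-dump redirection traced above (spdk/autotest.sh@33 and @39-@40) saves the systemd-coredump handler and swaps SPDK's core-collector.sh into the kernel's core_pattern. The mechanism is standard kernel behaviour: when the first character of core_pattern is '|', the kernel execs the named program on every crash, substitutes %P (PID), %s (signal number) and %t (dump time) into its arguments, and streams the raw core image to the program's stdin. A minimal sketch of the same mechanism, with a hypothetical handler path standing in for SPDK's core-collector.sh:

# run as root: register a piped core handler; the kernel feeds each core to its stdin
echo '|/usr/local/bin/core-handler.sh %P %s %t' > /proc/sys/kernel/core_pattern

# /usr/local/bin/core-handler.sh (hypothetical):
#!/usr/bin/env bash
pid=$1 sig=$2 ts=$3
mkdir -p /var/coredumps
# the raw core image arrives on stdin; compress and store one file per crash
gzip -c > "/var/coredumps/core.${pid}.${sig}.${ts}.gz"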
00:01:45.713 12:29:44 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:01:45.713 12:29:44 -- common/autotest_common.sh@1441 -- # uname 00:01:45.713 12:29:44 -- common/autotest_common.sh@1441 -- # '[' Linux = FreeBSD ']' 00:01:45.713 12:29:44 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:01:45.713 12:29:44 -- common/autotest_common.sh@1461 -- # uname 00:01:45.713 12:29:44 -- common/autotest_common.sh@1461 -- # [[ Linux = FreeBSD ]] 00:01:45.713 12:29:44 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:01:45.713 12:29:44 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:01:45.713 12:29:44 -- spdk/autotest.sh@72 -- # hash lcov 00:01:45.713 12:29:44 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:01:45.713 12:29:44 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:01:45.713 --rc lcov_branch_coverage=1 00:01:45.713 --rc lcov_function_coverage=1 00:01:45.713 --rc genhtml_branch_coverage=1 00:01:45.713 --rc genhtml_function_coverage=1 00:01:45.713 --rc genhtml_legend=1 00:01:45.713 --rc geninfo_all_blocks=1 00:01:45.713 ' 00:01:45.713 12:29:44 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:01:45.713 --rc lcov_branch_coverage=1 00:01:45.713 --rc lcov_function_coverage=1 00:01:45.713 --rc genhtml_branch_coverage=1 00:01:45.713 --rc genhtml_function_coverage=1 00:01:45.713 --rc genhtml_legend=1 00:01:45.713 --rc geninfo_all_blocks=1 00:01:45.713 ' 00:01:45.713 12:29:44 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:01:45.713 --rc lcov_branch_coverage=1 00:01:45.713 --rc lcov_function_coverage=1 00:01:45.713 --rc genhtml_branch_coverage=1 00:01:45.713 --rc genhtml_function_coverage=1 00:01:45.713 --rc genhtml_legend=1 00:01:45.713 --rc geninfo_all_blocks=1 00:01:45.713 --no-external' 00:01:45.713 12:29:44 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:01:45.713 --rc lcov_branch_coverage=1 00:01:45.713 --rc lcov_function_coverage=1 00:01:45.713 --rc genhtml_branch_coverage=1 00:01:45.713 --rc genhtml_function_coverage=1 00:01:45.713 --rc genhtml_legend=1 00:01:45.713 --rc geninfo_all_blocks=1 00:01:45.713 --no-external' 00:01:45.713 12:29:44 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:01:45.972 lcov: LCOV version 1.14 00:01:45.972 12:29:44 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:01:58.169 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:01:58.169 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:00.070 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:02:00.070 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:02:00.070 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:02:00.070 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:02:00.070 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:02:00.070 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:02:18.176 [... the same 'no functions found' / 'GCOV did not produce any data' warning pair repeats for every compile-check stub under /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers, accel.gcno through uuid.gcno; these objects only verify that each public header compiles standalone, so they contain no functions and yield no coverage data ...] 00:02:18.178 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:02:18.178 geninfo: WARNING: GCOV did not produce any data for
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:02:18.178 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:02:18.178 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:02:18.178 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:02:18.178 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:02:18.178 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:02:18.178 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:02:18.178 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:02:18.178 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:02:18.178 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:02:18.178 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:02:18.178 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:02:18.178 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:02:19.111 12:30:18 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:02:19.111 12:30:18 -- common/autotest_common.sh@710 -- # xtrace_disable 00:02:19.111 12:30:18 -- common/autotest_common.sh@10 -- # set +x 00:02:19.111 12:30:18 -- spdk/autotest.sh@91 -- # rm -f 00:02:19.111 12:30:18 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:20.487 0000:81:00.0 (8086 0a54): Already using the nvme driver 00:02:20.487 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:02:20.487 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:02:20.487 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:02:20.487 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:02:20.487 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:02:20.487 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:02:20.487 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:02:20.487 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:02:20.745 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:02:20.745 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:02:20.745 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:02:20.745 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:02:20.745 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:02:20.745 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:02:20.745 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:02:20.745 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:02:20.745 12:30:19 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:02:20.745 12:30:19 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:02:20.745 12:30:19 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:02:20.745 12:30:19 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:02:20.745 12:30:19 -- 
common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:02:20.745 12:30:19 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:02:20.745 12:30:19 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:02:20.745 12:30:19 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:20.745 12:30:19 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:02:20.745 12:30:19 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:02:20.745 12:30:19 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:02:20.745 12:30:19 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:02:20.745 12:30:19 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:02:20.745 12:30:19 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:02:20.745 12:30:19 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:20.745 No valid GPT data, bailing 00:02:20.745 12:30:19 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:20.745 12:30:19 -- scripts/common.sh@391 -- # pt= 00:02:20.745 12:30:19 -- scripts/common.sh@392 -- # return 1 00:02:20.745 12:30:19 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:20.745 1+0 records in 00:02:20.745 1+0 records out 00:02:20.745 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0020334 s, 516 MB/s 00:02:20.745 12:30:19 -- spdk/autotest.sh@118 -- # sync 00:02:20.745 12:30:19 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:20.745 12:30:19 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:20.745 12:30:19 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:23.283 12:30:21 -- spdk/autotest.sh@124 -- # uname -s 00:02:23.283 12:30:21 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:02:23.283 12:30:21 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:23.283 12:30:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:23.283 12:30:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:23.283 12:30:21 -- common/autotest_common.sh@10 -- # set +x 00:02:23.283 ************************************ 00:02:23.283 START TEST setup.sh 00:02:23.283 ************************************ 00:02:23.283 12:30:21 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:23.283 * Looking for test storage... 00:02:23.283 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:23.283 12:30:21 -- setup/test-setup.sh@10 -- # uname -s 00:02:23.283 12:30:21 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:02:23.283 12:30:21 -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:23.283 12:30:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:23.283 12:30:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:23.283 12:30:21 -- common/autotest_common.sh@10 -- # set +x 00:02:23.283 ************************************ 00:02:23.283 START TEST acl 00:02:23.283 ************************************ 00:02:23.283 12:30:22 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:23.283 * Looking for test storage... 
00:02:23.283 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:23.283 12:30:22 -- setup/acl.sh@10 -- # get_zoned_devs 00:02:23.283 12:30:22 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:02:23.283 12:30:22 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:02:23.283 12:30:22 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:02:23.283 12:30:22 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:02:23.283 12:30:22 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:02:23.283 12:30:22 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:02:23.283 12:30:22 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:23.283 12:30:22 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:02:23.283 12:30:22 -- setup/acl.sh@12 -- # devs=() 00:02:23.283 12:30:22 -- setup/acl.sh@12 -- # declare -a devs 00:02:23.283 12:30:22 -- setup/acl.sh@13 -- # drivers=() 00:02:23.283 12:30:22 -- setup/acl.sh@13 -- # declare -A drivers 00:02:23.283 12:30:22 -- setup/acl.sh@51 -- # setup reset 00:02:23.283 12:30:22 -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:23.283 12:30:22 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:25.183 12:30:23 -- setup/acl.sh@52 -- # collect_setup_devs 00:02:25.183 12:30:23 -- setup/acl.sh@16 -- # local dev driver 00:02:25.183 12:30:23 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:25.183 12:30:23 -- setup/acl.sh@15 -- # setup output status 00:02:25.183 12:30:23 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:25.183 12:30:23 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:02:26.119 Hugepages 00:02:26.119 node hugesize free / total 00:02:26.119 12:30:25 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:26.119 12:30:25 -- setup/acl.sh@19 -- # continue 00:02:26.119 12:30:25 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:26.119 12:30:25 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:26.119 12:30:25 -- setup/acl.sh@19 -- # continue 00:02:26.119 12:30:25 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:26.119 12:30:25 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:26.119 12:30:25 -- setup/acl.sh@19 -- # continue 00:02:26.119 12:30:25 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:26.119 00:02:26.119 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:26.119 12:30:25 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:26.119 12:30:25 -- setup/acl.sh@19 -- # continue 00:02:26.119 12:30:25 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:26.119 12:30:25 -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:02:26.119 12:30:25 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:26.119 12:30:25 -- setup/acl.sh@20 -- # continue 00:02:26.119 12:30:25 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:26.119 12:30:25 -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:02:26.119 12:30:25 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:26.119 12:30:25 -- setup/acl.sh@20 -- # continue 00:02:26.119 12:30:25 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:26.119 12:30:25 -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:02:26.119 12:30:25 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:26.119 12:30:25 -- setup/acl.sh@20 -- # continue 00:02:26.119 12:30:25 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 
00:02:26.119 12:30:25 -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:02:26.119 12:30:25 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:26.119 12:30:25 -- setup/acl.sh@20 -- # continue 00:02:26.119 12:30:25 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:26.119 12:30:25 -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:02:26.119 12:30:25 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:26.119 12:30:25 -- setup/acl.sh@20 -- # continue 00:02:26.119 12:30:25 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:26.119 12:30:25 -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:02:26.119 12:30:25 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:26.119 12:30:25 -- setup/acl.sh@20 -- # continue 00:02:26.119 12:30:25 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:26.119 12:30:25 -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:02:26.119 12:30:25 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:26.119 12:30:25 -- setup/acl.sh@20 -- # continue 00:02:26.119 12:30:25 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:26.119 12:30:25 -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:02:26.119 12:30:25 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:26.119 12:30:25 -- setup/acl.sh@20 -- # continue 00:02:26.119 12:30:25 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:26.119 12:30:25 -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:02:26.119 12:30:25 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:26.119 12:30:25 -- setup/acl.sh@20 -- # continue 00:02:26.119 12:30:25 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:26.119 12:30:25 -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:02:26.119 12:30:25 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:26.119 12:30:25 -- setup/acl.sh@20 -- # continue 00:02:26.119 12:30:25 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:26.119 12:30:25 -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:02:26.119 12:30:25 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:26.119 12:30:25 -- setup/acl.sh@20 -- # continue 00:02:26.119 12:30:25 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:26.119 12:30:25 -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:02:26.119 12:30:25 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:26.378 12:30:25 -- setup/acl.sh@20 -- # continue 00:02:26.378 12:30:25 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:26.378 12:30:25 -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:02:26.378 12:30:25 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:26.378 12:30:25 -- setup/acl.sh@20 -- # continue 00:02:26.378 12:30:25 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:26.378 12:30:25 -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:02:26.378 12:30:25 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:26.378 12:30:25 -- setup/acl.sh@20 -- # continue 00:02:26.378 12:30:25 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:26.378 12:30:25 -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:02:26.378 12:30:25 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:26.378 12:30:25 -- setup/acl.sh@20 -- # continue 00:02:26.378 12:30:25 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:26.378 12:30:25 -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:02:26.378 12:30:25 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:26.378 12:30:25 -- setup/acl.sh@20 -- # continue 00:02:26.378 12:30:25 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ 
driver _ 00:02:26.378 12:30:25 -- setup/acl.sh@19 -- # [[ 0000:81:00.0 == *:*:*.* ]] 00:02:26.378 12:30:25 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:26.378 12:30:25 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\1\:\0\0\.\0* ]] 00:02:26.378 12:30:25 -- setup/acl.sh@22 -- # devs+=("$dev") 00:02:26.378 12:30:25 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:02:26.378 12:30:25 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:26.378 12:30:25 -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:02:26.378 12:30:25 -- setup/acl.sh@54 -- # run_test denied denied 00:02:26.378 12:30:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:26.378 12:30:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:26.378 12:30:25 -- common/autotest_common.sh@10 -- # set +x 00:02:26.378 ************************************ 00:02:26.378 START TEST denied 00:02:26.378 ************************************ 00:02:26.378 12:30:25 -- common/autotest_common.sh@1111 -- # denied 00:02:26.378 12:30:25 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:81:00.0' 00:02:26.378 12:30:25 -- setup/acl.sh@38 -- # setup output config 00:02:26.378 12:30:25 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:81:00.0' 00:02:26.378 12:30:25 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:26.378 12:30:25 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:28.278 0000:81:00.0 (8086 0a54): Skipping denied controller at 0000:81:00.0 00:02:28.278 12:30:26 -- setup/acl.sh@40 -- # verify 0000:81:00.0 00:02:28.278 12:30:26 -- setup/acl.sh@28 -- # local dev driver 00:02:28.278 12:30:26 -- setup/acl.sh@30 -- # for dev in "$@" 00:02:28.278 12:30:26 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:81:00.0 ]] 00:02:28.278 12:30:26 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:81:00.0/driver 00:02:28.278 12:30:26 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:02:28.278 12:30:26 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:02:28.278 12:30:26 -- setup/acl.sh@41 -- # setup reset 00:02:28.278 12:30:26 -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:28.278 12:30:26 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:30.811 00:02:30.811 real 0m4.300s 00:02:30.811 user 0m1.293s 00:02:30.811 sys 0m2.071s 00:02:30.811 12:30:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:02:30.811 12:30:29 -- common/autotest_common.sh@10 -- # set +x 00:02:30.811 ************************************ 00:02:30.811 END TEST denied 00:02:30.811 ************************************ 00:02:30.811 12:30:29 -- setup/acl.sh@55 -- # run_test allowed allowed 00:02:30.811 12:30:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:30.811 12:30:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:30.811 12:30:29 -- common/autotest_common.sh@10 -- # set +x 00:02:30.811 ************************************ 00:02:30.811 START TEST allowed 00:02:30.811 ************************************ 00:02:30.811 12:30:29 -- common/autotest_common.sh@1111 -- # allowed 00:02:30.811 12:30:29 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:81:00.0 00:02:30.811 12:30:29 -- setup/acl.sh@45 -- # setup output config 00:02:30.811 12:30:29 -- setup/acl.sh@46 -- # grep -E '0000:81:00.0 .*: nvme -> .*' 00:02:30.811 12:30:29 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:30.811 12:30:29 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:35.001 
0000:81:00.0 (8086 0a54): nvme -> vfio-pci 00:02:35.001 12:30:33 -- setup/acl.sh@47 -- # verify 00:02:35.001 12:30:33 -- setup/acl.sh@28 -- # local dev driver 00:02:35.001 12:30:33 -- setup/acl.sh@48 -- # setup reset 00:02:35.001 12:30:33 -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:35.001 12:30:33 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:35.969 00:02:35.969 real 0m5.205s 00:02:35.969 user 0m1.243s 00:02:35.969 sys 0m1.880s 00:02:35.969 12:30:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:02:35.969 12:30:34 -- common/autotest_common.sh@10 -- # set +x 00:02:35.969 ************************************ 00:02:35.969 END TEST allowed 00:02:35.969 ************************************ 00:02:35.969 00:02:35.969 real 0m12.920s 00:02:35.969 user 0m3.860s 00:02:35.969 sys 0m6.110s 00:02:35.969 12:30:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:02:35.969 12:30:35 -- common/autotest_common.sh@10 -- # set +x 00:02:35.969 ************************************ 00:02:35.969 END TEST acl 00:02:35.969 ************************************ 00:02:35.969 12:30:35 -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:02:35.969 12:30:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:35.969 12:30:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:35.969 12:30:35 -- common/autotest_common.sh@10 -- # set +x 00:02:36.228 ************************************ 00:02:36.228 START TEST hugepages 00:02:36.228 ************************************ 00:02:36.228 12:30:35 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:02:36.228 * Looking for test storage... 
00:02:36.228 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:36.228 12:30:35 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:02:36.228 12:30:35 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:02:36.228 12:30:35 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:02:36.228 12:30:35 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:02:36.228 12:30:35 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:02:36.228 12:30:35 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:02:36.228 12:30:35 -- setup/common.sh@17 -- # local get=Hugepagesize 00:02:36.228 12:30:35 -- setup/common.sh@18 -- # local node= 00:02:36.228 12:30:35 -- setup/common.sh@19 -- # local var val 00:02:36.228 12:30:35 -- setup/common.sh@20 -- # local mem_f mem 00:02:36.228 12:30:35 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:36.229 12:30:35 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:36.229 12:30:35 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:36.229 12:30:35 -- setup/common.sh@28 -- # mapfile -t mem 00:02:36.229 12:30:35 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:36.229 12:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.229 12:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.229 12:30:35 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 35062232 kB' 'MemAvailable: 40195784 kB' 'Buffers: 3748 kB' 'Cached: 18530548 kB' 'SwapCached: 0 kB' 'Active: 14403840 kB' 'Inactive: 4650840 kB' 'Active(anon): 13755292 kB' 'Inactive(anon): 0 kB' 'Active(file): 648548 kB' 'Inactive(file): 4650840 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 523708 kB' 'Mapped: 207556 kB' 'Shmem: 13234908 kB' 'KReclaimable: 478816 kB' 'Slab: 889356 kB' 'SReclaimable: 478816 kB' 'SUnreclaim: 410540 kB' 'KernelStack: 12992 kB' 'PageTables: 8948 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36562316 kB' 'Committed_AS: 14922808 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199880 kB' 'VmallocChunk: 0 kB' 'Percpu: 55488 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 2883164 kB' 'DirectMap2M: 40028160 kB' 'DirectMap1G: 26214400 kB' 00:02:36.229 12:30:35 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:36.229 12:30:35 -- setup/common.sh@32 -- # continue 00:02:36.229 12:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.229 12:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.229 12:30:35 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:36.229 12:30:35 -- setup/common.sh@32 -- # continue 00:02:36.229 12:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.229 12:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.229 12:30:35 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:36.229 12:30:35 -- setup/common.sh@32 -- # continue 00:02:36.229 12:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.229 12:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.229 12:30:35 -- setup/common.sh@32 -- # [[ Buffers == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:36.229 12:30:35 -- setup/common.sh@32 -- # continue 00:02:36.229 12:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.229 12:30:35 -- setup/common.sh@31 -- # read -r var val _ [... the same compare-and-continue xtrace repeats for each remaining /proc/meminfo field, Cached through PageTables, as get_meminfo scans for the Hugepagesize row ...] 00:02:36.230 12:30:35 -- setup/common.sh@31 -- # read -r var val _
00:02:36.230 12:30:35 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:36.230 12:30:35 -- setup/common.sh@32 -- # continue 00:02:36.230 12:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.230 12:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.230 12:30:35 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:36.230 12:30:35 -- setup/common.sh@32 -- # continue 00:02:36.230 12:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.230 12:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.230 12:30:35 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:36.230 12:30:35 -- setup/common.sh@32 -- # continue 00:02:36.230 12:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.230 12:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.230 12:30:35 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:36.230 12:30:35 -- setup/common.sh@32 -- # continue 00:02:36.230 12:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.230 12:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.230 12:30:35 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:36.230 12:30:35 -- setup/common.sh@32 -- # continue 00:02:36.230 12:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.230 12:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.230 12:30:35 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:36.230 12:30:35 -- setup/common.sh@32 -- # continue 00:02:36.230 12:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.230 12:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.230 12:30:35 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:36.230 12:30:35 -- setup/common.sh@32 -- # continue 00:02:36.230 12:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.230 12:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.230 12:30:35 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:36.230 12:30:35 -- setup/common.sh@32 -- # continue 00:02:36.230 12:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.230 12:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.230 12:30:35 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:36.230 12:30:35 -- setup/common.sh@32 -- # continue 00:02:36.230 12:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.230 12:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.230 12:30:35 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:36.230 12:30:35 -- setup/common.sh@32 -- # continue 00:02:36.230 12:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.230 12:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.230 12:30:35 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:36.230 12:30:35 -- setup/common.sh@32 -- # continue 00:02:36.230 12:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.230 12:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.230 12:30:35 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:36.230 12:30:35 -- setup/common.sh@32 -- # continue 00:02:36.230 12:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.230 12:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.230 12:30:35 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:36.230 12:30:35 -- setup/common.sh@32 -- # continue 00:02:36.230 12:30:35 -- setup/common.sh@31 -- # IFS=': ' 
00:02:36.230 12:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.230 12:30:35 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:36.230 12:30:35 -- setup/common.sh@32 -- # continue 00:02:36.230 12:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.230 12:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.230 12:30:35 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:36.230 12:30:35 -- setup/common.sh@32 -- # continue 00:02:36.230 12:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.230 12:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.230 12:30:35 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:36.230 12:30:35 -- setup/common.sh@32 -- # continue 00:02:36.230 12:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.230 12:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.230 12:30:35 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:36.230 12:30:35 -- setup/common.sh@32 -- # continue 00:02:36.230 12:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.230 12:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.230 12:30:35 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:36.230 12:30:35 -- setup/common.sh@32 -- # continue 00:02:36.230 12:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.230 12:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.230 12:30:35 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:36.230 12:30:35 -- setup/common.sh@32 -- # continue 00:02:36.230 12:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.230 12:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.230 12:30:35 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:36.230 12:30:35 -- setup/common.sh@32 -- # continue 00:02:36.230 12:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.230 12:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.230 12:30:35 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:36.230 12:30:35 -- setup/common.sh@32 -- # continue 00:02:36.230 12:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.230 12:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.230 12:30:35 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:36.230 12:30:35 -- setup/common.sh@32 -- # continue 00:02:36.230 12:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.230 12:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.230 12:30:35 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:36.230 12:30:35 -- setup/common.sh@32 -- # continue 00:02:36.230 12:30:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.230 12:30:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.230 12:30:35 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:36.230 12:30:35 -- setup/common.sh@33 -- # echo 2048 00:02:36.230 12:30:35 -- setup/common.sh@33 -- # return 0 00:02:36.230 12:30:35 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:02:36.230 12:30:35 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:02:36.230 12:30:35 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:02:36.230 12:30:35 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:02:36.230 12:30:35 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:02:36.230 12:30:35 -- 
setup/hugepages.sh@23 -- # unset -v HUGENODE 00:02:36.230 12:30:35 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:02:36.230 12:30:35 -- setup/hugepages.sh@207 -- # get_nodes 00:02:36.230 12:30:35 -- setup/hugepages.sh@27 -- # local node 00:02:36.230 12:30:35 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:36.230 12:30:35 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:02:36.230 12:30:35 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:36.230 12:30:35 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:02:36.230 12:30:35 -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:36.230 12:30:35 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:36.231 12:30:35 -- setup/hugepages.sh@208 -- # clear_hp 00:02:36.231 12:30:35 -- setup/hugepages.sh@37 -- # local node hp 00:02:36.231 12:30:35 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:36.231 12:30:35 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:36.231 12:30:35 -- setup/hugepages.sh@41 -- # echo 0 00:02:36.231 12:30:35 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:36.231 12:30:35 -- setup/hugepages.sh@41 -- # echo 0 00:02:36.231 12:30:35 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:36.231 12:30:35 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:36.231 12:30:35 -- setup/hugepages.sh@41 -- # echo 0 00:02:36.231 12:30:35 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:36.231 12:30:35 -- setup/hugepages.sh@41 -- # echo 0 00:02:36.231 12:30:35 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:02:36.231 12:30:35 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:02:36.231 12:30:35 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:02:36.231 12:30:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:36.231 12:30:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:36.231 12:30:35 -- common/autotest_common.sh@10 -- # set +x 00:02:36.490 ************************************ 00:02:36.490 START TEST default_setup 00:02:36.490 ************************************ 00:02:36.490 12:30:35 -- common/autotest_common.sh@1111 -- # default_setup 00:02:36.490 12:30:35 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:02:36.490 12:30:35 -- setup/hugepages.sh@49 -- # local size=2097152 00:02:36.490 12:30:35 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:02:36.490 12:30:35 -- setup/hugepages.sh@51 -- # shift 00:02:36.490 12:30:35 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:02:36.490 12:30:35 -- setup/hugepages.sh@52 -- # local node_ids 00:02:36.490 12:30:35 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:36.490 12:30:35 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:36.490 12:30:35 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:02:36.490 12:30:35 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:02:36.490 12:30:35 -- setup/hugepages.sh@62 -- # local user_nodes 00:02:36.490 12:30:35 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:36.490 12:30:35 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:36.490 12:30:35 -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:36.490 12:30:35 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:36.490 12:30:35 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 
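
The trace above is the harness sizing its hugepage pool: get_test_nr_hugepages converts the requested 2097152 kB (2 GiB) into pages of the default size just detected from /proc/meminfo, giving nr_hugepages=1024, all assigned to user node 0. A minimal sketch of that arithmetic (variable names here are illustrative, not the harness's own):

  #!/usr/bin/env bash
  # Default hugepage size in kB, as reported by the kernel (2048 on this node).
  default_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)
  size_kb=2097152                              # requested pool: 2 GiB expressed in kB
  nr_hugepages=$(( size_kb / default_kb ))     # 2097152 / 2048 = 1024 pages
  echo "nr_hugepages=$nr_hugepages"
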
00:02:36.490 12:30:35 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:02:36.490 12:30:35 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:02:36.490 12:30:35 -- setup/hugepages.sh@73 -- # return 0
00:02:36.490 12:30:35 -- setup/hugepages.sh@137 -- # setup output
00:02:36.490 12:30:35 -- setup/common.sh@9 -- # [[ output == output ]]
00:02:36.490 12:30:35 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:02:37.865 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci
00:02:37.865 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci
00:02:37.865 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci
00:02:37.865 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci
00:02:37.865 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci
00:02:37.865 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci
00:02:37.865 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci
00:02:37.865 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci
00:02:37.865 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci
00:02:37.865 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci
00:02:37.865 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci
00:02:37.865 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci
00:02:37.865 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci
00:02:37.865 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci
00:02:37.865 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci
00:02:37.865 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci
00:02:39.771 0000:81:00.0 (8086 0a54): nvme -> vfio-pci
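
Here setup.sh has detached the sixteen I/OAT DMA channels and the NVMe drive from their kernel drivers and bound them to vfio-pci, so SPDK can drive them from userspace. One common way to perform a single such rebind by hand through sysfs (illustrative only; setup.sh automates and generalizes this, and it requires root):

  # Rebind one PCI function from its kernel driver to vfio-pci.
  bdf=0000:81:00.0                                            # example device
  modprobe vfio-pci
  echo "$bdf"   > /sys/bus/pci/devices/$bdf/driver/unbind     # detach nvme/ioatdma
  echo vfio-pci > /sys/bus/pci/devices/$bdf/driver_override   # pin the next driver
  echo "$bdf"   > /sys/bus/pci/drivers_probe                  # re-probe -> vfio-pci
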
00:02:39.771 12:30:38 -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:02:39.771 12:30:38 -- setup/hugepages.sh@89 -- # local node
00:02:39.771 12:30:38 -- setup/hugepages.sh@90 -- # local sorted_t
00:02:39.771 12:30:38 -- setup/hugepages.sh@91 -- # local sorted_s
00:02:39.771 12:30:38 -- setup/hugepages.sh@92 -- # local surp
00:02:39.771 12:30:38 -- setup/hugepages.sh@93 -- # local resv
00:02:39.771 12:30:38 -- setup/hugepages.sh@94 -- # local anon
00:02:39.771 12:30:38 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:02:39.771 12:30:38 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:02:39.771 12:30:38 -- setup/common.sh@17 -- # local get=AnonHugePages
00:02:39.771 12:30:38 -- setup/common.sh@18 -- # local node=
00:02:39.771 12:30:38 -- setup/common.sh@19 -- # local var val
00:02:39.771 12:30:38 -- setup/common.sh@20 -- # local mem_f mem
00:02:39.771 12:30:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:39.771 12:30:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:39.771 12:30:38 -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:39.771 12:30:38 -- setup/common.sh@28 -- # mapfile -t mem
00:02:39.771 12:30:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:39.771 12:30:38 -- setup/common.sh@31 -- # IFS=': '
00:02:39.771 12:30:38 -- setup/common.sh@31 -- # read -r var val _
00:02:39.771 12:30:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37170896 kB' 'MemAvailable: 42304448 kB' 'Buffers: 3748 kB' 'Cached: 18530648 kB' 'SwapCached: 0 kB' 'Active: 14423136 kB' 'Inactive: 4650840 kB' 'Active(anon): 13774588 kB' 'Inactive(anon): 0 kB' 'Active(file): 648548 kB' 'Inactive(file): 4650840 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 542888 kB' 'Mapped: 207688 kB' 'Shmem: 13235008 kB' 'KReclaimable: 478816 kB' 'Slab: 888840 kB' 'SReclaimable: 478816 kB' 'SUnreclaim: 410024 kB' 'KernelStack: 12880 kB' 'PageTables: 9172 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14943240 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 200040 kB' 'VmallocChunk: 0 kB' 'Percpu: 55488 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2883164 kB' 'DirectMap2M: 40028160 kB' 'DirectMap1G: 26214400 kB'
00:02:39.772 12:30:38 -- setup/common.sh@32 -- # [... xtrace elided: every key from MemTotal through HardwareCorrupted fails [[ $var == AnonHugePages ]] and continues ...]
00:02:39.772 12:30:38 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:02:39.772 12:30:38 -- setup/common.sh@33 -- # echo 0
00:02:39.772 12:30:38 -- setup/common.sh@33 -- # return 0
00:02:39.772 12:30:38 -- setup/hugepages.sh@97 -- # anon=0
00:02:39.772 12:30:38 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:02:39.772 12:30:38 -- setup/common.sh@17 -- # local get=HugePages_Surp [... same get_meminfo locals, mapfile and read as above ...]
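
The repeated pattern above is setup/common.sh's get_meminfo: snapshot the relevant meminfo file, then scan key/value pairs until the requested field matches and echo its value. A simplified sketch of that loop (the real helper uses mapfile plus an extglob substitution to strip the "Node N " prefix; the sed here stands in for that):

  get_meminfo() {
      local get=$1 node=${2:-} var val _
      local mem_f=/proc/meminfo
      # With a node argument, read that node's meminfo instead.
      [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
          mem_f=/sys/devices/system/node/node$node/meminfo
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done < <(sed 's/^Node [0-9]\+ //' "$mem_f")
      return 1
  }

  get_meminfo Hugepagesize   # -> 2048 on this node
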
00:02:39.773 12:30:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37173640 kB' 'MemAvailable: 42307192 kB' 'Buffers: 3748 kB' 'Cached: 18530652 kB' 'SwapCached: 0 kB' 'Active: 14423460 kB' 'Inactive: 4650840 kB' 'Active(anon): 13774912 kB' 'Inactive(anon): 0 kB' 'Active(file): 648548 kB' 'Inactive(file): 4650840 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 543248 kB' 'Mapped: 207688 kB' 'Shmem: 13235012 kB' 'KReclaimable: 478816 kB' 'Slab: 888824 kB' 'SReclaimable: 478816 kB' 'SUnreclaim: 410008 kB' 'KernelStack: 12864 kB' 'PageTables: 9108 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14943252 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 200024 kB' 'VmallocChunk: 0 kB' 'Percpu: 55488 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2883164 kB' 'DirectMap2M: 40028160 kB' 'DirectMap1G: 26214400 kB'
00:02:39.773 12:30:38 -- setup/common.sh@32 -- # [... xtrace elided: every key from MemTotal through HugePages_Rsvd fails [[ $var == HugePages_Surp ]] and continues ...]
00:02:39.774 12:30:38 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:39.774 12:30:38 -- setup/common.sh@33 -- # echo 0
00:02:39.774 12:30:38 -- setup/common.sh@33 -- # return 0
00:02:39.774 12:30:38 -- setup/hugepages.sh@99 -- # surp=0
00:02:39.774 12:30:38 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:02:39.774 12:30:38 -- setup/common.sh@17 -- # local get=HugePages_Rsvd [... same get_meminfo locals, mapfile and read as above ...]
00:02:39.774 12:30:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37173768 kB' 'MemAvailable: 42307320 kB' 'Buffers: 3748 kB' 'Cached: 18530652 kB' 'SwapCached: 0 kB' 'Active: 14423376 kB' 'Inactive: 4650840 kB' 'Active(anon): 13774828 kB' 'Inactive(anon): 0 kB' 'Active(file): 648548 kB' 'Inactive(file): 4650840 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 543188 kB' 'Mapped: 207652 kB' 'Shmem: 13235012 kB' 'KReclaimable: 478816 kB' 'Slab: 888824 kB' 'SReclaimable: 478816 kB' 'SUnreclaim: 410008 kB' 'KernelStack: 12896 kB' 'PageTables: 8868 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14943268 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 200024 kB' 'VmallocChunk: 0 kB' 'Percpu: 55488 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2883164 kB' 'DirectMap2M: 40028160 kB' 'DirectMap1G: 26214400 kB'
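
Each snapshot is internally consistent: 1024 hugepages of 2048 kB account for exactly the reported 'Hugetlb: 2097152 kB' (2 GiB), and HugePages_Free equals HugePages_Total because nothing has mapped the pool yet. A one-liner to cross-check any such snapshot (illustrative, not part of the harness):

  awk '/^HugePages_Total:/ {n=$2}  /^Hugepagesize:/ {sz=$2}  /^Hugetlb:/ {ht=$2}
       END {printf "computed %d kB, reported %d kB\n", n*sz, ht}' /proc/meminfo
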
00:02:39.774 12:30:38 -- setup/common.sh@32 -- # [... xtrace elided: every key from MemTotal through HugePages_Free fails [[ $var == HugePages_Rsvd ]] and continues ...]
00:02:39.775 12:30:38 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:39.775 12:30:38 -- setup/common.sh@33 -- # echo 0
00:02:39.775 12:30:38 -- setup/common.sh@33 -- # return 0
00:02:39.775 12:30:38 -- setup/hugepages.sh@100 -- # resv=0
00:02:39.775 12:30:38 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
nr_hugepages=1024
00:02:39.775 12:30:38 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
resv_hugepages=0
00:02:39.775 12:30:38 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
surplus_hugepages=0
00:02:39.775 12:30:38 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
anon_hugepages=0
00:02:39.775 12:30:38 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:39.775 12:30:38 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:02:39.775 12:30:38 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
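
This is the payoff of verify_nr_hugepages: the pool the test asked for must fully explain what the kernel reports, with no anonymous, surplus, or reserved hugepages muddying the count. The shape of that check, sketched against /proc/meminfo (the harness reads the totals via get_meminfo):

  nr_hugepages=1024 surp=0 resv=0                      # values echoed above
  total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
  (( total == nr_hugepages + surp + resv ))            # 1024 == 1024 + 0 + 0
  (( total == nr_hugepages ))                          # no surplus/reserved slack
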
00:02:39.775 12:30:38 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:02:39.775 12:30:38 -- setup/common.sh@17 -- # local get=HugePages_Total
00:02:39.775 12:30:38 -- setup/common.sh@18 -- # local node=
00:02:39.775 12:30:38 -- setup/common.sh@19 -- # local var val
00:02:39.775 12:30:38 -- setup/common.sh@20 -- # local mem_f mem
00:02:39.775 12:30:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:39.775 12:30:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:39.775 12:30:38 -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:39.775 12:30:38 -- setup/common.sh@28 -- # mapfile -t mem
00:02:39.775 12:30:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:39.775 12:30:38 -- setup/common.sh@31 -- # IFS=': '
00:02:39.775 12:30:38 -- setup/common.sh@31 -- # read -r var val _
00:02:39.775 12:30:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37174504 kB' 'MemAvailable: 42308056 kB' 'Buffers: 3748 kB' 'Cached: 18530676 kB' 'SwapCached: 0 kB' 'Active: 14422300 kB' 'Inactive: 4650840 kB' 'Active(anon): 13773752 kB' 'Inactive(anon): 0 kB' 'Active(file): 648548 kB' 'Inactive(file): 4650840 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 542040 kB' 'Mapped: 207592 kB' 'Shmem: 13235036 kB' 'KReclaimable: 478816 kB' 'Slab: 888872 kB' 'SReclaimable: 478816 kB' 'SUnreclaim: 410056 kB' 'KernelStack: 12896 kB' 'PageTables: 8948 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14943284 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 200024 kB' 'VmallocChunk: 0 kB' 'Percpu: 55488 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2883164 kB' 'DirectMap2M: 40028160 kB' 'DirectMap1G: 26214400 kB'
[setup/common.sh@32 -- the scan tests every field from MemTotal through HugePages_Free against HugePages_Total and hits "continue" on each until the match below]
00:02:39.777 12:30:38 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:39.777 12:30:38 -- setup/common.sh@33 -- # echo 1024
00:02:39.777 12:30:38 -- setup/common.sh@33 -- # return 0
00:02:39.777 12:30:38 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:39.777 12:30:38 -- setup/hugepages.sh@112 -- # get_nodes
00:02:39.777 12:30:38 -- setup/hugepages.sh@27 -- # local node
00:02:39.777 12:30:38 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:39.777 12:30:38 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:02:39.777 12:30:38 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:39.777 12:30:38 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:02:39.777 12:30:38 -- setup/hugepages.sh@32 -- # no_nodes=2
00:02:39.777 12:30:38 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:02:39.777 12:30:38 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:02:39.777 12:30:38 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
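When get_meminfo is called with a node argument, as in the node-0 lookup that follows, the same parse runs against /sys/devices/system/node/node0/meminfo; lines there carry a "Node 0 " prefix, which the traced script strips with mem=("${mem[@]#Node +([0-9]) }"). A minimal per-node sketch, with illustrative names:

    # Look up one field in a NUMA node's meminfo file, whose lines read
    # e.g. "Node 0 HugePages_Surp: 0", so the "Node <n> " prefix is dropped.
    node_meminfo_field() {
        local node=$1 want=$2 line var val _
        while read -r line; do
            line=${line#"Node $node "}
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$want" ]] && { echo "$val"; return 0; }
        done < "/sys/devices/system/node/node$node/meminfo"
        return 1
    }

    node_meminfo_field 0 HugePages_Surp   # prints 0 in this run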
00:02:39.777 12:30:38 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:02:39.777 12:30:38 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:39.777 12:30:38 -- setup/common.sh@18 -- # local node=0
00:02:39.777 12:30:38 -- setup/common.sh@19 -- # local var val
00:02:39.777 12:30:38 -- setup/common.sh@20 -- # local mem_f mem
00:02:39.777 12:30:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:39.777 12:30:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:02:39.777 12:30:38 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:02:39.777 12:30:38 -- setup/common.sh@28 -- # mapfile -t mem
00:02:39.777 12:30:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:39.777 12:30:38 -- setup/common.sh@31 -- # IFS=': '
00:02:39.777 12:30:38 -- setup/common.sh@31 -- # read -r var val _
00:02:39.777 12:30:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 16779600 kB' 'MemUsed: 16097340 kB' 'SwapCached: 0 kB' 'Active: 8682264 kB' 'Inactive: 4296372 kB' 'Active(anon): 8313616 kB' 'Inactive(anon): 0 kB' 'Active(file): 368648 kB' 'Inactive(file): 4296372 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 12688588 kB' 'Mapped: 69500 kB' 'AnonPages: 293276 kB' 'Shmem: 8023568 kB' 'KernelStack: 6808 kB' 'PageTables: 5252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 340004 kB' 'Slab: 546768 kB' 'SReclaimable: 340004 kB' 'SUnreclaim: 206764 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[setup/common.sh@32 -- the scan tests each node-0 field (MemTotal through HugePages_Free) against HugePages_Surp and hits "continue" on every one until the match below]
00:02:39.778 12:30:38 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:39.778 12:30:38 -- setup/common.sh@33 -- # echo 0
00:02:39.778 12:30:38 -- setup/common.sh@33 -- # return 0
00:02:39.778 12:30:38 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:02:39.778 12:30:38 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:02:39.778 12:30:38 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:02:39.778 12:30:38 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:02:39.778 12:30:38 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:02:39.778 node0=1024 expecting 1024
00:02:39.778 12:30:38 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:02:39.778
00:02:39.778 real 0m3.495s
00:02:39.778 user 0m0.710s
00:02:39.778 sys 0m0.979s
00:02:39.778 12:30:38 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:02:39.778 12:30:38 -- common/autotest_common.sh@10 -- # set +x
00:02:39.778 ************************************
00:02:39.778 END TEST default_setup
00:02:39.778 ************************************
00:02:39.778 12:30:38 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:02:39.778 12:30:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:02:39.778 12:30:38 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:02:39.778 12:30:38 -- common/autotest_common.sh@10 -- # set +x
00:02:40.037 ************************************
00:02:40.037 START TEST per_node_1G_alloc
00:02:40.037 ************************************
00:02:40.037 12:30:38 -- common/autotest_common.sh@1111 -- # per_node_1G_alloc
00:02:40.037 12:30:38 -- setup/hugepages.sh@143 -- # local IFS=,
00:02:40.037 12:30:38 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1
00:02:40.037 12:30:38 -- setup/hugepages.sh@49 -- # local size=1048576
00:02:40.037 12:30:38 -- setup/hugepages.sh@50 -- # (( 3 > 1 ))
00:02:40.037 12:30:38 -- setup/hugepages.sh@51 -- # shift
00:02:40.037 12:30:38 -- setup/hugepages.sh@52 -- # node_ids=('0' '1')
00:02:40.037 12:30:38 -- setup/hugepages.sh@52 -- # local node_ids
00:02:40.037 12:30:38 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:02:40.037 12:30:38 -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:02:40.037 12:30:38 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1
00:02:40.037 12:30:38 -- setup/hugepages.sh@62 -- # user_nodes=('0' '1')
00:02:40.037 12:30:38 -- setup/hugepages.sh@62 -- # local user_nodes
00:02:40.037 12:30:38 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:02:40.037 12:30:38 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:02:40.037 12:30:38 -- setup/hugepages.sh@67 -- # nodes_test=()
00:02:40.037 12:30:38 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:02:40.037 12:30:38 -- setup/hugepages.sh@69 -- # (( 2 > 0 ))
00:02:40.037 12:30:38 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:02:40.037 12:30:38 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:02:40.037 12:30:38 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:02:40.037 12:30:38 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:02:40.037 12:30:38 -- setup/hugepages.sh@73 -- # return 0
00:02:40.037 12:30:38 -- setup/hugepages.sh@146 -- # NRHUGE=512
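The sizing just traced is plain arithmetic: get_test_nr_hugepages is handed 1048576 kB (1 GiB), which at the 2048 kB Hugepagesize reported in the snapshots above is 1048576 / 2048 = 512 hugepages, and that count is assigned to each of user nodes 0 and 1 (1024 pages in total). A worked check, with illustrative names:

    # 1 GiB expressed in default-size hugepages:
    size_kb=1048576     # requested allocation
    hugepage_kb=2048    # Hugepagesize from the meminfo snapshots
    echo $(( size_kb / hugepage_kb ))   # 512, matching nr_hugepages=512 per node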
00:02:40.037 12:30:38 -- setup/hugepages.sh@146 -- # HUGENODE=0,1
00:02:40.037 12:30:38 -- setup/hugepages.sh@146 -- # setup output
00:02:40.037 12:30:38 -- setup/common.sh@9 -- # [[ output == output ]]
00:02:40.037 12:30:38 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:02:41.415 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:02:41.415 0000:81:00.0 (8086 0a54): Already using the vfio-pci driver
00:02:41.415 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:02:41.415 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:02:41.415 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:02:41.415 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:02:41.415 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:02:41.415 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:02:41.415 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:02:41.415 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:02:41.415 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:02:41.415 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:02:41.415 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:02:41.415 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:02:41.415 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:02:41.415 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:02:41.415 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
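The reservation is handed to the SPDK setup script through the NRHUGE and HUGENODE environment variables, exactly as traced above. Run by hand from an SPDK checkout, the equivalent step would look roughly like:

    # Reserve 512 hugepages on each of NUMA nodes 0 and 1, then bind devices.
    sudo NRHUGE=512 HUGENODE=0,1 ./scripts/setup.sh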
00:02:41.415 12:30:40 -- setup/hugepages.sh@147 -- # nr_hugepages=1024
00:02:41.415 12:30:40 -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:02:41.415 12:30:40 -- setup/hugepages.sh@89 -- # local node
00:02:41.415 12:30:40 -- setup/hugepages.sh@90 -- # local sorted_t
00:02:41.415 12:30:40 -- setup/hugepages.sh@91 -- # local sorted_s
00:02:41.415 12:30:40 -- setup/hugepages.sh@92 -- # local surp
00:02:41.415 12:30:40 -- setup/hugepages.sh@93 -- # local resv
00:02:41.415 12:30:40 -- setup/hugepages.sh@94 -- # local anon
00:02:41.415 12:30:40 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:02:41.415 12:30:40 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:02:41.415 12:30:40 -- setup/common.sh@17 -- # local get=AnonHugePages
00:02:41.415 12:30:40 -- setup/common.sh@18 -- # local node=
00:02:41.415 12:30:40 -- setup/common.sh@19 -- # local var val
00:02:41.415 12:30:40 -- setup/common.sh@20 -- # local mem_f mem
00:02:41.415 12:30:40 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:41.415 12:30:40 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:41.415 12:30:40 -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:41.415 12:30:40 -- setup/common.sh@28 -- # mapfile -t mem
00:02:41.415 12:30:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:41.415 12:30:40 -- setup/common.sh@31 -- # IFS=': '
00:02:41.415 12:30:40 -- setup/common.sh@31 -- # read -r var val _
00:02:41.415 12:30:40 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37157860 kB' 'MemAvailable: 42291412 kB' 'Buffers: 3748 kB' 'Cached: 18538928 kB' 'SwapCached: 0 kB' 'Active: 14430040 kB' 'Inactive: 4650840 kB' 'Active(anon): 13781492 kB' 'Inactive(anon): 0 kB' 'Active(file): 648548 kB' 'Inactive(file): 4650840 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 541524 kB' 'Mapped: 207680 kB' 'Shmem: 13243288 kB' 'KReclaimable: 478816 kB' 'Slab: 888836 kB' 'SReclaimable: 478816 kB' 'SUnreclaim: 410020 kB' 'KernelStack: 12928 kB' 'PageTables: 8988 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14950140 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199912 kB' 'VmallocChunk: 0 kB' 'Percpu: 55488 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2883164 kB' 'DirectMap2M: 40028160 kB' 'DirectMap1G: 26214400 kB'
[setup/common.sh@32 -- the scan tests every field from MemTotal against AnonHugePages and hits "continue" on each until the match below]
00:02:41.416 12:30:40 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:02:41.416 12:30:40 -- setup/common.sh@33 -- # echo 0
00:02:41.416 12:30:40 -- setup/common.sh@33 -- # return 0
00:02:41.416 12:30:40 -- setup/hugepages.sh@97 -- # anon=0
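The verify pass above gates the anonymous-hugepage sample on transparent hugepage state: the traced test [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] compares what is presumably the contents of /sys/kernel/mm/transparent_hugepage/enabled (the bracketed word is the active mode) against "[never]" before reading AnonHugePages. A sketch of the same gate, reusing the illustrative get_meminfo_field helper from earlier:

    # AnonHugePages is only sampled when THP is not globally off; the sysfs
    # file reads e.g. "always [madvise] never" with the active mode bracketed.
    anon=0
    thp=$(< /sys/kernel/mm/transparent_hugepage/enabled)
    if [[ $thp != *'[never]'* ]]; then
        anon=$(get_meminfo_field AnonHugePages)   # 0 (kB) in this run
    fi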
1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2883164 kB' 'DirectMap2M: 40028160 kB' 'DirectMap1G: 26214400 kB' 00:02:41.417 12:30:40 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.417 12:30:40 -- setup/common.sh@32 -- # continue 00:02:41.417 12:30:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.417 12:30:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.417 12:30:40 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.417 12:30:40 -- setup/common.sh@32 -- # continue 00:02:41.417 12:30:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.417 12:30:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.417 12:30:40 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.417 12:30:40 -- setup/common.sh@32 -- # continue 00:02:41.417 12:30:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.417 12:30:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.417 12:30:40 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.417 12:30:40 -- setup/common.sh@32 -- # continue 00:02:41.417 12:30:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.417 12:30:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.417 12:30:40 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.417 12:30:40 -- setup/common.sh@32 -- # continue 00:02:41.417 12:30:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.417 12:30:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.417 12:30:40 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.417 12:30:40 -- setup/common.sh@32 -- # continue 00:02:41.417 12:30:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.417 12:30:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.417 12:30:40 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.417 12:30:40 -- setup/common.sh@32 -- # continue 00:02:41.417 12:30:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.417 12:30:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.417 12:30:40 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.417 12:30:40 -- setup/common.sh@32 -- # continue 00:02:41.417 12:30:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.417 12:30:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.417 12:30:40 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.417 12:30:40 -- setup/common.sh@32 -- # continue 00:02:41.417 12:30:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.417 12:30:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.417 12:30:40 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.417 12:30:40 -- setup/common.sh@32 -- # continue 00:02:41.417 12:30:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.417 12:30:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.417 12:30:40 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.417 12:30:40 -- setup/common.sh@32 -- # continue 00:02:41.417 12:30:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.417 12:30:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.417 12:30:40 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.417 12:30:40 -- setup/common.sh@32 -- # continue 00:02:41.417 12:30:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.417 12:30:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.417 
[xtrace trimmed: setup/common.sh@31-32 walks the remaining /proc/meminfo keys (Unevictable through HugePages_Free); none matches HugePages_Surp, so each iteration just re-sets IFS=': ', reads the next 'key: value' line, and hits 'continue']
12:30:40 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.418
12:30:40 -- setup/common.sh@33 -- # echo 0 00:02:41.418
12:30:40 -- setup/common.sh@33 -- # return 0 00:02:41.418
12:30:40 -- setup/hugepages.sh@99 -- # surp=0 00:02:41.418
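For readers skimming the trace: the loop being exercised above is get_meminfo in setup/common.sh, which scans a meminfo snapshot one 'key: value' line at a time until the requested key matches, then echoes the value. A minimal standalone sketch of that pattern (the helper name get_meminfo_field is hypothetical, not part of the SPDK scripts):

    # Sketch of the lookup the trace performs: fetch one field,
    # e.g. HugePages_Surp, from /proc/meminfo.
    get_meminfo_field() {
        local key=$1 var val _
        while IFS=': ' read -r var val _; do
            # var is the key (colon stripped by IFS), val is the number
            if [[ $var == "$key" ]]; then
                echo "$val"
                return 0
            fi
        done < /proc/meminfo
        return 1
    }

    get_meminfo_field HugePages_Surp   # prints 0 on this machine

The xtrace noise in this section is bash printing every IFS/read/continue step of exactly this loop, once per meminfo key per lookup.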
12:30:40 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:41.418
12:30:40 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:41.418
12:30:40 -- setup/common.sh@18 -- # local node= 00:02:41.418
12:30:40 -- setup/common.sh@19 -- # local var val 00:02:41.418
12:30:40 -- setup/common.sh@20 -- # local mem_f mem 00:02:41.418
12:30:40 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:41.418
12:30:40 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:41.418
12:30:40 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:41.418
12:30:40 -- setup/common.sh@28 -- # mapfile -t mem 00:02:41.418
12:30:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:41.418
12:30:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.418
12:30:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.418
12:30:40 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37162348 kB' 'MemAvailable: 42295900 kB' 'Buffers: 3748 kB' 'Cached: 18538940 kB' 'SwapCached: 0 kB' 'Active: 14430692 kB' 'Inactive: 4650840 kB' 'Active(anon): 13782144 kB' 'Inactive(anon): 0 kB' 'Active(file): 648548 kB' 'Inactive(file): 4650840 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 542116 kB' 'Mapped: 207616 kB' 'Shmem: 13243300 kB' 'KReclaimable: 478816 kB' 'Slab: 888868 kB' 'SReclaimable: 478816 kB' 'SUnreclaim: 410052 kB' 'KernelStack: 12960 kB' 'PageTables: 9020 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14949796 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199880 kB' 'VmallocChunk: 0 kB' 'Percpu: 55488 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2883164 kB' 'DirectMap2M: 40028160 kB' 'DirectMap1G: 26214400 kB' 00:02:41.418
[xtrace trimmed: setup/common.sh@31-32 scan of every key from MemTotal through HugePages_Free against HugePages_Rsvd; all hit 'continue']
12:30:40 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.419
12:30:40 -- setup/common.sh@33 -- # echo 0 00:02:41.419
12:30:40 -- setup/common.sh@33 -- # return 0 00:02:41.419
12:30:40 -- setup/hugepages.sh@100 -- # resv=0 00:02:41.419
12:30:40 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:41.419
nr_hugepages=1024
12:30:40 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:41.419
resv_hugepages=0
12:30:40 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:41.419
surplus_hugepages=0
12:30:40 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:41.419
anon_hugepages=0
12:30:40 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:41.420
12:30:40 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:41.420
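At this point the test has read back surp=0 and resv=0 and is asserting that the pool it configured is intact: 1024 total pages equal the 1024 requested plus zero surplus plus zero reserved. A one-line standalone equivalent of that consistency check (illustration only, not the hugepages.sh code itself):

    # The same accounting identity, with the values just read back:
    # 1024 == 1024 + 0 + 0.
    nr_hugepages=1024 surp=0 resv=0
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    (( total == nr_hugepages + surp + resv )) && echo 'hugepage pool consistent'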
12:30:40 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:41.420
12:30:40 -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:41.420
12:30:40 -- setup/common.sh@18 -- # local node= 00:02:41.420
12:30:40 -- setup/common.sh@19 -- # local var val 00:02:41.420
12:30:40 -- setup/common.sh@20 -- # local mem_f mem 00:02:41.420
12:30:40 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:41.420
12:30:40 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:41.420
12:30:40 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:41.420
12:30:40 -- setup/common.sh@28 -- # mapfile -t mem 00:02:41.420
12:30:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:41.680
12:30:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.680
12:30:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.680
12:30:40 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37162108 kB' 'MemAvailable: 42295660 kB' 'Buffers: 3748 kB' 'Cached: 18538956 kB' 'SwapCached: 0 kB' 'Active: 14429520 kB' 'Inactive: 4650840 kB' 'Active(anon): 13780972 kB' 'Inactive(anon): 0 kB' 'Active(file): 648548 kB' 'Inactive(file): 4650840 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 540920 kB' 'Mapped: 207616 kB' 'Shmem: 13243316 kB' 'KReclaimable: 478816 kB' 'Slab: 888868 kB' 'SReclaimable: 478816 kB' 'SUnreclaim: 410052 kB' 'KernelStack: 12896 kB' 'PageTables: 8804 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14949812 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199896 kB' 'VmallocChunk: 0 kB' 'Percpu: 55488 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2883164 kB' 'DirectMap2M: 40028160 kB' 'DirectMap1G: 26214400 kB' 00:02:41.680
[xtrace trimmed: setup/common.sh@31-32 scan of every key from MemTotal through Unaccepted against HugePages_Total; all hit 'continue']
12:30:40 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.681
12:30:40 -- setup/common.sh@33 -- # echo 1024 00:02:41.681
12:30:40 -- setup/common.sh@33 -- # return 0 00:02:41.681
12:30:40 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:41.681
12:30:40 -- setup/hugepages.sh@112 -- # get_nodes 00:02:41.681
12:30:40 -- setup/hugepages.sh@27 -- # local node 00:02:41.681
12:30:40 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:41.681
12:30:40 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:41.681
12:30:40 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:41.681
12:30:40 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:41.681
12:30:40 -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:41.681
12:30:40 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:41.681
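Two details worth noting before the per-node pass below: get_nodes enumerates NUMA nodes with the extglob pattern node+([0-9]), and when get_meminfo is given a node argument it switches its source from /proc/meminfo to that node's sysfs meminfo file. A hedged sketch of the same enumeration, reading each node's surplus-page counter directly (standard sysfs layout; not the SPDK code verbatim):

    # Per-node meminfo lines look like "Node 0 HugePages_Surp: 0",
    # hence field $4 below. extglob is needed for the +([0-9]) glob.
    shopt -s extglob
    for node in /sys/devices/system/node/node+([0-9]); do
        id=${node##*node}
        surp=$(awk '/ HugePages_Surp:/ {print $4}' "$node/meminfo")
        echo "node${id} HugePages_Surp=${surp}"
    done

On this two-node box both loops below come back with HugePages_Surp 0, leaving nodes_test at the expected 512 pages per node.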
12:30:40 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:41.681
12:30:40 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:41.681
12:30:40 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:41.681
12:30:40 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:41.681
12:30:40 -- setup/common.sh@18 -- # local node=0 00:02:41.681
12:30:40 -- setup/common.sh@19 -- # local var val 00:02:41.681
12:30:40 -- setup/common.sh@20 -- # local mem_f mem 00:02:41.681
12:30:40 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:41.681
12:30:40 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:41.681
12:30:40 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:41.681
12:30:40 -- setup/common.sh@28 -- # mapfile -t mem 00:02:41.681
12:30:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:41.681
12:30:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.681
12:30:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.681
12:30:40 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 17831708 kB' 'MemUsed: 15045232 kB' 'SwapCached: 0 kB' 'Active: 8682872 kB' 'Inactive: 4296372 kB' 'Active(anon): 8314224 kB' 'Inactive(anon): 0 kB' 'Active(file): 368648 kB' 'Inactive(file): 4296372 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 12688664 kB' 'Mapped: 69500 kB' 'AnonPages: 293856 kB' 'Shmem: 8023644 kB' 'KernelStack: 6920 kB' 'PageTables: 5456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 340004 kB' 'Slab: 546836 kB' 'SReclaimable: 340004 kB' 'SUnreclaim: 206832 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:41.681
[xtrace trimmed: setup/common.sh@31-32 scan of every node0 key from MemTotal through HugePages_Free against HugePages_Surp; all hit 'continue']
12:30:40 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.682
12:30:40 -- setup/common.sh@33 -- # echo 0 00:02:41.682
12:30:40 -- setup/common.sh@33 -- # return 0 00:02:41.682
12:30:40 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:41.682
12:30:40 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:41.682
12:30:40 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:41.682
12:30:40 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:41.682
12:30:40 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:41.682
12:30:40 -- setup/common.sh@18 -- # local node=1 00:02:41.682
12:30:40 -- setup/common.sh@19 -- # local var val 00:02:41.682
12:30:40 -- setup/common.sh@20 -- # local mem_f mem 00:02:41.682
12:30:40 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:41.682
12:30:40 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:41.682
12:30:40 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:41.682
12:30:40 -- setup/common.sh@28 -- # mapfile -t mem 00:02:41.682
12:30:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:41.682
12:30:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.682
12:30:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.682
12:30:40 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664788 kB' 'MemFree: 19336892 kB' 'MemUsed: 8327896 kB' 'SwapCached: 0 kB' 'Active: 5747028 kB' 'Inactive: 354468 kB' 'Active(anon): 5467128 kB' 'Inactive(anon): 0 kB' 'Active(file): 279900 kB' 'Inactive(file): 354468 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 5854060 kB' 'Mapped: 138116 kB' 'AnonPages: 247500 kB' 'Shmem: 5219692 kB' 'KernelStack: 6040 kB' 'PageTables: 3564 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 138812 kB' 'Slab: 342032 kB' 'SReclaimable: 138812 kB' 'SUnreclaim: 203220 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:41.682
00:02:41.682 12:30:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.683 12:30:40 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.683 12:30:40 -- setup/common.sh@32 -- # continue 00:02:41.683 12:30:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.683 12:30:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.683 12:30:40 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.683 12:30:40 -- setup/common.sh@32 -- # continue 00:02:41.683 12:30:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.683 12:30:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.683 12:30:40 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.683 12:30:40 -- setup/common.sh@32 -- # continue 00:02:41.683 12:30:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.683 12:30:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.683 12:30:40 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.683 12:30:40 -- setup/common.sh@32 -- # continue 00:02:41.683 12:30:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.683 12:30:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.683 12:30:40 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.683 12:30:40 -- setup/common.sh@32 -- # continue 00:02:41.683 12:30:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.683 12:30:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.683 12:30:40 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.683 12:30:40 -- setup/common.sh@32 -- # continue 00:02:41.683 12:30:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.683 12:30:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.683 12:30:40 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.683 12:30:40 -- setup/common.sh@32 -- # continue 00:02:41.683 12:30:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.683 12:30:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.683 12:30:40 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.683 12:30:40 -- setup/common.sh@32 -- # continue 00:02:41.683 12:30:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.683 12:30:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.683 12:30:40 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.683 12:30:40 -- setup/common.sh@32 -- # continue 00:02:41.683 12:30:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.683 12:30:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.683 12:30:40 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.683 12:30:40 -- setup/common.sh@32 -- # continue 00:02:41.683 12:30:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.683 12:30:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.683 12:30:40 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.683 12:30:40 -- setup/common.sh@32 -- # continue 00:02:41.683 12:30:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.683 12:30:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.683 12:30:40 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.683 12:30:40 -- setup/common.sh@32 -- # continue 00:02:41.683 12:30:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.683 12:30:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.683 12:30:40 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.683 12:30:40 -- 
[... xtrace condensed: setup/common.sh@32 walks the remaining per-node meminfo fields (Shmem ... HugePages_Free) one by one until the requested HugePages_Surp field matches ...]
00:02:41.683 12:30:40 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:41.683 12:30:40 -- setup/common.sh@33 -- # echo 0
00:02:41.683 12:30:40 -- setup/common.sh@33 -- # return 0
00:02:41.683 12:30:40 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:02:41.683 12:30:40 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:02:41.683 12:30:40 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:02:41.683 12:30:40 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:02:41.683 12:30:40 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:02:41.683 node0=512 expecting 512
00:02:41.683 12:30:40 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:02:41.683 12:30:40 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:02:41.683 12:30:40 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:02:41.683 12:30:40 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:02:41.683 node1=512 expecting 512
00:02:41.683 12:30:40 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:02:41.683 
00:02:41.683 real	0m1.608s
00:02:41.683 user	0m0.676s
00:02:41.683 sys	0m0.898s
00:02:41.683 12:30:40 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:02:41.683 12:30:40 -- common/autotest_common.sh@10 -- # set +x
00:02:41.683 ************************************
00:02:41.683 END TEST per_node_1G_alloc
00:02:41.683 ************************************
00:02:41.683 12:30:40 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
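[note: the 'node0=512 expecting 512' / 'node1=512 expecting 512' lines above compare each node's hugepage count against the expected even split. A minimal standalone sketch of the same per-node readout -- a hypothetical snippet, not part of the test suite; it assumes 2048 kB hugepages and the standard Linux sysfs layout seen on this runner:]

  # Read per-node hugepage counters straight from sysfs.
  for node_dir in /sys/devices/system/node/node[0-9]*; do
      nr=$(cat "$node_dir/hugepages/hugepages-2048kB/nr_hugepages")
      echo "${node_dir##*/}=$nr"
  done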
00:02:41.683 12:30:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:02:41.683 12:30:40 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:02:41.683 12:30:40 -- common/autotest_common.sh@10 -- # set +x
00:02:41.683 ************************************
00:02:41.683 START TEST even_2G_alloc
00:02:41.683 ************************************
00:02:41.683 12:30:40 -- common/autotest_common.sh@1111 -- # even_2G_alloc
00:02:41.683 12:30:40 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:02:41.683 12:30:40 -- setup/hugepages.sh@49 -- # local size=2097152
00:02:41.683 12:30:40 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:02:41.683 12:30:40 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:02:41.683 12:30:40 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:02:41.683 12:30:40 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:02:41.683 12:30:40 -- setup/hugepages.sh@62 -- # user_nodes=()
00:02:41.683 12:30:40 -- setup/hugepages.sh@62 -- # local user_nodes
00:02:41.683 12:30:40 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:02:41.684 12:30:40 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:02:41.684 12:30:40 -- setup/hugepages.sh@67 -- # nodes_test=()
00:02:41.684 12:30:40 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:02:41.684 12:30:40 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:02:41.684 12:30:40 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:02:41.684 12:30:40 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:02:41.684 12:30:40 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:02:41.684 12:30:40 -- setup/hugepages.sh@83 -- # : 512
00:02:41.684 12:30:40 -- setup/hugepages.sh@84 -- # : 1
00:02:41.684 12:30:40 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:02:41.684 12:30:40 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:02:41.684 12:30:40 -- setup/hugepages.sh@83 -- # : 0
00:02:41.684 12:30:40 -- setup/hugepages.sh@84 -- # : 0
00:02:41.684 12:30:40 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:02:41.684 12:30:40 -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:02:41.684 12:30:40 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:02:41.684 12:30:40 -- setup/hugepages.sh@153 -- # setup output
00:02:41.684 12:30:40 -- setup/common.sh@9 -- # [[ output == output ]]
00:02:41.684 12:30:40 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:02:43.058 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:02:43.058 0000:81:00.0 (8086 0a54): Already using the vfio-pci driver
00:02:43.058 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:02:43.058 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:02:43.058 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:02:43.058 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:02:43.058 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:02:43.058 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:02:43.058 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:02:43.058 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:02:43.058 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:02:43.058 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:02:43.058 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:02:43.058 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:02:43.058 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:02:43.058 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:02:43.058 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:02:43.321 12:30:42 -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:02:43.321 12:30:42 -- setup/hugepages.sh@89 -- # local node
00:02:43.321 12:30:42 -- setup/hugepages.sh@90 -- # local sorted_t
00:02:43.321 12:30:42 -- setup/hugepages.sh@91 -- # local sorted_s
00:02:43.321 12:30:42 -- setup/hugepages.sh@92 -- # local surp
00:02:43.321 12:30:42 -- setup/hugepages.sh@93 -- # local resv
00:02:43.321 12:30:42 -- setup/hugepages.sh@94 -- # local anon
00:02:43.321 12:30:42 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:02:43.321 12:30:42 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:02:43.321 12:30:42 -- setup/common.sh@17 -- # local get=AnonHugePages
00:02:43.321 12:30:42 -- setup/common.sh@18 -- # local node=
00:02:43.321 12:30:42 -- setup/common.sh@19 -- # local var val
00:02:43.321 12:30:42 -- setup/common.sh@20 -- # local mem_f mem
00:02:43.321 12:30:42 -- setup/common.sh@23 -- # mem_f=/proc/meminfo
00:02:43.321 12:30:42 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:43.321 12:30:42 -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:43.321 12:30:42 -- setup/common.sh@28 -- # mapfile -t mem
00:02:43.321 12:30:42 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:43.321 12:30:42 -- setup/common.sh@31 -- # IFS=': '
00:02:43.321 12:30:42 -- setup/common.sh@31 -- # read -r var val _
00:02:43.321 12:30:42 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37175164 kB' 'MemAvailable: 42308716 kB' 'Buffers: 3748 kB' 'Cached: 18539024 kB' 'SwapCached: 0 kB' 'Active: 14428188 kB' 'Inactive: 4650840 kB' 'Active(anon): 13779640 kB' 'Inactive(anon): 0 kB' 'Active(file): 648548 kB' 'Inactive(file): 4650840 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 539456 kB' 'Mapped: 207668 kB' 'Shmem: 13243384 kB' 'KReclaimable: 478816 kB' 'Slab: 888852 kB' 'SReclaimable: 478816 kB' 'SUnreclaim: 410036 kB' 'KernelStack: 12928 kB' 'PageTables: 8944 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14948240 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 200040 kB' 'VmallocChunk: 0 kB' 'Percpu: 55488 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2883164 kB' 'DirectMap2M: 40028160 kB' 'DirectMap1G: 26214400 kB'
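[note: the trace above shows the shape of setup/common.sh's get_meminfo: mapfile the meminfo file into an array, strip any 'Node <n> ' prefix, then read 'var: val' pairs until the requested field matches and echo its value. A condensed, self-contained sketch of that pattern -- a reconstruction for illustration, not the verbatim SPDK source:]

  #!/usr/bin/env bash
  # Return the value of one /proc/meminfo field, mirroring the loop traced above.
  get_meminfo_sketch() {
      local get=$1 var val _ line
      local -a mem
      mapfile -t mem < /proc/meminfo          # one array element per field
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done
      return 1
  }
  get_meminfo_sketch HugePages_Total   # prints 1024 on this runner, per the snapshot above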
[... xtrace condensed: setup/common.sh@32 walks each /proc/meminfo field in turn (MemTotal ... HardwareCorrupted) until the requested AnonHugePages field matches ...]
00:02:43.322 12:30:42 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:02:43.322 12:30:42 -- setup/common.sh@33 -- # echo 0
00:02:43.322 12:30:42 -- setup/common.sh@33 -- # return 0
00:02:43.322 12:30:42 -- setup/hugepages.sh@97 -- # anon=0
00:02:43.322 12:30:42 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
[... xtrace condensed: get_meminfo locals, mapfile of /proc/meminfo and Node-prefix strip, as in the AnonHugePages call above ...]
00:02:43.322 12:30:42 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37177672 kB' 'MemAvailable: 42311224 kB' 'Buffers: 3748 kB' 'Cached: 18539028 kB' 'SwapCached: 0 kB' 'Active: 14428792 kB' 'Inactive: 4650840 kB' 'Active(anon): 13780244 kB' 'Inactive(anon): 0 kB' 'Active(file): 648548 kB' 'Inactive(file): 4650840 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 540192 kB' 'Mapped: 207716 kB' 'Shmem: 13243388 kB' 'KReclaimable: 478816 kB' 'Slab: 888876 kB' 'SReclaimable: 478816 kB' 'SUnreclaim: 410060 kB' 'KernelStack: 12992 kB' 'PageTables: 9012 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14948252 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 200008 kB' 'VmallocChunk: 0 kB' 'Percpu: 55488 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2883164 kB' 'DirectMap2M: 40028160 kB' 'DirectMap1G: 26214400 kB'
[... xtrace condensed: field-by-field scan until HugePages_Surp matches ...]
00:02:43.324 12:30:42 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:43.324 12:30:42 -- setup/common.sh@33 -- # echo 0
00:02:43.324 12:30:42 -- setup/common.sh@33 -- # return 0
00:02:43.324 12:30:42 -- setup/hugepages.sh@99 -- # surp=0
00:02:43.324 12:30:42 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
[... xtrace condensed: get_meminfo locals and mapfile of /proc/meminfo, as above ...]
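[note: each get_meminfo call condensed above rescans the whole file to pull one field (anon, surp, and the HugePages_Rsvd read that follows). An equivalent hypothetical one-liner per field, for comparison; both print 0 in this run:]

  awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo
  awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo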
00:02:43.324 12:30:42 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37177420 kB' 'MemAvailable: 42310972 kB' 'Buffers: 3748 kB' 'Cached: 18539036 kB' 'SwapCached: 0 kB' 'Active: 14428652 kB' 'Inactive: 4650840 kB' 'Active(anon): 13780104 kB' 'Inactive(anon): 0 kB' 'Active(file): 648548 kB' 'Inactive(file): 4650840 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 539984 kB' 'Mapped: 207640 kB' 'Shmem: 13243396 kB' 'KReclaimable: 478816 kB' 'Slab: 888888 kB' 'SReclaimable: 478816 kB' 'SUnreclaim: 410072 kB' 'KernelStack: 12976 kB' 'PageTables: 8960 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14948268 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 200008 kB' 'VmallocChunk: 0 kB' 'Percpu: 55488 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2883164 kB' 'DirectMap2M: 40028160 kB' 'DirectMap1G: 26214400 kB'
[... xtrace condensed: field-by-field scan until HugePages_Rsvd matches ...]
00:02:43.325 12:30:42 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:43.325 12:30:42 -- setup/common.sh@33 -- # echo 0
00:02:43.325 12:30:42 -- setup/common.sh@33 -- # return 0
00:02:43.325 12:30:42 -- setup/hugepages.sh@100 -- # resv=0
00:02:43.325 12:30:42 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:02:43.325 nr_hugepages=1024
00:02:43.325 12:30:42 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:02:43.325 resv_hugepages=0
00:02:43.325 12:30:42 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:02:43.325 surplus_hugepages=0
00:02:43.325 12:30:42 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:02:43.325 anon_hugepages=0
00:02:43.325 12:30:42 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:43.325 12:30:42 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:02:43.325 12:30:42 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
[... xtrace condensed: get_meminfo locals and mapfile of /proc/meminfo, as above ...]
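[note: the '(( 1024 == nr_hugepages + surp + resv ))' check above is the core invariant of verify_nr_hugepages: the kernel's HugePages_Total must equal the requested pages plus surplus plus reserved. A standalone sketch of the same accounting check, with hypothetical variable names:]

  nr_hugepages=1024
  total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
  surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
  resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
  if (( total == nr_hugepages + surp + resv )); then
      echo "hugepage accounting consistent"
  else
      echo "mismatch: total=$total nr=$nr_hugepages surp=$surp resv=$resv" >&2
  fi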
'PageTables: 8912 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14948280 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 200008 kB' 'VmallocChunk: 0 kB' 'Percpu: 55488 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2883164 kB' 'DirectMap2M: 40028160 kB' 'DirectMap1G: 26214400 kB' 00:02:43.325 12:30:42 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.325 12:30:42 -- setup/common.sh@32 -- # continue 00:02:43.325 12:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.325 12:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.325 12:30:42 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.325 12:30:42 -- setup/common.sh@32 -- # continue 00:02:43.325 12:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.325 12:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.325 12:30:42 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.325 12:30:42 -- setup/common.sh@32 -- # continue 00:02:43.325 12:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.325 12:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.325 12:30:42 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.325 12:30:42 -- setup/common.sh@32 -- # continue 00:02:43.325 12:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.325 12:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.325 12:30:42 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.325 12:30:42 -- setup/common.sh@32 -- # continue 00:02:43.325 12:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.325 12:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.325 12:30:42 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.325 12:30:42 -- setup/common.sh@32 -- # continue 00:02:43.325 12:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.325 12:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.325 12:30:42 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.325 12:30:42 -- setup/common.sh@32 -- # continue 00:02:43.325 12:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.325 12:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.325 12:30:42 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.325 12:30:42 -- setup/common.sh@32 -- # continue 00:02:43.325 12:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.325 12:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.325 12:30:42 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.325 12:30:42 -- setup/common.sh@32 -- # continue 00:02:43.325 12:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.325 12:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.326 12:30:42 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.326 12:30:42 -- setup/common.sh@32 -- # continue 00:02:43.326 12:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.326 12:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.326 12:30:42 -- setup/common.sh@32 -- # [[ Active(file) == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.326 12:30:42 -- setup/common.sh@32 -- # continue 00:02:43.326 12:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.326 12:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.326 12:30:42 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.326 12:30:42 -- setup/common.sh@32 -- # continue 00:02:43.326 12:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.326 12:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.326 12:30:42 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.326 12:30:42 -- setup/common.sh@32 -- # continue 00:02:43.326 12:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.326 12:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.326 12:30:42 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.326 12:30:42 -- setup/common.sh@32 -- # continue 00:02:43.326 12:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.326 12:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.326 12:30:42 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.326 12:30:42 -- setup/common.sh@32 -- # continue 00:02:43.326 12:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.326 12:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.326 12:30:42 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.326 12:30:42 -- setup/common.sh@32 -- # continue 00:02:43.326 12:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.326 12:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.326 12:30:42 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.326 12:30:42 -- setup/common.sh@32 -- # continue 00:02:43.326 12:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.326 12:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.326 12:30:42 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.326 12:30:42 -- setup/common.sh@32 -- # continue 00:02:43.326 12:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.326 12:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.326 12:30:42 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.326 12:30:42 -- setup/common.sh@32 -- # continue 00:02:43.326 12:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.326 12:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.326 12:30:42 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.326 12:30:42 -- setup/common.sh@32 -- # continue 00:02:43.326 12:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.326 12:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.326 12:30:42 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.326 12:30:42 -- setup/common.sh@32 -- # continue 00:02:43.326 12:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.326 12:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.326 12:30:42 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.326 12:30:42 -- setup/common.sh@32 -- # continue 00:02:43.326 12:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.326 12:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.326 12:30:42 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.326 12:30:42 -- setup/common.sh@32 -- # continue 00:02:43.326 12:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.326 12:30:42 -- 
setup/common.sh@31 -- # read -r var val _ 00:02:43.326 12:30:42 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.326 12:30:42 -- setup/common.sh@32 -- # continue 00:02:43.326 12:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.326 12:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.326 12:30:42 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.326 12:30:42 -- setup/common.sh@32 -- # continue 00:02:43.326 12:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.326 12:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.326 12:30:42 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.326 12:30:42 -- setup/common.sh@32 -- # continue 00:02:43.326 12:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.326 12:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.326 12:30:42 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.326 12:30:42 -- setup/common.sh@32 -- # continue 00:02:43.326 12:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.326 12:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.326 12:30:42 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.326 12:30:42 -- setup/common.sh@32 -- # continue 00:02:43.326 12:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.326 12:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.326 12:30:42 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.326 12:30:42 -- setup/common.sh@32 -- # continue 00:02:43.326 12:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.326 12:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.326 12:30:42 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.326 12:30:42 -- setup/common.sh@32 -- # continue 00:02:43.326 12:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.326 12:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.326 12:30:42 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.326 12:30:42 -- setup/common.sh@32 -- # continue 00:02:43.326 12:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.326 12:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.326 12:30:42 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.326 12:30:42 -- setup/common.sh@32 -- # continue 00:02:43.326 12:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.326 12:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.326 12:30:42 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.326 12:30:42 -- setup/common.sh@32 -- # continue 00:02:43.326 12:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.326 12:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.326 12:30:42 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.326 12:30:42 -- setup/common.sh@32 -- # continue 00:02:43.326 12:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.326 12:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.326 12:30:42 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.326 12:30:42 -- setup/common.sh@32 -- # continue 00:02:43.326 12:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.326 12:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.326 12:30:42 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
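What the long scans above and below boil down to is a single-field lookup over /proc/meminfo: split each line on ': ', skip until the key matches, and echo the value, which the caller then folds into the pool-consistency check. A compact standalone equivalent of that check — the helper name and the fixed want=1024 are illustrative, not SPDK's, and bash 4+ is assumed:

  #!/usr/bin/env bash
  # Illustrative helper (not SPDK's): read one numeric field from /proc/meminfo,
  # e.g. "HugePages_Total:    1024" -> 1024. A trailing unit like "kB" lands in _.
  meminfo_field() {
    local key=$1 var val _
    while IFS=': ' read -r var val _; do
      [[ $var == "$key" ]] && { echo "$val"; return 0; }
    done < /proc/meminfo
    return 1
  }

  want=1024                                  # pages the test configured
  total=$(meminfo_field HugePages_Total)
  resv=$(meminfo_field HugePages_Rsvd)
  surp=$(meminfo_field HugePages_Surp)

  # Same identity the trace asserts: the visible pool equals the requested
  # count plus any surplus and reserved pages (both 0 in this run).
  (( total == want + surp + resv )) && echo "pool consistent: $total pages"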
00:02:43.326 12:30:42 -- setup/common.sh@32 -- # continue 00:02:43.326 12:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.326 12:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.326 12:30:42 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.326 12:30:42 -- setup/common.sh@32 -- # continue 00:02:43.326 12:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.326 12:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.326 12:30:42 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.326 12:30:42 -- setup/common.sh@32 -- # continue 00:02:43.326 12:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.326 12:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.327 12:30:42 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.327 12:30:42 -- setup/common.sh@32 -- # continue 00:02:43.327 12:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.327 12:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.327 12:30:42 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.327 12:30:42 -- setup/common.sh@32 -- # continue 00:02:43.327 12:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.327 12:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.327 12:30:42 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.327 12:30:42 -- setup/common.sh@32 -- # continue 00:02:43.327 12:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.327 12:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.327 12:30:42 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.327 12:30:42 -- setup/common.sh@32 -- # continue 00:02:43.327 12:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.327 12:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.327 12:30:42 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.327 12:30:42 -- setup/common.sh@32 -- # continue 00:02:43.327 12:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.327 12:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.327 12:30:42 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.327 12:30:42 -- setup/common.sh@32 -- # continue 00:02:43.327 12:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.327 12:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.327 12:30:42 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.327 12:30:42 -- setup/common.sh@32 -- # continue 00:02:43.327 12:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.327 12:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.327 12:30:42 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.327 12:30:42 -- setup/common.sh@32 -- # continue 00:02:43.327 12:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.327 12:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.327 12:30:42 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.327 12:30:42 -- setup/common.sh@32 -- # continue 00:02:43.327 12:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.327 12:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.327 12:30:42 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.327 12:30:42 -- setup/common.sh@32 -- # continue 00:02:43.327 12:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.327 12:30:42 -- 
setup/common.sh@31 -- # read -r var val _ 00:02:43.327 12:30:42 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.327 12:30:42 -- setup/common.sh@33 -- # echo 1024 00:02:43.327 12:30:42 -- setup/common.sh@33 -- # return 0 00:02:43.327 12:30:42 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:43.327 12:30:42 -- setup/hugepages.sh@112 -- # get_nodes 00:02:43.327 12:30:42 -- setup/hugepages.sh@27 -- # local node 00:02:43.327 12:30:42 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:43.327 12:30:42 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:43.327 12:30:42 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:43.327 12:30:42 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:43.327 12:30:42 -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:43.327 12:30:42 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:43.327 12:30:42 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:43.327 12:30:42 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:43.327 12:30:42 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:43.327 12:30:42 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:43.327 12:30:42 -- setup/common.sh@18 -- # local node=0 00:02:43.327 12:30:42 -- setup/common.sh@19 -- # local var val 00:02:43.327 12:30:42 -- setup/common.sh@20 -- # local mem_f mem 00:02:43.327 12:30:42 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:43.327 12:30:42 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:43.327 12:30:42 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:43.327 12:30:42 -- setup/common.sh@28 -- # mapfile -t mem 00:02:43.327 12:30:42 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:43.327 12:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.327 12:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.327 12:30:42 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 17830208 kB' 'MemUsed: 15046732 kB' 'SwapCached: 0 kB' 'Active: 8680876 kB' 'Inactive: 4296372 kB' 'Active(anon): 8312228 kB' 'Inactive(anon): 0 kB' 'Active(file): 368648 kB' 'Inactive(file): 4296372 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 12688744 kB' 'Mapped: 69500 kB' 'AnonPages: 291768 kB' 'Shmem: 8023724 kB' 'KernelStack: 6936 kB' 'PageTables: 5344 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 340004 kB' 'Slab: 546800 kB' 'SReclaimable: 340004 kB' 'SUnreclaim: 206796 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:43.327 12:30:42 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.327 12:30:42 -- setup/common.sh@32 -- # continue 00:02:43.327 12:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.327 12:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.327 12:30:42 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.327 12:30:42 -- setup/common.sh@32 -- # continue 00:02:43.327 12:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.327 12:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.327 12:30:42 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
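For per-node lookups the trace switches mem_f from /proc/meminfo to /sys/devices/system/node/node0/meminfo, whose lines carry a "Node 0 " prefix; the mapfile plus extglob array expansion seen above strips that prefix before the same key scan runs. A minimal sketch of that prefix handling, assuming extglob and a function name of my own choosing:

  #!/usr/bin/env bash
  shopt -s extglob

  # Fetch one field for a NUMA node; sysfs lines look like
  # "Node 0 HugePages_Total:   512", so drop the "Node N " prefix first.
  node_meminfo_field() {
    local node=$1 key=$2 var val _ line mem
    mapfile -t mem < "/sys/devices/system/node/node${node}/meminfo"
    mem=("${mem[@]#Node +([0-9]) }")     # same idiom as the traced script
    for line in "${mem[@]}"; do
      IFS=': ' read -r var val _ <<< "$line"
      [[ $var == "$key" ]] && { echo "$val"; return 0; }
    done
    return 1
  }

  node_meminfo_field 0 HugePages_Total   # e.g. prints 512 on this rig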
]] 00:02:43.327 12:30:42 -- setup/common.sh@32 -- # continue 00:02:43.327 12:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.327 12:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.327 12:30:42 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.327 12:30:42 -- setup/common.sh@32 -- # continue 00:02:43.327 12:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.327 12:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.327 12:30:42 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.327 12:30:42 -- setup/common.sh@32 -- # continue 00:02:43.327 12:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.327 12:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.327 12:30:42 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.327 12:30:42 -- setup/common.sh@32 -- # continue 00:02:43.327 12:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.327 12:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.327 12:30:42 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.327 12:30:42 -- setup/common.sh@32 -- # continue 00:02:43.327 12:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.327 12:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.327 12:30:42 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.327 12:30:42 -- setup/common.sh@32 -- # continue 00:02:43.327 12:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.327 12:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.327 12:30:42 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.327 12:30:42 -- setup/common.sh@32 -- # continue 00:02:43.327 12:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.327 12:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.327 12:30:42 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.327 12:30:42 -- setup/common.sh@32 -- # continue 00:02:43.327 12:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.327 12:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.327 12:30:42 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.327 12:30:42 -- setup/common.sh@32 -- # continue 00:02:43.327 12:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.327 12:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.327 12:30:42 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.327 12:30:42 -- setup/common.sh@32 -- # continue 00:02:43.327 12:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.327 12:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.327 12:30:42 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.327 12:30:42 -- setup/common.sh@32 -- # continue 00:02:43.327 12:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.327 12:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.327 12:30:42 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.327 12:30:42 -- setup/common.sh@32 -- # continue 00:02:43.327 12:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.327 12:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.327 12:30:42 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.327 12:30:42 -- setup/common.sh@32 -- # continue 00:02:43.327 12:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.327 12:30:42 -- setup/common.sh@31 -- # read -r var val _ 
00:02:43.327 12:30:42 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.327 12:30:42 -- setup/common.sh@32 -- # continue 00:02:43.327 12:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.328 12:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.328 12:30:42 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.328 12:30:42 -- setup/common.sh@32 -- # continue 00:02:43.328 12:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.328 12:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.328 12:30:42 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.328 12:30:42 -- setup/common.sh@32 -- # continue 00:02:43.328 12:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.328 12:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.328 12:30:42 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.328 12:30:42 -- setup/common.sh@32 -- # continue 00:02:43.328 12:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.328 12:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.328 12:30:42 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.328 12:30:42 -- setup/common.sh@32 -- # continue 00:02:43.328 12:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.328 12:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.328 12:30:42 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.328 12:30:42 -- setup/common.sh@32 -- # continue 00:02:43.328 12:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.328 12:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.328 12:30:42 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.328 12:30:42 -- setup/common.sh@32 -- # continue 00:02:43.328 12:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.328 12:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.328 12:30:42 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.328 12:30:42 -- setup/common.sh@32 -- # continue 00:02:43.328 12:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.328 12:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.328 12:30:42 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.328 12:30:42 -- setup/common.sh@32 -- # continue 00:02:43.328 12:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.328 12:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.328 12:30:42 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.328 12:30:42 -- setup/common.sh@32 -- # continue 00:02:43.328 12:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.328 12:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.328 12:30:42 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.328 12:30:42 -- setup/common.sh@32 -- # continue 00:02:43.328 12:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.328 12:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.328 12:30:42 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.328 12:30:42 -- setup/common.sh@32 -- # continue 00:02:43.328 12:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.328 12:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.328 12:30:42 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.328 12:30:42 -- setup/common.sh@32 -- # continue 00:02:43.328 12:30:42 -- 
setup/common.sh@31 -- # IFS=': ' 00:02:43.328 12:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.328 12:30:42 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.328 12:30:42 -- setup/common.sh@32 -- # continue 00:02:43.328 12:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.328 12:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.328 12:30:42 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.328 12:30:42 -- setup/common.sh@32 -- # continue 00:02:43.328 12:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.328 12:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.328 12:30:42 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.328 12:30:42 -- setup/common.sh@32 -- # continue 00:02:43.328 12:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.328 12:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.328 12:30:42 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.328 12:30:42 -- setup/common.sh@32 -- # continue 00:02:43.328 12:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.328 12:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.328 12:30:42 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.328 12:30:42 -- setup/common.sh@32 -- # continue 00:02:43.328 12:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.328 12:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.328 12:30:42 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.328 12:30:42 -- setup/common.sh@32 -- # continue 00:02:43.328 12:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.328 12:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.328 12:30:42 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.328 12:30:42 -- setup/common.sh@32 -- # continue 00:02:43.328 12:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.328 12:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.328 12:30:42 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.328 12:30:42 -- setup/common.sh@32 -- # continue 00:02:43.328 12:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.328 12:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.328 12:30:42 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.328 12:30:42 -- setup/common.sh@33 -- # echo 0 00:02:43.328 12:30:42 -- setup/common.sh@33 -- # return 0 00:02:43.328 12:30:42 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:43.328 12:30:42 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:43.328 12:30:42 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:43.328 12:30:42 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:43.328 12:30:42 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:43.328 12:30:42 -- setup/common.sh@18 -- # local node=1 00:02:43.328 12:30:42 -- setup/common.sh@19 -- # local var val 00:02:43.328 12:30:42 -- setup/common.sh@20 -- # local mem_f mem 00:02:43.328 12:30:42 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:43.328 12:30:42 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:43.328 12:30:42 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:43.328 12:30:42 -- setup/common.sh@28 -- # mapfile -t mem 00:02:43.328 12:30:42 -- setup/common.sh@29 
-- # mem=("${mem[@]#Node +([0-9]) }") 00:02:43.328 12:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.328 12:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.328 12:30:42 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664788 kB' 'MemFree: 19353612 kB' 'MemUsed: 8311176 kB' 'SwapCached: 0 kB' 'Active: 5747768 kB' 'Inactive: 354468 kB' 'Active(anon): 5467868 kB' 'Inactive(anon): 0 kB' 'Active(file): 279900 kB' 'Inactive(file): 354468 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 5854072 kB' 'Mapped: 138140 kB' 'AnonPages: 248220 kB' 'Shmem: 5219704 kB' 'KernelStack: 6040 kB' 'PageTables: 3616 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 138812 kB' 'Slab: 342088 kB' 'SReclaimable: 138812 kB' 'SUnreclaim: 203276 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:43.328 12:30:42 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.328 12:30:42 -- setup/common.sh@32 -- # continue 00:02:43.328 12:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.328 12:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.328 12:30:42 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.328 12:30:42 -- setup/common.sh@32 -- # continue 00:02:43.328 12:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.328 12:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.328 12:30:42 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.328 12:30:42 -- setup/common.sh@32 -- # continue 00:02:43.328 12:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.328 12:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.328 12:30:42 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.328 12:30:42 -- setup/common.sh@32 -- # continue 00:02:43.328 12:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.328 12:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.328 12:30:42 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.328 12:30:42 -- setup/common.sh@32 -- # continue 00:02:43.328 12:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.328 12:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.328 12:30:42 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.328 12:30:42 -- setup/common.sh@32 -- # continue 00:02:43.328 12:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.328 12:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.328 12:30:42 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.328 12:30:42 -- setup/common.sh@32 -- # continue 00:02:43.328 12:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.328 12:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.328 12:30:42 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.328 12:30:42 -- setup/common.sh@32 -- # continue 00:02:43.328 12:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.328 12:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.328 12:30:42 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.328 12:30:42 -- setup/common.sh@32 -- # continue 00:02:43.328 12:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.328 12:30:42 -- setup/common.sh@31 -- # read -r var val _ 
00:02:43.329 12:30:42 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.329 12:30:42 -- setup/common.sh@32 -- # continue 00:02:43.329 12:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.329 12:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.329 12:30:42 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.329 12:30:42 -- setup/common.sh@32 -- # continue 00:02:43.329 12:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.329 12:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.329 12:30:42 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.329 12:30:42 -- setup/common.sh@32 -- # continue 00:02:43.329 12:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.329 12:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.329 12:30:42 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.329 12:30:42 -- setup/common.sh@32 -- # continue 00:02:43.329 12:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.329 12:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.329 12:30:42 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.329 12:30:42 -- setup/common.sh@32 -- # continue 00:02:43.329 12:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.329 12:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.329 12:30:42 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.329 12:30:42 -- setup/common.sh@32 -- # continue 00:02:43.329 12:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.329 12:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.329 12:30:42 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.329 12:30:42 -- setup/common.sh@32 -- # continue 00:02:43.329 12:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.329 12:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.329 12:30:42 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.329 12:30:42 -- setup/common.sh@32 -- # continue 00:02:43.329 12:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.329 12:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.329 12:30:42 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.329 12:30:42 -- setup/common.sh@32 -- # continue 00:02:43.329 12:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.329 12:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.329 12:30:42 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.329 12:30:42 -- setup/common.sh@32 -- # continue 00:02:43.329 12:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.329 12:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.329 12:30:42 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.329 12:30:42 -- setup/common.sh@32 -- # continue 00:02:43.329 12:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.329 12:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.329 12:30:42 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.329 12:30:42 -- setup/common.sh@32 -- # continue 00:02:43.329 12:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.329 12:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.329 12:30:42 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.329 12:30:42 -- setup/common.sh@32 -- # continue 00:02:43.329 12:30:42 -- 
setup/common.sh@31 -- # IFS=': ' 00:02:43.329 12:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.329 12:30:42 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.329 12:30:42 -- setup/common.sh@32 -- # continue 00:02:43.329 12:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.329 12:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.329 12:30:42 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.329 12:30:42 -- setup/common.sh@32 -- # continue 00:02:43.329 12:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.329 12:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.329 12:30:42 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.329 12:30:42 -- setup/common.sh@32 -- # continue 00:02:43.329 12:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.329 12:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.329 12:30:42 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.329 12:30:42 -- setup/common.sh@32 -- # continue 00:02:43.329 12:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.329 12:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.329 12:30:42 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.329 12:30:42 -- setup/common.sh@32 -- # continue 00:02:43.329 12:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.329 12:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.329 12:30:42 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.329 12:30:42 -- setup/common.sh@32 -- # continue 00:02:43.329 12:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.329 12:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.329 12:30:42 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.329 12:30:42 -- setup/common.sh@32 -- # continue 00:02:43.329 12:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.329 12:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.329 12:30:42 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.329 12:30:42 -- setup/common.sh@32 -- # continue 00:02:43.329 12:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.329 12:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.329 12:30:42 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.329 12:30:42 -- setup/common.sh@32 -- # continue 00:02:43.329 12:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.329 12:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.329 12:30:42 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.329 12:30:42 -- setup/common.sh@32 -- # continue 00:02:43.329 12:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.329 12:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.329 12:30:42 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.329 12:30:42 -- setup/common.sh@32 -- # continue 00:02:43.329 12:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.329 12:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.329 12:30:42 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.329 12:30:42 -- setup/common.sh@32 -- # continue 00:02:43.329 12:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.329 12:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.329 12:30:42 -- setup/common.sh@32 -- # [[ HugePages_Total 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.329 12:30:42 -- setup/common.sh@32 -- # continue 00:02:43.329 12:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.329 12:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.329 12:30:42 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.329 12:30:42 -- setup/common.sh@32 -- # continue 00:02:43.329 12:30:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.329 12:30:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.329 12:30:42 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.329 12:30:42 -- setup/common.sh@33 -- # echo 0 00:02:43.329 12:30:42 -- setup/common.sh@33 -- # return 0 00:02:43.329 12:30:42 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:43.329 12:30:42 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:43.329 12:30:42 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:43.329 12:30:42 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:43.329 12:30:42 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:02:43.329 node0=512 expecting 512 00:02:43.329 12:30:42 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:43.329 12:30:42 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:43.329 12:30:42 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:43.329 12:30:42 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:02:43.329 node1=512 expecting 512 00:02:43.329 12:30:42 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:02:43.329 00:02:43.329 real 0m1.696s 00:02:43.329 user 0m0.704s 00:02:43.329 sys 0m0.958s 00:02:43.329 12:30:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:02:43.329 12:30:42 -- common/autotest_common.sh@10 -- # set +x 00:02:43.329 ************************************ 00:02:43.329 END TEST even_2G_alloc 00:02:43.329 ************************************ 00:02:43.588 12:30:42 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:02:43.588 12:30:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:43.588 12:30:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:43.588 12:30:42 -- common/autotest_common.sh@10 -- # set +x 00:02:43.588 ************************************ 00:02:43.588 START TEST odd_alloc 00:02:43.588 ************************************ 00:02:43.588 12:30:42 -- common/autotest_common.sh@1111 -- # odd_alloc 00:02:43.588 12:30:42 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:02:43.588 12:30:42 -- setup/hugepages.sh@49 -- # local size=2098176 00:02:43.588 12:30:42 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:02:43.588 12:30:42 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:43.588 12:30:42 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:02:43.588 12:30:42 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:02:43.588 12:30:42 -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:43.588 12:30:42 -- setup/hugepages.sh@62 -- # local user_nodes 00:02:43.588 12:30:42 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:02:43.588 12:30:42 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:43.588 12:30:42 -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:43.588 12:30:42 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:43.588 12:30:42 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:43.588 12:30:42 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:02:43.588 12:30:42 -- setup/hugepages.sh@81 -- # (( 
_no_nodes > 0 )) 00:02:43.588 12:30:42 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:02:43.588 12:30:42 -- setup/hugepages.sh@83 -- # : 513 00:02:43.588 12:30:42 -- setup/hugepages.sh@84 -- # : 1 00:02:43.588 12:30:42 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:43.588 12:30:42 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:02:43.588 12:30:42 -- setup/hugepages.sh@83 -- # : 0 00:02:43.588 12:30:42 -- setup/hugepages.sh@84 -- # : 0 00:02:43.588 12:30:42 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:43.588 12:30:42 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:02:43.588 12:30:42 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:02:43.588 12:30:42 -- setup/hugepages.sh@160 -- # setup output 00:02:43.588 12:30:42 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:43.588 12:30:42 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:44.970 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:44.970 0000:81:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:44.970 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:44.970 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:44.970 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:44.970 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:44.970 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:44.970 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:44.970 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:44.970 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:44.970 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:44.970 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:44.970 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:44.970 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:44.970 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:44.970 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:44.970 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:44.970 12:30:43 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:02:44.970 12:30:43 -- setup/hugepages.sh@89 -- # local node 00:02:44.970 12:30:43 -- setup/hugepages.sh@90 -- # local sorted_t 00:02:44.970 12:30:43 -- setup/hugepages.sh@91 -- # local sorted_s 00:02:44.970 12:30:43 -- setup/hugepages.sh@92 -- # local surp 00:02:44.970 12:30:43 -- setup/hugepages.sh@93 -- # local resv 00:02:44.970 12:30:43 -- setup/hugepages.sh@94 -- # local anon 00:02:44.970 12:30:43 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:44.970 12:30:43 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:44.970 12:30:43 -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:44.970 12:30:43 -- setup/common.sh@18 -- # local node= 00:02:44.970 12:30:43 -- setup/common.sh@19 -- # local var val 00:02:44.970 12:30:43 -- setup/common.sh@20 -- # local mem_f mem 00:02:44.970 12:30:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:44.970 12:30:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:44.970 12:30:43 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:44.970 12:30:43 -- setup/common.sh@28 -- # mapfile -t mem 00:02:44.970 12:30:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:44.970 12:30:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.970 12:30:43 -- 
setup/common.sh@31 -- # read -r var val _ 00:02:44.970 12:30:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37169044 kB' 'MemAvailable: 42302596 kB' 'Buffers: 3748 kB' 'Cached: 18539120 kB' 'SwapCached: 0 kB' 'Active: 14425268 kB' 'Inactive: 4650840 kB' 'Active(anon): 13776720 kB' 'Inactive(anon): 0 kB' 'Active(file): 648548 kB' 'Inactive(file): 4650840 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 536544 kB' 'Mapped: 206712 kB' 'Shmem: 13243480 kB' 'KReclaimable: 478816 kB' 'Slab: 888688 kB' 'SReclaimable: 478816 kB' 'SUnreclaim: 409872 kB' 'KernelStack: 12912 kB' 'PageTables: 8568 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609868 kB' 'Committed_AS: 14935412 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199976 kB' 'VmallocChunk: 0 kB' 'Percpu: 55488 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2883164 kB' 'DirectMap2M: 40028160 kB' 'DirectMap1G: 26214400 kB' 00:02:44.970 12:30:43 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.970 12:30:43 -- setup/common.sh@32 -- # continue 00:02:44.970 12:30:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.970 12:30:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.970 12:30:43 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.970 12:30:43 -- setup/common.sh@32 -- # continue 00:02:44.970 12:30:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.970 12:30:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.970 12:30:43 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.970 12:30:43 -- setup/common.sh@32 -- # continue 00:02:44.970 12:30:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.970 12:30:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.970 12:30:43 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.970 12:30:43 -- setup/common.sh@32 -- # continue 00:02:44.970 12:30:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.970 12:30:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.970 12:30:43 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.970 12:30:43 -- setup/common.sh@32 -- # continue 00:02:44.970 12:30:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.970 12:30:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.970 12:30:43 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.970 12:30:43 -- setup/common.sh@32 -- # continue 00:02:44.970 12:30:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.970 12:30:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.970 12:30:43 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.970 12:30:43 -- setup/common.sh@32 -- # continue 00:02:44.970 12:30:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.970 12:30:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.970 12:30:43 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.970 12:30:43 -- setup/common.sh@32 -- # continue 00:02:44.970 12:30:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.970 12:30:43 -- 
setup/common.sh@31 -- # read -r var val _ 00:02:44.970 12:30:43 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.970 12:30:43 -- setup/common.sh@32 -- # continue 00:02:44.970 12:30:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.970 12:30:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.970 12:30:43 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.970 12:30:43 -- setup/common.sh@32 -- # continue 00:02:44.970 12:30:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.970 12:30:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.970 12:30:43 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.970 12:30:43 -- setup/common.sh@32 -- # continue 00:02:44.970 12:30:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.970 12:30:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.970 12:30:43 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.970 12:30:43 -- setup/common.sh@32 -- # continue 00:02:44.970 12:30:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.970 12:30:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.970 12:30:43 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.970 12:30:43 -- setup/common.sh@32 -- # continue 00:02:44.970 12:30:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.970 12:30:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.970 12:30:43 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.970 12:30:43 -- setup/common.sh@32 -- # continue 00:02:44.970 12:30:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.970 12:30:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.970 12:30:43 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.970 12:30:43 -- setup/common.sh@32 -- # continue 00:02:44.970 12:30:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.970 12:30:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.970 12:30:43 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.970 12:30:43 -- setup/common.sh@32 -- # continue 00:02:44.970 12:30:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.970 12:30:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.970 12:30:43 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.970 12:30:43 -- setup/common.sh@32 -- # continue 00:02:44.970 12:30:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.971 12:30:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.971 12:30:43 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.971 12:30:43 -- setup/common.sh@32 -- # continue 00:02:44.971 12:30:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.971 12:30:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.971 12:30:43 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.971 12:30:43 -- setup/common.sh@32 -- # continue 00:02:44.971 12:30:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.971 12:30:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.971 12:30:43 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.971 12:30:43 -- setup/common.sh@32 -- # continue 00:02:44.971 12:30:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.971 12:30:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.971 12:30:43 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.971 12:30:43 -- setup/common.sh@32 -- # continue 00:02:44.971 
12:30:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.971 12:30:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.971 12:30:43 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.971 12:30:43 -- setup/common.sh@32 -- # continue 00:02:44.971 12:30:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.971 12:30:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.971 12:30:43 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.971 12:30:43 -- setup/common.sh@32 -- # continue 00:02:44.971 12:30:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.971 12:30:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.971 12:30:43 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.971 12:30:43 -- setup/common.sh@32 -- # continue 00:02:44.971 12:30:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.971 12:30:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.971 12:30:43 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.971 12:30:43 -- setup/common.sh@32 -- # continue 00:02:44.971 12:30:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.971 12:30:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.971 12:30:43 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.971 12:30:43 -- setup/common.sh@32 -- # continue 00:02:44.971 12:30:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.971 12:30:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.971 12:30:43 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.971 12:30:43 -- setup/common.sh@32 -- # continue 00:02:44.971 12:30:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.971 12:30:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.971 12:30:43 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.971 12:30:43 -- setup/common.sh@32 -- # continue 00:02:44.971 12:30:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.971 12:30:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.971 12:30:43 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.971 12:30:43 -- setup/common.sh@32 -- # continue 00:02:44.971 12:30:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.971 12:30:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.971 12:30:43 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.971 12:30:43 -- setup/common.sh@32 -- # continue 00:02:44.971 12:30:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.971 12:30:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.971 12:30:43 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.971 12:30:43 -- setup/common.sh@32 -- # continue 00:02:44.971 12:30:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.971 12:30:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.971 12:30:43 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.971 12:30:43 -- setup/common.sh@32 -- # continue 00:02:44.971 12:30:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.971 12:30:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.971 12:30:43 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.971 12:30:43 -- setup/common.sh@32 -- # continue 00:02:44.971 12:30:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.971 12:30:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.971 12:30:43 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
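The odd_alloc setup traced earlier requests HUGEMEM=2049 (2098176 kB), which the helper turns into nr_hugepages=1025; since 1025 does not divide evenly across two nodes, the _no_nodes countdown gives the floor share (512) to the last node and the remainder (513) to node 0 — hence the 513/512 split the later checks expect. A sketch of that split; the ceil-division rounding here is my assumption about how 2098176 kB becomes 1025 pages, not SPDK's verbatim formula:

  #!/usr/bin/env bash
  size_kb=2098176          # HUGEMEM=2049 MB expressed in kB
  hugepage_kb=2048         # Hugepagesize from /proc/meminfo
  no_nodes=2

  # Assumed rounding: ceil(size / hugepage size) -> 1025 pages.
  nr_hugepages=$(( (size_kb + hugepage_kb - 1) / hugepage_kb ))

  # Walk nodes from last to first: each non-zero node takes the floor
  # share, and whatever remains lands on node 0 (513 here).
  declare -a nodes_test
  remaining=$nr_hugepages
  for (( n = no_nodes - 1; n >= 0; n-- )); do
    if (( n > 0 )); then
      nodes_test[n]=$(( nr_hugepages / no_nodes ))   # 512
    else
      nodes_test[n]=$remaining                       # 513
    fi
    (( remaining -= nodes_test[n] ))
  done
  printf 'node%d=%d\n' 0 "${nodes_test[0]}" 1 "${nodes_test[1]}"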
00:02:44.971 12:30:43 -- setup/common.sh@32 -- # continue 00:02:44.971 12:30:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.971 12:30:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.971 12:30:43 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.971 12:30:43 -- setup/common.sh@32 -- # continue 00:02:44.971 12:30:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.971 12:30:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.971 12:30:43 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.971 12:30:43 -- setup/common.sh@32 -- # continue 00:02:44.971 12:30:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.971 12:30:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.971 12:30:43 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.971 12:30:43 -- setup/common.sh@32 -- # continue 00:02:44.971 12:30:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.971 12:30:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.971 12:30:43 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.971 12:30:43 -- setup/common.sh@32 -- # continue 00:02:44.971 12:30:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.971 12:30:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.971 12:30:43 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.971 12:30:43 -- setup/common.sh@32 -- # continue 00:02:44.971 12:30:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.971 12:30:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.971 12:30:43 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.971 12:30:43 -- setup/common.sh@32 -- # continue 00:02:44.971 12:30:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.971 12:30:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.971 12:30:43 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.971 12:30:43 -- setup/common.sh@33 -- # echo 0 00:02:44.971 12:30:43 -- setup/common.sh@33 -- # return 0 00:02:44.971 12:30:43 -- setup/hugepages.sh@97 -- # anon=0 00:02:44.971 12:30:43 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:44.971 12:30:43 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:44.971 12:30:43 -- setup/common.sh@18 -- # local node= 00:02:44.971 12:30:43 -- setup/common.sh@19 -- # local var val 00:02:44.971 12:30:43 -- setup/common.sh@20 -- # local mem_f mem 00:02:44.971 12:30:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:44.971 12:30:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:44.971 12:30:43 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:44.971 12:30:43 -- setup/common.sh@28 -- # mapfile -t mem 00:02:44.971 12:30:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:44.971 12:30:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.971 12:30:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.971 12:30:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37169552 kB' 'MemAvailable: 42303104 kB' 'Buffers: 3748 kB' 'Cached: 18539124 kB' 'SwapCached: 0 kB' 'Active: 14425592 kB' 'Inactive: 4650840 kB' 'Active(anon): 13777044 kB' 'Inactive(anon): 0 kB' 'Active(file): 648548 kB' 'Inactive(file): 4650840 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 536876 kB' 'Mapped: 206684 kB' 'Shmem: 13243484 kB' 'KReclaimable: 
478816 kB' 'Slab: 888664 kB' 'SReclaimable: 478816 kB' 'SUnreclaim: 409848 kB' 'KernelStack: 12944 kB' 'PageTables: 8640 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609868 kB' 'Committed_AS: 14935424 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199944 kB' 'VmallocChunk: 0 kB' 'Percpu: 55488 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2883164 kB' 'DirectMap2M: 40028160 kB' 'DirectMap1G: 26214400 kB' 00:02:44.971 12:30:43 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.971 12:30:43 -- setup/common.sh@32 -- # continue 00:02:44.971 12:30:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.971 12:30:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.971 12:30:43 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.971 12:30:43 -- setup/common.sh@32 -- # continue 00:02:44.971 12:30:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.971 12:30:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.971 12:30:43 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.971 12:30:43 -- setup/common.sh@32 -- # continue 00:02:44.971 12:30:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.971 12:30:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.971 12:30:43 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.971 12:30:43 -- setup/common.sh@32 -- # continue 00:02:44.971 12:30:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.971 12:30:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.971 12:30:43 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.971 12:30:43 -- setup/common.sh@32 -- # continue 00:02:44.971 12:30:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.971 12:30:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.971 12:30:43 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.971 12:30:43 -- setup/common.sh@32 -- # continue 00:02:44.971 12:30:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.971 12:30:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.971 12:30:43 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.971 12:30:43 -- setup/common.sh@32 -- # continue 00:02:44.971 12:30:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.971 12:30:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.971 12:30:43 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.971 12:30:43 -- setup/common.sh@32 -- # continue 00:02:44.971 12:30:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.971 12:30:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.971 12:30:43 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.971 12:30:43 -- setup/common.sh@32 -- # continue 00:02:44.971 12:30:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.971 12:30:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.971 12:30:43 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.971 12:30:43 -- setup/common.sh@32 -- # continue 00:02:44.971 12:30:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.972 12:30:43 -- setup/common.sh@31 -- # read -r var 
val _ 00:02:44.972 12:30:43 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.972 12:30:43 -- setup/common.sh@32 -- # continue 00:02:44.972 12:30:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.972 12:30:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.972 12:30:43 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.972 12:30:43 -- setup/common.sh@32 -- # continue 00:02:44.972 12:30:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.972 12:30:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.972 12:30:43 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.972 12:30:43 -- setup/common.sh@32 -- # continue 00:02:44.972 12:30:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.972 12:30:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.972 12:30:43 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.972 12:30:43 -- setup/common.sh@32 -- # continue 00:02:44.972 12:30:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.972 12:30:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.972 12:30:43 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.972 12:30:43 -- setup/common.sh@32 -- # continue 00:02:44.972 12:30:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.972 12:30:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.972 12:30:43 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.972 12:30:43 -- setup/common.sh@32 -- # continue 00:02:44.972 12:30:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.972 12:30:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.972 12:30:43 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.972 12:30:43 -- setup/common.sh@32 -- # continue 00:02:44.972 12:30:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.972 12:30:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.972 12:30:43 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.972 12:30:43 -- setup/common.sh@32 -- # continue 00:02:44.972 12:30:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.972 12:30:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.972 12:30:43 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.972 12:30:43 -- setup/common.sh@32 -- # continue 00:02:44.972 12:30:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.972 12:30:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.972 12:30:43 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.972 12:30:43 -- setup/common.sh@32 -- # continue 00:02:44.972 12:30:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.972 12:30:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.972 12:30:43 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.972 12:30:43 -- setup/common.sh@32 -- # continue 00:02:44.972 12:30:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.972 12:30:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.972 12:30:43 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.972 12:30:43 -- setup/common.sh@32 -- # continue 00:02:44.972 12:30:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.972 12:30:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.972 12:30:43 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.972 12:30:43 -- setup/common.sh@32 -- # continue 00:02:44.972 12:30:43 -- 
setup/common.sh@31 -- # IFS=': ' 00:02:44.972 12:30:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.972 12:30:43 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.972 12:30:43 -- setup/common.sh@32 -- # continue 00:02:44.972 12:30:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.972 12:30:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.972 12:30:43 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.972 12:30:44 -- setup/common.sh@32 -- # continue 00:02:44.972 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.972 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.972 12:30:44 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.972 12:30:44 -- setup/common.sh@32 -- # continue 00:02:44.972 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.972 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.972 12:30:44 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.972 12:30:44 -- setup/common.sh@32 -- # continue 00:02:44.972 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.972 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.972 12:30:44 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.972 12:30:44 -- setup/common.sh@32 -- # continue 00:02:44.972 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.972 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.972 12:30:44 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.972 12:30:44 -- setup/common.sh@32 -- # continue 00:02:44.972 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.972 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.972 12:30:44 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.972 12:30:44 -- setup/common.sh@32 -- # continue 00:02:44.972 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.972 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.972 12:30:44 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.972 12:30:44 -- setup/common.sh@32 -- # continue 00:02:44.972 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.972 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.972 12:30:44 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.972 12:30:44 -- setup/common.sh@32 -- # continue 00:02:44.972 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.972 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.972 12:30:44 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.972 12:30:44 -- setup/common.sh@32 -- # continue 00:02:44.972 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.972 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.972 12:30:44 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.972 12:30:44 -- setup/common.sh@32 -- # continue 00:02:44.972 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.972 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.972 12:30:44 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.972 12:30:44 -- setup/common.sh@32 -- # continue 00:02:44.972 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.972 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.972 12:30:44 -- setup/common.sh@32 -- # [[ VmallocTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.972 12:30:44 -- setup/common.sh@32 -- # continue 00:02:44.972 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.972 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.972 12:30:44 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.972 12:30:44 -- setup/common.sh@32 -- # continue 00:02:44.972 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.972 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.972 12:30:44 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.972 12:30:44 -- setup/common.sh@32 -- # continue 00:02:44.972 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.972 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.972 12:30:44 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.972 12:30:44 -- setup/common.sh@32 -- # continue 00:02:44.972 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.972 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.972 12:30:44 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.972 12:30:44 -- setup/common.sh@32 -- # continue 00:02:44.972 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.972 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.972 12:30:44 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.972 12:30:44 -- setup/common.sh@32 -- # continue 00:02:44.972 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.972 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.972 12:30:44 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.972 12:30:44 -- setup/common.sh@32 -- # continue 00:02:44.972 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.972 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.972 12:30:44 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.972 12:30:44 -- setup/common.sh@32 -- # continue 00:02:44.972 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.972 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.972 12:30:44 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.972 12:30:44 -- setup/common.sh@32 -- # continue 00:02:44.972 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.972 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.972 12:30:44 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.972 12:30:44 -- setup/common.sh@32 -- # continue 00:02:44.972 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.972 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.972 12:30:44 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.972 12:30:44 -- setup/common.sh@32 -- # continue 00:02:44.972 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.972 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.972 12:30:44 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.972 12:30:44 -- setup/common.sh@32 -- # continue 00:02:44.972 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.972 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.972 12:30:44 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.972 12:30:44 -- setup/common.sh@32 -- # continue 00:02:44.972 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.972 
12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.972 12:30:44 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.972 12:30:44 -- setup/common.sh@32 -- # continue 00:02:44.972 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.972 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.972 12:30:44 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.972 12:30:44 -- setup/common.sh@32 -- # continue 00:02:44.972 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.972 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.972 12:30:44 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.972 12:30:44 -- setup/common.sh@32 -- # continue 00:02:44.972 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.972 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.973 12:30:44 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.973 12:30:44 -- setup/common.sh@33 -- # echo 0 00:02:44.973 12:30:44 -- setup/common.sh@33 -- # return 0 00:02:44.973 12:30:44 -- setup/hugepages.sh@99 -- # surp=0 00:02:44.973 12:30:44 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:44.973 12:30:44 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:44.973 12:30:44 -- setup/common.sh@18 -- # local node= 00:02:44.973 12:30:44 -- setup/common.sh@19 -- # local var val 00:02:44.973 12:30:44 -- setup/common.sh@20 -- # local mem_f mem 00:02:44.973 12:30:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:44.973 12:30:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:44.973 12:30:44 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:44.973 12:30:44 -- setup/common.sh@28 -- # mapfile -t mem 00:02:44.973 12:30:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:44.973 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.973 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.973 12:30:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37169300 kB' 'MemAvailable: 42302852 kB' 'Buffers: 3748 kB' 'Cached: 18539136 kB' 'SwapCached: 0 kB' 'Active: 14425488 kB' 'Inactive: 4650840 kB' 'Active(anon): 13776940 kB' 'Inactive(anon): 0 kB' 'Active(file): 648548 kB' 'Inactive(file): 4650840 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 536708 kB' 'Mapped: 206608 kB' 'Shmem: 13243496 kB' 'KReclaimable: 478816 kB' 'Slab: 888672 kB' 'SReclaimable: 478816 kB' 'SUnreclaim: 409856 kB' 'KernelStack: 12960 kB' 'PageTables: 8636 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609868 kB' 'Committed_AS: 14935440 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199960 kB' 'VmallocChunk: 0 kB' 'Percpu: 55488 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2883164 kB' 'DirectMap2M: 40028160 kB' 'DirectMap1G: 26214400 kB' 00:02:44.973 12:30:44 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.973 12:30:44 -- setup/common.sh@32 -- # continue 00:02:44.973 12:30:44 -- 
setup/common.sh@31 -- # IFS=': ' 00:02:44.973 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.973 12:30:44 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.973 12:30:44 -- setup/common.sh@32 -- # continue 00:02:44.973 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.973 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.973 12:30:44 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.973 12:30:44 -- setup/common.sh@32 -- # continue 00:02:44.973 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.973 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.973 12:30:44 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.973 12:30:44 -- setup/common.sh@32 -- # continue 00:02:44.973 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.973 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.973 12:30:44 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.973 12:30:44 -- setup/common.sh@32 -- # continue 00:02:44.973 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.973 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.973 12:30:44 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.973 12:30:44 -- setup/common.sh@32 -- # continue 00:02:44.973 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.973 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.973 12:30:44 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.973 12:30:44 -- setup/common.sh@32 -- # continue 00:02:44.973 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.973 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.973 12:30:44 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.973 12:30:44 -- setup/common.sh@32 -- # continue 00:02:44.973 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.973 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.973 12:30:44 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.973 12:30:44 -- setup/common.sh@32 -- # continue 00:02:44.973 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.973 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.973 12:30:44 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.973 12:30:44 -- setup/common.sh@32 -- # continue 00:02:44.973 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.973 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.973 12:30:44 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.973 12:30:44 -- setup/common.sh@32 -- # continue 00:02:44.973 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.973 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.973 12:30:44 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.973 12:30:44 -- setup/common.sh@32 -- # continue 00:02:44.973 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.973 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.973 12:30:44 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.973 12:30:44 -- setup/common.sh@32 -- # continue 00:02:44.973 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.973 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.973 12:30:44 -- setup/common.sh@32 -- # [[ Mlocked == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.973 12:30:44 -- setup/common.sh@32 -- # continue 00:02:44.973 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.973 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.973 12:30:44 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.973 12:30:44 -- setup/common.sh@32 -- # continue 00:02:44.973 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.973 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.973 12:30:44 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.973 12:30:44 -- setup/common.sh@32 -- # continue 00:02:44.973 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.973 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.973 12:30:44 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.973 12:30:44 -- setup/common.sh@32 -- # continue 00:02:44.973 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.973 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.973 12:30:44 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.973 12:30:44 -- setup/common.sh@32 -- # continue 00:02:44.973 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.973 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.973 12:30:44 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.973 12:30:44 -- setup/common.sh@32 -- # continue 00:02:44.973 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.973 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.973 12:30:44 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.973 12:30:44 -- setup/common.sh@32 -- # continue 00:02:44.973 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.973 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.973 12:30:44 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.973 12:30:44 -- setup/common.sh@32 -- # continue 00:02:44.973 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.973 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.973 12:30:44 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.973 12:30:44 -- setup/common.sh@32 -- # continue 00:02:44.973 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.973 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.973 12:30:44 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.973 12:30:44 -- setup/common.sh@32 -- # continue 00:02:44.973 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.973 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.973 12:30:44 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.973 12:30:44 -- setup/common.sh@32 -- # continue 00:02:44.973 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.973 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.973 12:30:44 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.973 12:30:44 -- setup/common.sh@32 -- # continue 00:02:44.973 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.973 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.973 12:30:44 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.973 12:30:44 -- setup/common.sh@32 -- # continue 00:02:44.973 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.973 12:30:44 -- setup/common.sh@31 -- # read -r var 
val _ 00:02:44.973 12:30:44 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.973 12:30:44 -- setup/common.sh@32 -- # continue 00:02:44.973 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.973 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.973 12:30:44 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.973 12:30:44 -- setup/common.sh@32 -- # continue 00:02:44.973 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.973 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.973 12:30:44 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.973 12:30:44 -- setup/common.sh@32 -- # continue 00:02:44.973 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.973 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.973 12:30:44 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.973 12:30:44 -- setup/common.sh@32 -- # continue 00:02:44.973 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.973 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.973 12:30:44 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.973 12:30:44 -- setup/common.sh@32 -- # continue 00:02:44.973 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.973 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.973 12:30:44 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.973 12:30:44 -- setup/common.sh@32 -- # continue 00:02:44.973 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.973 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.973 12:30:44 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.973 12:30:44 -- setup/common.sh@32 -- # continue 00:02:44.973 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.973 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.974 12:30:44 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.974 12:30:44 -- setup/common.sh@32 -- # continue 00:02:44.974 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.974 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.974 12:30:44 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.974 12:30:44 -- setup/common.sh@32 -- # continue 00:02:44.974 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.974 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.974 12:30:44 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.974 12:30:44 -- setup/common.sh@32 -- # continue 00:02:44.974 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.974 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.974 12:30:44 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.974 12:30:44 -- setup/common.sh@32 -- # continue 00:02:44.974 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.974 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.974 12:30:44 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.974 12:30:44 -- setup/common.sh@32 -- # continue 00:02:44.974 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.974 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.974 12:30:44 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.974 12:30:44 -- setup/common.sh@32 -- # continue 00:02:44.974 
12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.974 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.974 12:30:44 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.974 12:30:44 -- setup/common.sh@32 -- # continue 00:02:44.974 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.974 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.974 12:30:44 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.974 12:30:44 -- setup/common.sh@32 -- # continue 00:02:44.974 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.974 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.974 12:30:44 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.974 12:30:44 -- setup/common.sh@32 -- # continue 00:02:44.974 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.974 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.974 12:30:44 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.974 12:30:44 -- setup/common.sh@32 -- # continue 00:02:44.974 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.974 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.974 12:30:44 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.974 12:30:44 -- setup/common.sh@32 -- # continue 00:02:44.974 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.974 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.974 12:30:44 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.974 12:30:44 -- setup/common.sh@32 -- # continue 00:02:44.974 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.974 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.974 12:30:44 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.974 12:30:44 -- setup/common.sh@32 -- # continue 00:02:44.974 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.974 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.974 12:30:44 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.974 12:30:44 -- setup/common.sh@32 -- # continue 00:02:44.974 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.235 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.235 12:30:44 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.235 12:30:44 -- setup/common.sh@32 -- # continue 00:02:45.235 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.235 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.235 12:30:44 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.235 12:30:44 -- setup/common.sh@32 -- # continue 00:02:45.235 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.235 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.235 12:30:44 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.235 12:30:44 -- setup/common.sh@32 -- # continue 00:02:45.235 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.235 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.235 12:30:44 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.235 12:30:44 -- setup/common.sh@33 -- # echo 0 00:02:45.235 12:30:44 -- setup/common.sh@33 -- # return 0 00:02:45.235 12:30:44 -- setup/hugepages.sh@100 -- # resv=0 00:02:45.235 12:30:44 -- setup/hugepages.sh@102 
-- # echo nr_hugepages=1025 00:02:45.235 nr_hugepages=1025 00:02:45.235 12:30:44 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:45.235 resv_hugepages=0 00:02:45.235 12:30:44 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:45.235 surplus_hugepages=0 00:02:45.235 12:30:44 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:45.235 anon_hugepages=0 00:02:45.235 12:30:44 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:02:45.235 12:30:44 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:02:45.235 12:30:44 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:45.235 12:30:44 -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:45.235 12:30:44 -- setup/common.sh@18 -- # local node= 00:02:45.235 12:30:44 -- setup/common.sh@19 -- # local var val 00:02:45.235 12:30:44 -- setup/common.sh@20 -- # local mem_f mem 00:02:45.235 12:30:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:45.235 12:30:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:45.235 12:30:44 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:45.235 12:30:44 -- setup/common.sh@28 -- # mapfile -t mem 00:02:45.235 12:30:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:45.235 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.235 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.235 12:30:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37170348 kB' 'MemAvailable: 42303900 kB' 'Buffers: 3748 kB' 'Cached: 18539148 kB' 'SwapCached: 0 kB' 'Active: 14425460 kB' 'Inactive: 4650840 kB' 'Active(anon): 13776912 kB' 'Inactive(anon): 0 kB' 'Active(file): 648548 kB' 'Inactive(file): 4650840 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 536700 kB' 'Mapped: 206608 kB' 'Shmem: 13243508 kB' 'KReclaimable: 478816 kB' 'Slab: 888672 kB' 'SReclaimable: 478816 kB' 'SUnreclaim: 409856 kB' 'KernelStack: 12960 kB' 'PageTables: 8636 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609868 kB' 'Committed_AS: 14935452 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199960 kB' 'VmallocChunk: 0 kB' 'Percpu: 55488 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2883164 kB' 'DirectMap2M: 40028160 kB' 'DirectMap1G: 26214400 kB' 00:02:45.235 12:30:44 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.235 12:30:44 -- setup/common.sh@32 -- # continue 00:02:45.235 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.235 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.235 12:30:44 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.235 12:30:44 -- setup/common.sh@32 -- # continue 00:02:45.235 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.235 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.236 12:30:44 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.236 12:30:44 -- setup/common.sh@32 -- # continue 00:02:45.236 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.236 12:30:44 -- setup/common.sh@31 
-- # read -r var val _ 00:02:45.236 12:30:44 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.236 12:30:44 -- setup/common.sh@32 -- # continue 00:02:45.236 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.236 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.236 12:30:44 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.236 12:30:44 -- setup/common.sh@32 -- # continue 00:02:45.236 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.236 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.236 12:30:44 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.236 12:30:44 -- setup/common.sh@32 -- # continue 00:02:45.236 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.236 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.236 12:30:44 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.236 12:30:44 -- setup/common.sh@32 -- # continue 00:02:45.236 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.236 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.236 12:30:44 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.236 12:30:44 -- setup/common.sh@32 -- # continue 00:02:45.236 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.236 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.236 12:30:44 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.236 12:30:44 -- setup/common.sh@32 -- # continue 00:02:45.236 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.236 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.236 12:30:44 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.236 12:30:44 -- setup/common.sh@32 -- # continue 00:02:45.236 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.236 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.236 12:30:44 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.236 12:30:44 -- setup/common.sh@32 -- # continue 00:02:45.236 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.236 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.236 12:30:44 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.236 12:30:44 -- setup/common.sh@32 -- # continue 00:02:45.236 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.236 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.236 12:30:44 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.236 12:30:44 -- setup/common.sh@32 -- # continue 00:02:45.236 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.236 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.236 12:30:44 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.236 12:30:44 -- setup/common.sh@32 -- # continue 00:02:45.236 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.236 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.236 12:30:44 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.236 12:30:44 -- setup/common.sh@32 -- # continue 00:02:45.236 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.236 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.236 12:30:44 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.236 12:30:44 -- 
setup/common.sh@32 -- # continue 00:02:45.236 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.236 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.236 12:30:44 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.236 12:30:44 -- setup/common.sh@32 -- # continue 00:02:45.236 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.236 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.236 12:30:44 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.236 12:30:44 -- setup/common.sh@32 -- # continue 00:02:45.236 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.236 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.236 12:30:44 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.236 12:30:44 -- setup/common.sh@32 -- # continue 00:02:45.236 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.236 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.236 12:30:44 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.236 12:30:44 -- setup/common.sh@32 -- # continue 00:02:45.236 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.236 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.236 12:30:44 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.236 12:30:44 -- setup/common.sh@32 -- # continue 00:02:45.236 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.236 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.236 12:30:44 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.236 12:30:44 -- setup/common.sh@32 -- # continue 00:02:45.236 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.236 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.236 12:30:44 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.236 12:30:44 -- setup/common.sh@32 -- # continue 00:02:45.236 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.236 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.236 12:30:44 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.236 12:30:44 -- setup/common.sh@32 -- # continue 00:02:45.236 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.236 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.236 12:30:44 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.236 12:30:44 -- setup/common.sh@32 -- # continue 00:02:45.236 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.236 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.236 12:30:44 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.236 12:30:44 -- setup/common.sh@32 -- # continue 00:02:45.236 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.236 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.236 12:30:44 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.236 12:30:44 -- setup/common.sh@32 -- # continue 00:02:45.236 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.236 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.236 12:30:44 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.236 12:30:44 -- setup/common.sh@32 -- # continue 00:02:45.236 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.236 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.236 12:30:44 -- 
setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.236 12:30:44 -- setup/common.sh@32 -- # continue 00:02:45.236 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.236 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.236 12:30:44 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.236 12:30:44 -- setup/common.sh@32 -- # continue 00:02:45.236 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.236 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.236 12:30:44 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.236 12:30:44 -- setup/common.sh@32 -- # continue 00:02:45.236 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.236 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.236 12:30:44 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.236 12:30:44 -- setup/common.sh@32 -- # continue 00:02:45.236 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.236 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.236 12:30:44 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.236 12:30:44 -- setup/common.sh@32 -- # continue 00:02:45.236 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.236 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.236 12:30:44 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.236 12:30:44 -- setup/common.sh@32 -- # continue 00:02:45.236 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.236 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.236 12:30:44 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.236 12:30:44 -- setup/common.sh@32 -- # continue 00:02:45.236 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.236 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.236 12:30:44 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.236 12:30:44 -- setup/common.sh@32 -- # continue 00:02:45.236 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.236 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.236 12:30:44 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.236 12:30:44 -- setup/common.sh@32 -- # continue 00:02:45.236 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.236 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.236 12:30:44 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.236 12:30:44 -- setup/common.sh@32 -- # continue 00:02:45.236 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.236 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.236 12:30:44 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.236 12:30:44 -- setup/common.sh@32 -- # continue 00:02:45.236 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.236 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.236 12:30:44 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.236 12:30:44 -- setup/common.sh@32 -- # continue 00:02:45.236 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.236 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.236 12:30:44 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.236 12:30:44 -- setup/common.sh@32 -- # continue 
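
For reference, the loop traced at setup/common.sh@31-33 above and below is a plain key scan: each meminfo line is split with IFS=': ' into a key and a value, every non-matching key falls through to continue, and the first matching key echoes its value and returns. A minimal self-contained sketch of that pattern, assuming the usual "Key: value kB" layout of /proc/meminfo (get_meminfo_sketch is a hypothetical name for illustration, not the helper from setup/common.sh):

  # Sketch only: scan /proc/meminfo and print the value for one key.
  get_meminfo_sketch() {
      local get=$1 var val _
      while IFS=': ' read -r var val _; do
          # e.g. "HugePages_Total:    1025" -> var=HugePages_Total, val=1025
          [[ $var == "$get" ]] && echo "$val" && return 0
      done < /proc/meminfo
      return 1
  }

On the machine above, get_meminfo_sketch HugePages_Total would print 1025, the figure that hugepages.sh@110 reconciles against nr_hugepages + surp + resv.
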
00:02:45.236 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.236 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.236 12:30:44 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.236 12:30:44 -- setup/common.sh@32 -- # continue 00:02:45.236 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.236 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.236 12:30:44 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.236 12:30:44 -- setup/common.sh@32 -- # continue 00:02:45.236 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.236 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.236 12:30:44 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.236 12:30:44 -- setup/common.sh@32 -- # continue 00:02:45.237 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.237 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.237 12:30:44 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.237 12:30:44 -- setup/common.sh@32 -- # continue 00:02:45.237 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.237 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.237 12:30:44 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.237 12:30:44 -- setup/common.sh@32 -- # continue 00:02:45.237 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.237 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.237 12:30:44 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.237 12:30:44 -- setup/common.sh@32 -- # continue 00:02:45.237 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.237 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.237 12:30:44 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.237 12:30:44 -- setup/common.sh@32 -- # continue 00:02:45.237 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.237 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.237 12:30:44 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.237 12:30:44 -- setup/common.sh@33 -- # echo 1025 00:02:45.237 12:30:44 -- setup/common.sh@33 -- # return 0 00:02:45.237 12:30:44 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:02:45.237 12:30:44 -- setup/hugepages.sh@112 -- # get_nodes 00:02:45.237 12:30:44 -- setup/hugepages.sh@27 -- # local node 00:02:45.237 12:30:44 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:45.237 12:30:44 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:45.237 12:30:44 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:45.237 12:30:44 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:02:45.237 12:30:44 -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:45.237 12:30:44 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:45.237 12:30:44 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:45.237 12:30:44 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:45.237 12:30:44 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:45.237 12:30:44 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:45.237 12:30:44 -- setup/common.sh@18 -- # local node=0 00:02:45.237 12:30:44 -- setup/common.sh@19 -- # local var val 00:02:45.237 12:30:44 -- setup/common.sh@20 
-- # local mem_f mem 00:02:45.237 12:30:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:45.237 12:30:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:45.237 12:30:44 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:45.237 12:30:44 -- setup/common.sh@28 -- # mapfile -t mem 00:02:45.237 12:30:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:45.237 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.237 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.237 12:30:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 17815292 kB' 'MemUsed: 15061648 kB' 'SwapCached: 0 kB' 'Active: 8678252 kB' 'Inactive: 4296372 kB' 'Active(anon): 8309604 kB' 'Inactive(anon): 0 kB' 'Active(file): 368648 kB' 'Inactive(file): 4296372 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 12688768 kB' 'Mapped: 68540 kB' 'AnonPages: 289072 kB' 'Shmem: 8023748 kB' 'KernelStack: 6904 kB' 'PageTables: 5040 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 340004 kB' 'Slab: 546624 kB' 'SReclaimable: 340004 kB' 'SUnreclaim: 206620 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:45.237 12:30:44 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.237 12:30:44 -- setup/common.sh@32 -- # continue 00:02:45.237 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.237 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.237 12:30:44 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.237 12:30:44 -- setup/common.sh@32 -- # continue 00:02:45.237 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.237 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.237 12:30:44 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.237 12:30:44 -- setup/common.sh@32 -- # continue 00:02:45.237 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.237 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.237 12:30:44 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.237 12:30:44 -- setup/common.sh@32 -- # continue 00:02:45.237 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.237 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.237 12:30:44 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.237 12:30:44 -- setup/common.sh@32 -- # continue 00:02:45.237 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.237 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.237 12:30:44 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.237 12:30:44 -- setup/common.sh@32 -- # continue 00:02:45.237 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.237 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.237 12:30:44 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.237 12:30:44 -- setup/common.sh@32 -- # continue 00:02:45.237 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.237 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.237 12:30:44 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.237 12:30:44 -- setup/common.sh@32 -- # continue 00:02:45.237 
12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.237 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.237 12:30:44 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.237 12:30:44 -- setup/common.sh@32 -- # continue 00:02:45.237 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.237 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.237 12:30:44 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.237 12:30:44 -- setup/common.sh@32 -- # continue 00:02:45.237 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.237 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.237 12:30:44 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.237 12:30:44 -- setup/common.sh@32 -- # continue 00:02:45.237 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.237 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.237 12:30:44 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.237 12:30:44 -- setup/common.sh@32 -- # continue 00:02:45.237 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.237 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.237 12:30:44 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.237 12:30:44 -- setup/common.sh@32 -- # continue 00:02:45.237 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.237 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.237 12:30:44 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.237 12:30:44 -- setup/common.sh@32 -- # continue 00:02:45.237 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.237 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.237 12:30:44 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.237 12:30:44 -- setup/common.sh@32 -- # continue 00:02:45.237 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.237 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.237 12:30:44 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.237 12:30:44 -- setup/common.sh@32 -- # continue 00:02:45.237 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.237 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.237 12:30:44 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.237 12:30:44 -- setup/common.sh@32 -- # continue 00:02:45.237 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.237 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.237 12:30:44 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.237 12:30:44 -- setup/common.sh@32 -- # continue 00:02:45.237 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.237 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.237 12:30:44 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.237 12:30:44 -- setup/common.sh@32 -- # continue 00:02:45.237 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.237 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.237 12:30:44 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.237 12:30:44 -- setup/common.sh@32 -- # continue 00:02:45.237 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.237 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.237 12:30:44 -- setup/common.sh@32 -- # [[ SecPageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.237 12:30:44 -- setup/common.sh@32 -- # continue 00:02:45.237 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.237 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.237 12:30:44 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.237 12:30:44 -- setup/common.sh@32 -- # continue 00:02:45.237 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.237 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.237 12:30:44 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.237 12:30:44 -- setup/common.sh@32 -- # continue 00:02:45.237 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.237 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.237 12:30:44 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.237 12:30:44 -- setup/common.sh@32 -- # continue 00:02:45.237 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.237 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.237 12:30:44 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.237 12:30:44 -- setup/common.sh@32 -- # continue 00:02:45.237 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.237 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.237 12:30:44 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.237 12:30:44 -- setup/common.sh@32 -- # continue 00:02:45.237 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.237 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.237 12:30:44 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.237 12:30:44 -- setup/common.sh@32 -- # continue 00:02:45.237 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.237 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.237 12:30:44 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.238 12:30:44 -- setup/common.sh@32 -- # continue 00:02:45.238 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.238 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.238 12:30:44 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.238 12:30:44 -- setup/common.sh@32 -- # continue 00:02:45.238 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.238 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.238 12:30:44 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.238 12:30:44 -- setup/common.sh@32 -- # continue 00:02:45.238 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.238 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.238 12:30:44 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.238 12:30:44 -- setup/common.sh@32 -- # continue 00:02:45.238 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.238 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.238 12:30:44 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.238 12:30:44 -- setup/common.sh@32 -- # continue 00:02:45.238 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.238 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.238 12:30:44 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.238 12:30:44 -- setup/common.sh@32 -- # continue 00:02:45.238 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.238 
12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.238 12:30:44 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.238 12:30:44 -- setup/common.sh@32 -- # continue 00:02:45.238 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.238 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.238 12:30:44 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.238 12:30:44 -- setup/common.sh@32 -- # continue 00:02:45.238 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.238 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.238 12:30:44 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.238 12:30:44 -- setup/common.sh@32 -- # continue 00:02:45.238 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.238 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.238 12:30:44 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.238 12:30:44 -- setup/common.sh@33 -- # echo 0 00:02:45.238 12:30:44 -- setup/common.sh@33 -- # return 0 00:02:45.238 12:30:44 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:45.238 12:30:44 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:45.238 12:30:44 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:45.238 12:30:44 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:45.238 12:30:44 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:45.238 12:30:44 -- setup/common.sh@18 -- # local node=1 00:02:45.238 12:30:44 -- setup/common.sh@19 -- # local var val 00:02:45.238 12:30:44 -- setup/common.sh@20 -- # local mem_f mem 00:02:45.238 12:30:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:45.238 12:30:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:45.238 12:30:44 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:45.238 12:30:44 -- setup/common.sh@28 -- # mapfile -t mem 00:02:45.238 12:30:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:45.238 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.238 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.238 12:30:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664788 kB' 'MemFree: 19355056 kB' 'MemUsed: 8309732 kB' 'SwapCached: 0 kB' 'Active: 5747180 kB' 'Inactive: 354468 kB' 'Active(anon): 5467280 kB' 'Inactive(anon): 0 kB' 'Active(file): 279900 kB' 'Inactive(file): 354468 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 5854156 kB' 'Mapped: 138068 kB' 'AnonPages: 247584 kB' 'Shmem: 5219788 kB' 'KernelStack: 6040 kB' 'PageTables: 3548 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 138812 kB' 'Slab: 342048 kB' 'SReclaimable: 138812 kB' 'SUnreclaim: 203236 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:02:45.238 12:30:44 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.238 12:30:44 -- setup/common.sh@32 -- # continue 00:02:45.238 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.238 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.238 12:30:44 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.238 12:30:44 -- setup/common.sh@32 -- # continue 
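
The node-qualified calls (get_meminfo HugePages_Surp 0, then 1) run the same scan over a different file: with node set, mem_f becomes /sys/devices/system/node/node<N>/meminfo, and the "Node <N> " prefix on each of its lines is stripped first (the mem=("${mem[@]#Node +([0-9]) }") step at common.sh@29). A hedged sketch of that per-node variant, again under a hypothetical name; extglob is needed for the +([0-9]) pattern, as in common.sh itself:

  shopt -s extglob
  # Sketch only: per-node meminfo lookup, e.g. HugePages_Surp for node 0.
  get_node_meminfo_sketch() {
      local get=$1 node=$2 line var val _
      local mem_f=/sys/devices/system/node/node${node}/meminfo
      while read -r line; do
          # "Node 0 HugePages_Surp: 0" -> "HugePages_Surp: 0"
          line=${line#Node +([0-9]) }
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] && echo "$val" && return 0
      done < "$mem_f"
      return 1
  }
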
00:02:45.238 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.238 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.238 12:30:44 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.238 12:30:44 -- setup/common.sh@32 -- # continue 00:02:45.238 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.238 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.238 12:30:44 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.238 12:30:44 -- setup/common.sh@32 -- # continue 00:02:45.238 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.238 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.238 12:30:44 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.238 12:30:44 -- setup/common.sh@32 -- # continue 00:02:45.238 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.238 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.238 12:30:44 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.238 12:30:44 -- setup/common.sh@32 -- # continue 00:02:45.238 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.238 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.238 12:30:44 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.238 12:30:44 -- setup/common.sh@32 -- # continue 00:02:45.238 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.238 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.238 12:30:44 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.238 12:30:44 -- setup/common.sh@32 -- # continue 00:02:45.238 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.238 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.238 12:30:44 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.238 12:30:44 -- setup/common.sh@32 -- # continue 00:02:45.238 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.238 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.238 12:30:44 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.238 12:30:44 -- setup/common.sh@32 -- # continue 00:02:45.238 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.238 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.238 12:30:44 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.238 12:30:44 -- setup/common.sh@32 -- # continue 00:02:45.238 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.238 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.238 12:30:44 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.238 12:30:44 -- setup/common.sh@32 -- # continue 00:02:45.238 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.238 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.238 12:30:44 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.238 12:30:44 -- setup/common.sh@32 -- # continue 00:02:45.238 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.238 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.238 12:30:44 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.238 12:30:44 -- setup/common.sh@32 -- # continue 00:02:45.238 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.238 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.238 12:30:44 -- setup/common.sh@32 -- # [[ FilePages 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.238 12:30:44 -- setup/common.sh@32 -- # continue 00:02:45.238 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.238 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.238 12:30:44 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.238 12:30:44 -- setup/common.sh@32 -- # continue 00:02:45.238 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.238 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.238 12:30:44 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.238 12:30:44 -- setup/common.sh@32 -- # continue 00:02:45.238 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.238 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.238 12:30:44 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.238 12:30:44 -- setup/common.sh@32 -- # continue 00:02:45.238 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.238 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.238 12:30:44 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.238 12:30:44 -- setup/common.sh@32 -- # continue 00:02:45.238 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.238 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.238 12:30:44 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.238 12:30:44 -- setup/common.sh@32 -- # continue 00:02:45.238 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.238 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.238 12:30:44 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.238 12:30:44 -- setup/common.sh@32 -- # continue 00:02:45.238 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.238 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.238 12:30:44 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.238 12:30:44 -- setup/common.sh@32 -- # continue 00:02:45.238 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.238 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.238 12:30:44 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.238 12:30:44 -- setup/common.sh@32 -- # continue 00:02:45.238 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.238 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.238 12:30:44 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.238 12:30:44 -- setup/common.sh@32 -- # continue 00:02:45.238 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.238 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.238 12:30:44 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.238 12:30:44 -- setup/common.sh@32 -- # continue 00:02:45.238 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.238 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.238 12:30:44 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.239 12:30:44 -- setup/common.sh@32 -- # continue 00:02:45.239 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.239 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.239 12:30:44 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.239 12:30:44 -- setup/common.sh@32 -- # continue 00:02:45.239 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.239 12:30:44 -- 
setup/common.sh@31 -- # read -r var val _ 00:02:45.239 12:30:44 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.239 12:30:44 -- setup/common.sh@32 -- # continue 00:02:45.239 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.239 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.239 12:30:44 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.239 12:30:44 -- setup/common.sh@32 -- # continue 00:02:45.239 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.239 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.239 12:30:44 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.239 12:30:44 -- setup/common.sh@32 -- # continue 00:02:45.239 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.239 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.239 12:30:44 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.239 12:30:44 -- setup/common.sh@32 -- # continue 00:02:45.239 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.239 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.239 12:30:44 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.239 12:30:44 -- setup/common.sh@32 -- # continue 00:02:45.239 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.239 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.239 12:30:44 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.239 12:30:44 -- setup/common.sh@32 -- # continue 00:02:45.239 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.239 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.239 12:30:44 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.239 12:30:44 -- setup/common.sh@32 -- # continue 00:02:45.239 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.239 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.239 12:30:44 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.239 12:30:44 -- setup/common.sh@32 -- # continue 00:02:45.239 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.239 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.239 12:30:44 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.239 12:30:44 -- setup/common.sh@32 -- # continue 00:02:45.239 12:30:44 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.239 12:30:44 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.239 12:30:44 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.239 12:30:44 -- setup/common.sh@33 -- # echo 0 00:02:45.239 12:30:44 -- setup/common.sh@33 -- # return 0 00:02:45.239 12:30:44 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:45.239 12:30:44 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:45.239 12:30:44 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:45.239 12:30:44 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:45.239 12:30:44 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:02:45.239 node0=512 expecting 513 00:02:45.239 12:30:44 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:45.239 12:30:44 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:45.239 12:30:44 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:45.239 12:30:44 -- 
setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:02:45.239 node1=513 expecting 512 00:02:45.239 12:30:44 -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:02:45.239 00:02:45.239 real 0m1.624s 00:02:45.239 user 0m0.684s 00:02:45.239 sys 0m0.905s 00:02:45.239 12:30:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:02:45.239 12:30:44 -- common/autotest_common.sh@10 -- # set +x 00:02:45.239 ************************************ 00:02:45.239 END TEST odd_alloc 00:02:45.239 ************************************ 00:02:45.239 12:30:44 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:02:45.239 12:30:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:45.239 12:30:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:45.239 12:30:44 -- common/autotest_common.sh@10 -- # set +x 00:02:45.239 ************************************ 00:02:45.239 START TEST custom_alloc 00:02:45.239 ************************************ 00:02:45.239 12:30:44 -- common/autotest_common.sh@1111 -- # custom_alloc 00:02:45.239 12:30:44 -- setup/hugepages.sh@167 -- # local IFS=, 00:02:45.239 12:30:44 -- setup/hugepages.sh@169 -- # local node 00:02:45.239 12:30:44 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:02:45.239 12:30:44 -- setup/hugepages.sh@170 -- # local nodes_hp 00:02:45.239 12:30:44 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:02:45.239 12:30:44 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:02:45.239 12:30:44 -- setup/hugepages.sh@49 -- # local size=1048576 00:02:45.239 12:30:44 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:02:45.239 12:30:44 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:45.239 12:30:44 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:02:45.239 12:30:44 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:02:45.239 12:30:44 -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:45.239 12:30:44 -- setup/hugepages.sh@62 -- # local user_nodes 00:02:45.239 12:30:44 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:02:45.239 12:30:44 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:45.239 12:30:44 -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:45.239 12:30:44 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:45.239 12:30:44 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:45.239 12:30:44 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:02:45.239 12:30:44 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:45.239 12:30:44 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:02:45.239 12:30:44 -- setup/hugepages.sh@83 -- # : 256 00:02:45.239 12:30:44 -- setup/hugepages.sh@84 -- # : 1 00:02:45.239 12:30:44 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:45.239 12:30:44 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:02:45.239 12:30:44 -- setup/hugepages.sh@83 -- # : 0 00:02:45.239 12:30:44 -- setup/hugepages.sh@84 -- # : 0 00:02:45.239 12:30:44 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:45.239 12:30:44 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:02:45.239 12:30:44 -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:02:45.239 12:30:44 -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:02:45.239 12:30:44 -- setup/hugepages.sh@49 -- # local size=2097152 00:02:45.239 12:30:44 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:02:45.239 12:30:44 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:45.239 12:30:44 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:45.239 12:30:44 
-- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:02:45.239 12:30:44 -- setup/hugepages.sh@62 -- # user_nodes=()
00:02:45.239 12:30:44 -- setup/hugepages.sh@62 -- # local user_nodes
00:02:45.239 12:30:44 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:02:45.239 12:30:44 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:02:45.239 12:30:44 -- setup/hugepages.sh@67 -- # nodes_test=()
00:02:45.239 12:30:44 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:02:45.239 12:30:44 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:02:45.239 12:30:44 -- setup/hugepages.sh@74 -- # (( 1 > 0 ))
00:02:45.239 12:30:44 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:02:45.239 12:30:44 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:02:45.239 12:30:44 -- setup/hugepages.sh@78 -- # return 0
00:02:45.239 12:30:44 -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024
00:02:45.239 12:30:44 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:02:45.239 12:30:44 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:02:45.239 12:30:44 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:02:45.239 12:30:44 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:02:45.239 12:30:44 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:02:45.239 12:30:44 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:02:45.239 12:30:44 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node
00:02:45.239 12:30:44 -- setup/hugepages.sh@62 -- # user_nodes=()
00:02:45.239 12:30:44 -- setup/hugepages.sh@62 -- # local user_nodes
00:02:45.239 12:30:44 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:02:45.239 12:30:44 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:02:45.239 12:30:44 -- setup/hugepages.sh@67 -- # nodes_test=()
00:02:45.239 12:30:44 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:02:45.239 12:30:44 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:02:45.239 12:30:44 -- setup/hugepages.sh@74 -- # (( 2 > 0 ))
00:02:45.239 12:30:44 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:02:45.239 12:30:44 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:02:45.239 12:30:44 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:02:45.239 12:30:44 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024
00:02:45.239 12:30:44 -- setup/hugepages.sh@78 -- # return 0
00:02:45.239 12:30:44 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'
00:02:45.239 12:30:44 -- setup/hugepages.sh@187 -- # setup output
00:02:45.239 12:30:44 -- setup/common.sh@9 -- # [[ output == output ]]
00:02:45.239 12:30:44 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:02:46.616 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:02:46.616 0000:81:00.0 (8086 0a54): Already using the vfio-pci driver
00:02:46.616 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:02:46.616 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:02:46.616 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:02:46.616 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:02:46.616 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:02:46.616 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:02:46.616 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:02:46.616 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:02:46.616 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:02:46.616 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:02:46.616 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:02:46.616 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:02:46.616 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:02:46.616 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:02:46.616 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:02:46.616 12:30:45 -- setup/hugepages.sh@188 -- # nr_hugepages=1536
12:30:45 -- setup/hugepages.sh@188 -- # verify_nr_hugepages
00:02:46.616 12:30:45 -- setup/hugepages.sh@89 -- # local node
00:02:46.616 12:30:45 -- setup/hugepages.sh@90 -- # local sorted_t
00:02:46.616 12:30:45 -- setup/hugepages.sh@91 -- # local sorted_s
00:02:46.616 12:30:45 -- setup/hugepages.sh@92 -- # local surp
00:02:46.616 12:30:45 -- setup/hugepages.sh@93 -- # local resv
00:02:46.616 12:30:45 -- setup/hugepages.sh@94 -- # local anon
00:02:46.616 12:30:45 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:02:46.616 12:30:45 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:02:46.616 12:30:45 -- setup/common.sh@17 -- # local get=AnonHugePages
00:02:46.616 12:30:45 -- setup/common.sh@18 -- # local node=
00:02:46.616 12:30:45 -- setup/common.sh@19 -- # local var val
00:02:46.616 12:30:45 -- setup/common.sh@20 -- # local mem_f mem
00:02:46.616 12:30:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:46.616 12:30:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:46.617 12:30:45 -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:46.617 12:30:45 -- setup/common.sh@28 -- # mapfile -t mem
00:02:46.617 12:30:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:46.617 12:30:45 -- setup/common.sh@31 -- # IFS=': '
00:02:46.617 12:30:45 -- setup/common.sh@31 -- # read -r var val _
00:02:46.617 12:30:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 36116404 kB' 'MemAvailable: 41249956 kB' 'Buffers: 3748 kB' 'Cached: 18539220 kB' 'SwapCached: 0 kB' 'Active: 14425908 kB' 'Inactive: 4650840 kB' 'Active(anon): 13777360 kB' 'Inactive(anon): 0 kB' 'Active(file): 648548 kB' 'Inactive(file): 4650840 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 536472 kB' 'Mapped: 206648 kB' 'Shmem: 13243580 kB' 'KReclaimable: 478816 kB' 'Slab: 888860 kB' 'SReclaimable: 478816 kB' 'SUnreclaim: 410044 kB' 'KernelStack: 12928 kB' 'PageTables: 8564 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086604 kB' 'Committed_AS: 14935808 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 200040 kB' 'VmallocChunk: 0 kB' 'Percpu: 55488 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2883164 kB' 'DirectMap2M: 40028160 kB' 'DirectMap1G: 26214400 kB'
00:02:46.617 12:30:45 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:02:46.617 12:30:45 -- setup/common.sh@32 -- # continue
00:02:46.617 12:30:45 -- setup/common.sh@31 -- # IFS=': '
00:02:46.617 12:30:45
-- setup/common.sh@31 -- # read -r var val _ 00:02:46.617 12:30:45 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.617 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.617 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.617 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.617 12:30:45 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.617 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.617 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.617 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.617 12:30:45 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.617 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.617 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.617 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.617 12:30:45 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.617 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.617 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.617 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.617 12:30:45 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.617 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.617 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.617 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.617 12:30:45 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.617 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.617 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.617 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.617 12:30:45 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.617 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.617 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.617 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.617 12:30:45 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.617 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.617 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.617 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.617 12:30:45 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.617 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.617 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.617 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.617 12:30:45 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.617 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.617 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.617 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.617 12:30:45 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.617 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.617 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.617 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.617 12:30:45 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.617 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.617 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.617 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.617 12:30:45 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.617 12:30:45 -- setup/common.sh@32 -- # continue 
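For reference while this scan runs: the pool it is verifying was sized by custom_alloc a few entries back. get_test_nr_hugepages converts a kB budget into a page count using the 2048 kB Hugepagesize shown in the meminfo dump above, so the two HUGENODE entries account for the full total. A worked check of that arithmetic (a sketch, not the script's own code):

  hugepagesize_kb=2048                            # Hugepagesize: 2048 kB
  node0=$(( 1048576 / hugepagesize_kb ))          # get_test_nr_hugepages 1048576 -> 512
  node1=$(( 2097152 / hugepagesize_kb ))          # get_test_nr_hugepages 2097152 -> 1024
  echo $(( node0 + node1 ))                       # 1536 == HugePages_Total above
  echo $(( (node0 + node1) * hugepagesize_kb ))   # 3145728 == Hugetlb kB above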
00:02:46.617 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.617 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.617 12:30:45 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.617 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.617 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.617 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.617 12:30:45 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.617 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.617 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.617 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.617 12:30:45 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.617 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.617 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.617 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.617 12:30:45 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.617 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.617 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.617 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.617 12:30:45 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.617 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.617 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.617 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.617 12:30:45 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.617 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.617 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.617 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.617 12:30:45 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.617 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.617 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.617 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.617 12:30:45 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.617 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.617 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.617 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.617 12:30:45 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.617 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.617 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.617 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.617 12:30:45 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.617 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.617 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.617 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.617 12:30:45 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.617 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.617 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.617 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.617 12:30:45 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.617 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.617 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.617 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.617 12:30:45 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.617 
12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.617 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.617 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.617 12:30:45 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.617 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.617 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.617 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.617 12:30:45 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.617 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.617 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.617 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.617 12:30:45 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.617 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.617 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.617 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.617 12:30:45 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.617 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.617 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.617 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.617 12:30:45 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.617 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.617 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.617 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.617 12:30:45 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.617 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.617 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.617 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.617 12:30:45 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.617 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.617 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.617 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.617 12:30:45 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.617 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.617 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.617 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.617 12:30:45 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.617 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.617 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.617 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.618 12:30:45 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.618 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.618 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.618 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.618 12:30:45 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.618 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.618 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.618 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.618 12:30:45 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.618 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.618 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.618 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.618 12:30:45 -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.618 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.618 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.618 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.618 12:30:45 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.618 12:30:45 -- setup/common.sh@33 -- # echo 0 00:02:46.618 12:30:45 -- setup/common.sh@33 -- # return 0 00:02:46.618 12:30:45 -- setup/hugepages.sh@97 -- # anon=0 00:02:46.618 12:30:45 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:46.618 12:30:45 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:46.618 12:30:45 -- setup/common.sh@18 -- # local node= 00:02:46.618 12:30:45 -- setup/common.sh@19 -- # local var val 00:02:46.618 12:30:45 -- setup/common.sh@20 -- # local mem_f mem 00:02:46.618 12:30:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:46.618 12:30:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:46.618 12:30:45 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:46.618 12:30:45 -- setup/common.sh@28 -- # mapfile -t mem 00:02:46.618 12:30:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:46.618 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.618 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.618 12:30:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 36116152 kB' 'MemAvailable: 41249704 kB' 'Buffers: 3748 kB' 'Cached: 18539224 kB' 'SwapCached: 0 kB' 'Active: 14426364 kB' 'Inactive: 4650840 kB' 'Active(anon): 13777816 kB' 'Inactive(anon): 0 kB' 'Active(file): 648548 kB' 'Inactive(file): 4650840 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 537004 kB' 'Mapped: 206724 kB' 'Shmem: 13243584 kB' 'KReclaimable: 478816 kB' 'Slab: 888860 kB' 'SReclaimable: 478816 kB' 'SUnreclaim: 410044 kB' 'KernelStack: 12928 kB' 'PageTables: 8568 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086604 kB' 'Committed_AS: 14935820 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 200024 kB' 'VmallocChunk: 0 kB' 'Percpu: 55488 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2883164 kB' 'DirectMap2M: 40028160 kB' 'DirectMap1G: 26214400 kB' 00:02:46.618 12:30:45 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.618 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.618 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.618 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.618 12:30:45 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.618 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.618 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.618 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.618 12:30:45 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.618 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.618 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.618 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 
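The AnonHugePages pass just concluded with anon=0, and this HugePages_Surp pass plus the HugePages_Rsvd pass that follows feed a single check. Condensed, verify_nr_hugepages is doing roughly this (variable names as in the hugepages.sh trace; the surrounding body is reconstructed, assuming the get_meminfo helper sketched earlier):

  nr_hugepages=1536                    # set at hugepages.sh@188 above
  anon=$(get_meminfo AnonHugePages)    # 0 in the pass that just finished
  surp=$(get_meminfo HugePages_Surp)   # 0, per the scan in progress here
  resv=$(get_meminfo HugePages_Rsvd)   # 0, per the pass that follows
  # hugepages.sh@107: every requested page must be a real, non-surplus,
  # non-reserved page -- 1536 == 1536 + 0 + 0 holds, so verification proceeds.
  (( 1536 == nr_hugepages + surp + resv ))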
00:02:46.618 12:30:45 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.618 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.618 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.618 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.618 12:30:45 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.618 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.618 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.618 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.618 12:30:45 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.618 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.618 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.618 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.618 12:30:45 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.618 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.618 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.618 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.618 12:30:45 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.618 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.618 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.618 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.618 12:30:45 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.618 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.618 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.618 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.618 12:30:45 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.618 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.618 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.618 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.618 12:30:45 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.618 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.618 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.618 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.618 12:30:45 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.618 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.618 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.618 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.618 12:30:45 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.618 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.618 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.618 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.618 12:30:45 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.618 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.618 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.618 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.618 12:30:45 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.618 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.618 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.618 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.618 12:30:45 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.618 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.618 12:30:45 -- 
setup/common.sh@31 -- # IFS=': ' 00:02:46.618 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.618 12:30:45 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.618 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.618 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.618 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.618 12:30:45 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.618 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.618 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.618 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.618 12:30:45 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.618 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.618 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.618 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.618 12:30:45 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.618 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.618 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.618 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.618 12:30:45 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.618 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.618 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.618 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.618 12:30:45 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.618 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.618 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.618 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.618 12:30:45 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.618 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.618 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.618 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.618 12:30:45 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.618 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.618 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.618 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.618 12:30:45 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.618 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.618 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.618 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.618 12:30:45 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.618 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.618 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.618 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.618 12:30:45 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.618 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.618 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.618 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.618 12:30:45 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.618 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.618 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.618 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.618 12:30:45 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
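The backslash-riddled right-hand sides throughout these comparisons (\H\u\g\e\P\a\g\e\s\_\S\u\r\p and the like) are an xtrace artifact, not corruption: when the pattern in [[ $var == "$get" ]] comes from a quoted expansion, bash -x re-prints it with every character escaped to show that it matched literally rather than as a glob. A quick standalone demo with the default PS4 (illustrative only, not from the script):

  $ bash -xc 'get=HugePages_Surp; [[ MemTotal == "$get" ]] || echo no-match'
  + get=HugePages_Surp
  + [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
  + echo no-match
  no-match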
00:02:46.619 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.619 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.619 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.619 12:30:45 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.619 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.619 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.619 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.619 12:30:45 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.619 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.619 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.619 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.619 12:30:45 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.619 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.619 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.619 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.619 12:30:45 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.619 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.619 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.619 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.619 12:30:45 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.619 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.619 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.619 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.619 12:30:45 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.619 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.619 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.619 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.619 12:30:45 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.619 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.619 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.619 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.619 12:30:45 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.619 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.619 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.619 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.619 12:30:45 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.619 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.619 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.619 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.619 12:30:45 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.619 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.619 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.619 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.619 12:30:45 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.619 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.619 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.619 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.619 12:30:45 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.619 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.619 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.619 12:30:45 -- setup/common.sh@31 -- # 
read -r var val _ 00:02:46.619 12:30:45 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.619 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.619 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.619 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.619 12:30:45 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.619 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.619 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.619 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.619 12:30:45 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.619 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.619 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.619 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.619 12:30:45 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.619 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.619 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.619 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.619 12:30:45 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.619 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.619 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.619 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.619 12:30:45 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.619 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.619 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.619 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.619 12:30:45 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.619 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.619 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.619 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.619 12:30:45 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.619 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.619 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.619 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.619 12:30:45 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.619 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.619 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.619 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.619 12:30:45 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.619 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.619 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.619 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.619 12:30:45 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.619 12:30:45 -- setup/common.sh@33 -- # echo 0 00:02:46.619 12:30:45 -- setup/common.sh@33 -- # return 0 00:02:46.880 12:30:45 -- setup/hugepages.sh@99 -- # surp=0 00:02:46.880 12:30:45 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:46.880 12:30:45 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:46.880 12:30:45 -- setup/common.sh@18 -- # local node= 00:02:46.880 12:30:45 -- setup/common.sh@19 -- # local var val 00:02:46.880 12:30:45 -- setup/common.sh@20 -- # local mem_f mem 00:02:46.880 12:30:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:46.881 
12:30:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:46.881 12:30:45 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:46.881 12:30:45 -- setup/common.sh@28 -- # mapfile -t mem 00:02:46.881 12:30:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:46.881 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.881 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.881 12:30:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 36116264 kB' 'MemAvailable: 41249816 kB' 'Buffers: 3748 kB' 'Cached: 18539236 kB' 'SwapCached: 0 kB' 'Active: 14425748 kB' 'Inactive: 4650840 kB' 'Active(anon): 13777200 kB' 'Inactive(anon): 0 kB' 'Active(file): 648548 kB' 'Inactive(file): 4650840 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 536768 kB' 'Mapped: 206632 kB' 'Shmem: 13243596 kB' 'KReclaimable: 478816 kB' 'Slab: 888860 kB' 'SReclaimable: 478816 kB' 'SUnreclaim: 410044 kB' 'KernelStack: 12944 kB' 'PageTables: 8552 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086604 kB' 'Committed_AS: 14935836 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 200040 kB' 'VmallocChunk: 0 kB' 'Percpu: 55488 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2883164 kB' 'DirectMap2M: 40028160 kB' 'DirectMap1G: 26214400 kB' 00:02:46.881 12:30:45 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.881 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.881 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.881 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.881 12:30:45 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.881 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.881 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.881 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.881 12:30:45 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.881 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.881 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.881 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.881 12:30:45 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.881 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.881 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.881 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.881 12:30:45 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.881 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.881 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.881 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.881 12:30:45 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.881 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.881 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.881 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.881 12:30:45 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.881 12:30:45 -- 
setup/common.sh@32 -- # continue 00:02:46.881 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.881 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.881 12:30:45 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.881 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.881 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.881 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.881 12:30:45 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.881 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.881 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.881 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.881 12:30:45 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.881 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.881 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.881 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.881 12:30:45 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.881 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.881 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.881 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.881 12:30:45 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.881 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.881 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.881 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.881 12:30:45 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.881 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.881 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.881 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.881 12:30:45 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.881 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.881 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.881 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.881 12:30:45 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.881 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.881 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.881 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.881 12:30:45 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.881 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.881 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.881 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.881 12:30:45 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.881 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.881 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.881 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.881 12:30:45 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.881 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.881 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.881 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.881 12:30:45 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.881 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.881 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.881 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.881 12:30:45 -- 
setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.881 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.881 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.881 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.881 12:30:45 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.881 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.881 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.881 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.881 12:30:45 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.881 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.881 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.881 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.881 12:30:45 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.881 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.881 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.881 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.881 12:30:45 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.881 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.881 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.881 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.881 12:30:45 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.881 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.881 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.881 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.881 12:30:45 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.881 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.881 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.881 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.881 12:30:45 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.881 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.881 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.881 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.881 12:30:45 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.881 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.881 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.881 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.881 12:30:45 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.881 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.881 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.881 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.881 12:30:45 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.881 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.881 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.881 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.881 12:30:45 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.881 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.881 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.881 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.881 12:30:45 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.881 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.881 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 
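For reading these last two passes: HugePages_Surp counts pages temporarily allocated above nr_hugepages via the kernel's overcommit pool, and HugePages_Rsvd counts pages a mapping has reserved but not yet faulted in; both must be 0 for the pool to look exactly as requested. The same counters can be spot-checked outside the harness (illustrative; the values are the ones the printf dumps above report):

  $ grep -E '^HugePages_(Total|Free|Rsvd|Surp)' /proc/meminfo
  HugePages_Total:    1536
  HugePages_Free:     1536
  HugePages_Rsvd:        0
  HugePages_Surp:        0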
00:02:46.881 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.881 12:30:45 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.881 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.881 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.881 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.881 12:30:45 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.881 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.881 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.881 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.881 12:30:45 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.881 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.882 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.882 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.882 12:30:45 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.882 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.882 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.882 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.882 12:30:45 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.882 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.882 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.882 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.882 12:30:45 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.882 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.882 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.882 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.882 12:30:45 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.882 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.882 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.882 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.882 12:30:45 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.882 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.882 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.882 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.882 12:30:45 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.882 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.882 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.882 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.882 12:30:45 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.882 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.882 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.882 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.882 12:30:45 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.882 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.882 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.882 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.882 12:30:45 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.882 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.882 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.882 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.882 12:30:45 -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.882 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.882 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.882 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.882 12:30:45 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.882 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.882 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.882 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.882 12:30:45 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.882 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.882 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.882 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.882 12:30:45 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.882 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.882 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.882 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.882 12:30:45 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.882 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.882 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.882 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.882 12:30:45 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.882 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.882 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.882 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.882 12:30:45 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.882 12:30:45 -- setup/common.sh@33 -- # echo 0 00:02:46.882 12:30:45 -- setup/common.sh@33 -- # return 0 00:02:46.882 12:30:45 -- setup/hugepages.sh@100 -- # resv=0 00:02:46.882 12:30:45 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:02:46.882 nr_hugepages=1536 00:02:46.882 12:30:45 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:46.882 resv_hugepages=0 00:02:46.882 12:30:45 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:46.882 surplus_hugepages=0 00:02:46.882 12:30:45 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:46.882 anon_hugepages=0 00:02:46.882 12:30:45 -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:02:46.882 12:30:45 -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:02:46.882 12:30:45 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:46.882 12:30:45 -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:46.882 12:30:45 -- setup/common.sh@18 -- # local node= 00:02:46.882 12:30:45 -- setup/common.sh@19 -- # local var val 00:02:46.882 12:30:45 -- setup/common.sh@20 -- # local mem_f mem 00:02:46.882 12:30:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:46.882 12:30:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:46.882 12:30:45 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:46.882 12:30:45 -- setup/common.sh@28 -- # mapfile -t mem 00:02:46.882 12:30:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:46.882 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.882 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.882 12:30:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 36115508 kB' 'MemAvailable: 41249060 kB' 'Buffers: 3748 kB' 'Cached: 18539248 kB' 'SwapCached: 0 kB' 'Active: 
14425576 kB' 'Inactive: 4650840 kB' 'Active(anon): 13777028 kB' 'Inactive(anon): 0 kB' 'Active(file): 648548 kB' 'Inactive(file): 4650840 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 536588 kB' 'Mapped: 206632 kB' 'Shmem: 13243608 kB' 'KReclaimable: 478816 kB' 'Slab: 888860 kB' 'SReclaimable: 478816 kB' 'SUnreclaim: 410044 kB' 'KernelStack: 12960 kB' 'PageTables: 8600 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086604 kB' 'Committed_AS: 14935848 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 200040 kB' 'VmallocChunk: 0 kB' 'Percpu: 55488 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2883164 kB' 'DirectMap2M: 40028160 kB' 'DirectMap1G: 26214400 kB' 00:02:46.882 12:30:45 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.882 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.882 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.882 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.882 12:30:45 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.882 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.882 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.882 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.882 12:30:45 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.882 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.882 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.882 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.882 12:30:45 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.882 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.882 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.882 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.882 12:30:45 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.882 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.882 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.882 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.882 12:30:45 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.882 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.882 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.882 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.882 12:30:45 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.882 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.882 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.882 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.882 12:30:45 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.882 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.882 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.882 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.882 12:30:45 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.882 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.882 
12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.882 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.882 12:30:45 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.882 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.882 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.882 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.882 12:30:45 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.882 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.882 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.882 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.882 12:30:45 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.882 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.882 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.882 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.882 12:30:45 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.882 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.882 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.882 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.882 12:30:45 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.882 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.882 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.882 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.882 12:30:45 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.882 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.882 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.882 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.882 12:30:45 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.883 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.883 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.883 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.883 12:30:45 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.883 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.883 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.883 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.883 12:30:45 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.883 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.883 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.883 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.883 12:30:45 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.883 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.883 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.883 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.883 12:30:45 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.883 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.883 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.883 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.883 12:30:45 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.883 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.883 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.883 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.883 12:30:45 -- setup/common.sh@32 -- # [[ 
Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.883 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.883 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.883 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.883 12:30:45 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.883 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.883 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.883 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.883 12:30:45 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.883 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.883 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.883 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.883 12:30:45 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.883 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.883 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.883 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.883 12:30:45 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.883 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.883 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.883 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.883 12:30:45 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.883 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.883 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.883 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.883 12:30:45 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.883 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.883 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.883 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.883 12:30:45 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.883 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.883 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.883 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.883 12:30:45 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.883 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.883 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.883 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.883 12:30:45 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.883 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.883 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.883 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.883 12:30:45 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.883 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.883 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.883 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.883 12:30:45 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.883 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.883 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.883 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.883 12:30:45 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.883 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.883 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 
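
This second scan is the same helper fetching HugePages_Total from the full /proc/meminfo snapshot printed above, so that setup/hugepages.sh can check the hugepage accounting for the custom_alloc case: the kernel-reported total must equal the requested count plus surplus plus reserved pages. With the values echoed in this run (nr_hugepages=1536, resv=0, surp=0), the check reduces to the following, shown out of context purely to illustrate the arithmetic (it reuses the get_meminfo sketch above):

    # Values reported in this run's trace:
    nr_hugepages=1536
    resv=$(get_meminfo HugePages_Rsvd)     # 0 here
    surp=$(get_meminfo HugePages_Surp)     # 0 here
    total=$(get_meminfo HugePages_Total)   # 1536 here
    (( total == nr_hugepages + surp + resv )) ||
        echo "hugepage accounting mismatch: $total != $((nr_hugepages + surp + resv))"
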
00:02:46.883 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.883 12:30:45 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.883 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.883 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.883 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.883 12:30:45 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.883 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.883 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.883 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.883 12:30:45 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.883 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.883 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.883 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.883 12:30:45 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.883 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.883 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.883 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.883 12:30:45 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.883 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.883 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.883 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.883 12:30:45 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.883 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.883 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.883 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.883 12:30:45 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.883 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.883 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.883 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.883 12:30:45 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.883 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.883 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.883 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.883 12:30:45 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.883 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.883 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.883 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.883 12:30:45 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.883 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.883 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.883 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.883 12:30:45 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.883 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.883 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.883 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.883 12:30:45 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.883 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.883 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.883 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.883 12:30:45 -- setup/common.sh@32 -- # [[ CmaFree == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.883 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.883 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.883 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.883 12:30:45 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.883 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.883 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.883 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.883 12:30:45 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.883 12:30:45 -- setup/common.sh@33 -- # echo 1536 00:02:46.883 12:30:45 -- setup/common.sh@33 -- # return 0 00:02:46.883 12:30:45 -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:02:46.883 12:30:45 -- setup/hugepages.sh@112 -- # get_nodes 00:02:46.883 12:30:45 -- setup/hugepages.sh@27 -- # local node 00:02:46.883 12:30:45 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:46.883 12:30:45 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:46.883 12:30:45 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:46.883 12:30:45 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:02:46.883 12:30:45 -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:46.883 12:30:45 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:46.883 12:30:45 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:46.883 12:30:45 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:46.883 12:30:45 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:46.883 12:30:45 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:46.883 12:30:45 -- setup/common.sh@18 -- # local node=0 00:02:46.883 12:30:45 -- setup/common.sh@19 -- # local var val 00:02:46.883 12:30:45 -- setup/common.sh@20 -- # local mem_f mem 00:02:46.883 12:30:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:46.883 12:30:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:46.883 12:30:45 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:46.883 12:30:45 -- setup/common.sh@28 -- # mapfile -t mem 00:02:46.883 12:30:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:46.883 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.883 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.884 12:30:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 17792660 kB' 'MemUsed: 15084280 kB' 'SwapCached: 0 kB' 'Active: 8678056 kB' 'Inactive: 4296372 kB' 'Active(anon): 8309408 kB' 'Inactive(anon): 0 kB' 'Active(file): 368648 kB' 'Inactive(file): 4296372 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 12688768 kB' 'Mapped: 68540 kB' 'AnonPages: 288796 kB' 'Shmem: 8023748 kB' 'KernelStack: 6872 kB' 'PageTables: 4956 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 340004 kB' 'Slab: 546756 kB' 'SReclaimable: 340004 kB' 'SUnreclaim: 206752 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:46.884 12:30:45 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.884 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.884 12:30:45 -- 
setup/common.sh@31 -- # IFS=': ' 00:02:46.884 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.884 12:30:45 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.884 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.884 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.884 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.884 12:30:45 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.884 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.884 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.884 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.884 12:30:45 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.884 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.884 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.884 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.884 12:30:45 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.884 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.884 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.884 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.884 12:30:45 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.884 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.884 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.884 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.884 12:30:45 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.884 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.884 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.884 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.884 12:30:45 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.884 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.884 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.884 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.884 12:30:45 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.884 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.884 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.884 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.884 12:30:45 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.884 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.884 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.884 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.884 12:30:45 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.884 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.884 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.884 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.884 12:30:45 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.884 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.884 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.884 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.884 12:30:45 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.884 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.884 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.884 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.884 12:30:45 -- setup/common.sh@32 -- # [[ Writeback == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.884 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.884 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.884 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.884 12:30:45 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.884 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.884 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.884 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.884 12:30:45 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.884 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.884 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.884 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.884 12:30:45 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.884 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.884 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.884 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.884 12:30:45 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.884 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.884 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.884 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.884 12:30:45 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.884 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.884 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.884 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.884 12:30:45 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.884 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.884 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.884 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.884 12:30:45 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.884 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.884 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.884 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.884 12:30:45 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.884 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.884 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.884 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.884 12:30:45 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.884 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.884 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.884 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.884 12:30:45 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.884 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.884 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.884 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.884 12:30:45 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.884 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.884 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.884 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.884 12:30:45 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.884 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.884 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.884 12:30:45 -- setup/common.sh@31 
-- # read -r var val _ 00:02:46.884 12:30:45 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.884 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.884 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.884 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.884 12:30:45 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.884 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.884 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.884 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.884 12:30:45 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.884 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.884 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.884 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.884 12:30:45 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.884 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.884 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.884 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.884 12:30:45 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.884 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.884 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.884 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.884 12:30:45 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.884 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.884 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.884 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.884 12:30:45 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.884 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.884 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.884 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.884 12:30:45 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.884 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.884 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.884 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.884 12:30:45 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.884 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.884 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.884 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.884 12:30:45 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.884 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.884 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.884 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.884 12:30:45 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.884 12:30:45 -- setup/common.sh@33 -- # echo 0 00:02:46.884 12:30:45 -- setup/common.sh@33 -- # return 0 00:02:46.884 12:30:45 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:46.884 12:30:45 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:46.884 12:30:45 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:46.884 12:30:45 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:46.884 12:30:45 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:46.884 12:30:45 -- setup/common.sh@18 -- # local node=1 
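
After the global total checks out, the test repeats the accounting per NUMA node: get_nodes records how many pages landed on each node (512 on node0, 1024 on node1 in this run, matching the 1536 total), and the helper is invoked once more per node with a node argument, which is why the trace switches to /sys/devices/system/node/node0/meminfo above and to node1 from here on. A sketch of that per-node walk under the same assumptions as the earlier get_meminfo sketch (the plain [0-9]* glob stands in for the script's extglob +([0-9]) pattern; variable names are illustrative):

    # Gather per-node hugepage placement, then report each node's share.
    declare -A nodes_sys
    for node in /sys/devices/system/node/node[0-9]*; do
        n=${node##*node}                                   # "node0" -> "0"
        nodes_sys[$n]=$(get_meminfo HugePages_Total "$n")
    done
    for n in "${!nodes_sys[@]}"; do
        surp=$(get_meminfo HugePages_Surp "$n")            # 0 on both nodes here
        echo "node$n=${nodes_sys[$n]} (surplus $surp)"
    done

The "node0=512 expecting 512" / "node1=1024 expecting 1024" lines further down are the output of exactly this comparison.
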
00:02:46.884 12:30:45 -- setup/common.sh@19 -- # local var val 00:02:46.884 12:30:45 -- setup/common.sh@20 -- # local mem_f mem 00:02:46.884 12:30:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:46.884 12:30:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:46.884 12:30:45 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:46.884 12:30:45 -- setup/common.sh@28 -- # mapfile -t mem 00:02:46.884 12:30:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:46.885 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.885 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.885 12:30:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664788 kB' 'MemFree: 18322848 kB' 'MemUsed: 9341940 kB' 'SwapCached: 0 kB' 'Active: 5747796 kB' 'Inactive: 354468 kB' 'Active(anon): 5467896 kB' 'Inactive(anon): 0 kB' 'Active(file): 279900 kB' 'Inactive(file): 354468 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 5854256 kB' 'Mapped: 138092 kB' 'AnonPages: 248056 kB' 'Shmem: 5219888 kB' 'KernelStack: 6104 kB' 'PageTables: 3692 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 138812 kB' 'Slab: 342104 kB' 'SReclaimable: 138812 kB' 'SUnreclaim: 203292 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:02:46.885 12:30:45 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.885 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.885 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.885 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.885 12:30:45 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.885 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.885 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.885 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.885 12:30:45 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.885 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.885 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.885 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.885 12:30:45 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.885 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.885 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.885 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.885 12:30:45 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.885 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.885 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.885 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.885 12:30:45 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.885 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.885 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.885 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.885 12:30:45 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.885 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.885 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.885 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.885 12:30:45 -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.885 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.885 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.885 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.885 12:30:45 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.885 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.885 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.885 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.885 12:30:45 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.885 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.885 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.885 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.885 12:30:45 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.885 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.885 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.885 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.885 12:30:45 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.885 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.885 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.885 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.885 12:30:45 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.885 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.885 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.885 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.885 12:30:45 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.885 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.885 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.885 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.885 12:30:45 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.885 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.885 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.885 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.885 12:30:45 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.885 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.885 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.885 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.885 12:30:45 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.885 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.885 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.885 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.885 12:30:45 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.885 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.885 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.885 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.885 12:30:45 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.885 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.885 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.885 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.885 12:30:45 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.885 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.885 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.885 12:30:45 -- setup/common.sh@31 -- 
# read -r var val _ 00:02:46.885 12:30:45 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.885 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.885 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.885 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.885 12:30:45 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.885 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.885 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.885 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.885 12:30:45 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.885 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.885 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.885 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.885 12:30:45 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.885 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.885 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.885 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.885 12:30:45 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.885 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.885 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.885 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.885 12:30:45 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.885 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.885 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.885 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.885 12:30:45 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.885 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.885 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.885 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.885 12:30:45 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.885 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.885 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.885 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.885 12:30:45 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.885 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.885 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.885 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.885 12:30:45 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.885 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.885 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.885 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.885 12:30:45 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.886 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.886 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.886 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.886 12:30:45 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.886 12:30:45 -- setup/common.sh@32 -- # continue 00:02:46.886 12:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.886 12:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.886 12:30:45 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.886 12:30:45 -- setup/common.sh@32 -- 
# continue
00:02:46.886 12:30:45 -- setup/common.sh@31 -- # IFS=': '
00:02:46.886 12:30:45 -- setup/common.sh@31 -- # read -r var val _
00:02:46.886 12:30:45 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:46.886 12:30:45 -- setup/common.sh@32 -- # continue
00:02:46.886 12:30:45 -- setup/common.sh@31 -- # IFS=': '
00:02:46.886 12:30:45 -- setup/common.sh@31 -- # read -r var val _
00:02:46.886 12:30:45 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:46.886 12:30:45 -- setup/common.sh@32 -- # continue
00:02:46.886 12:30:45 -- setup/common.sh@31 -- # IFS=': '
00:02:46.886 12:30:45 -- setup/common.sh@31 -- # read -r var val _
00:02:46.886 12:30:45 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:46.886 12:30:45 -- setup/common.sh@32 -- # continue
00:02:46.886 12:30:45 -- setup/common.sh@31 -- # IFS=': '
00:02:46.886 12:30:45 -- setup/common.sh@31 -- # read -r var val _
00:02:46.886 12:30:45 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:46.886 12:30:45 -- setup/common.sh@33 -- # echo 0
00:02:46.886 12:30:45 -- setup/common.sh@33 -- # return 0
00:02:46.886 12:30:45 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:02:46.886 12:30:45 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:02:46.886 12:30:45 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:02:46.886 12:30:45 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:02:46.886 12:30:45 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:02:46.886 node0=512 expecting 512
00:02:46.886 12:30:45 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:02:46.886 12:30:45 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:02:46.886 12:30:45 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:02:46.886 12:30:45 -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024'
00:02:46.886 node1=1024 expecting 1024
00:02:46.886 12:30:45 -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]]
00:02:46.886
00:02:46.886 real 0m1.555s
00:02:46.886 user 0m0.655s
00:02:46.886 sys 0m0.865s
00:02:46.886 12:30:45 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:02:46.886 12:30:45 -- common/autotest_common.sh@10 -- # set +x
00:02:46.886 ************************************
00:02:46.886 END TEST custom_alloc
00:02:46.886 ************************************
00:02:46.886 12:30:45 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:02:46.886 12:30:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:02:46.886 12:30:45 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:02:46.886 12:30:45 -- common/autotest_common.sh@10 -- # set +x
00:02:46.886 ************************************
00:02:46.886 START TEST no_shrink_alloc
00:02:46.886 ************************************
00:02:46.886 12:30:45 -- common/autotest_common.sh@1111 -- # no_shrink_alloc
00:02:46.886 12:30:45 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:02:46.886 12:30:45 -- setup/hugepages.sh@49 -- # local size=2097152
00:02:46.886 12:30:45 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:02:46.886 12:30:45 -- setup/hugepages.sh@51 -- # shift
00:02:46.886 12:30:45 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:02:46.886 12:30:45 -- setup/hugepages.sh@52 -- # local node_ids
00:02:46.886 12:30:45 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:02:46.886 12:30:45 --
setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:46.886 12:30:45 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:02:46.886 12:30:45 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:02:46.886 12:30:45 -- setup/hugepages.sh@62 -- # local user_nodes 00:02:46.886 12:30:45 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:46.886 12:30:45 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:46.886 12:30:45 -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:46.886 12:30:45 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:46.886 12:30:45 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:02:46.886 12:30:45 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:46.886 12:30:45 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:02:46.886 12:30:45 -- setup/hugepages.sh@73 -- # return 0 00:02:46.886 12:30:45 -- setup/hugepages.sh@198 -- # setup output 00:02:46.886 12:30:45 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:46.886 12:30:45 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:48.262 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:48.262 0000:81:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:48.262 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:48.262 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:48.262 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:48.262 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:48.262 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:48.262 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:48.262 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:48.262 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:48.262 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:48.262 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:48.262 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:48.262 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:48.262 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:48.262 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:48.262 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:48.524 12:30:47 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:02:48.524 12:30:47 -- setup/hugepages.sh@89 -- # local node 00:02:48.524 12:30:47 -- setup/hugepages.sh@90 -- # local sorted_t 00:02:48.524 12:30:47 -- setup/hugepages.sh@91 -- # local sorted_s 00:02:48.524 12:30:47 -- setup/hugepages.sh@92 -- # local surp 00:02:48.524 12:30:47 -- setup/hugepages.sh@93 -- # local resv 00:02:48.524 12:30:47 -- setup/hugepages.sh@94 -- # local anon 00:02:48.524 12:30:47 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:48.524 12:30:47 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:48.524 12:30:47 -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:48.524 12:30:47 -- setup/common.sh@18 -- # local node= 00:02:48.524 12:30:47 -- setup/common.sh@19 -- # local var val 00:02:48.524 12:30:47 -- setup/common.sh@20 -- # local mem_f mem 00:02:48.524 12:30:47 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:48.524 12:30:47 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:48.524 12:30:47 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:48.524 12:30:47 -- setup/common.sh@28 -- # mapfile -t mem 00:02:48.524 12:30:47 -- 
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:48.524 12:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.524 12:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.524 12:30:47 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37154536 kB' 'MemAvailable: 42288088 kB' 'Buffers: 3748 kB' 'Cached: 18539316 kB' 'SwapCached: 0 kB' 'Active: 14426940 kB' 'Inactive: 4650840 kB' 'Active(anon): 13778392 kB' 'Inactive(anon): 0 kB' 'Active(file): 648548 kB' 'Inactive(file): 4650840 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 537908 kB' 'Mapped: 206688 kB' 'Shmem: 13243676 kB' 'KReclaimable: 478816 kB' 'Slab: 888660 kB' 'SReclaimable: 478816 kB' 'SUnreclaim: 409844 kB' 'KernelStack: 12960 kB' 'PageTables: 8556 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14936156 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 200088 kB' 'VmallocChunk: 0 kB' 'Percpu: 55488 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2883164 kB' 'DirectMap2M: 40028160 kB' 'DirectMap1G: 26214400 kB' 00:02:48.524 12:30:47 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.524 12:30:47 -- setup/common.sh@32 -- # continue 00:02:48.524 12:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.524 12:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.524 12:30:47 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.524 12:30:47 -- setup/common.sh@32 -- # continue 00:02:48.524 12:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.524 12:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.524 12:30:47 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.524 12:30:47 -- setup/common.sh@32 -- # continue 00:02:48.524 12:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.524 12:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.524 12:30:47 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.524 12:30:47 -- setup/common.sh@32 -- # continue 00:02:48.524 12:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.524 12:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.524 12:30:47 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.524 12:30:47 -- setup/common.sh@32 -- # continue 00:02:48.524 12:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.524 12:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.524 12:30:47 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.524 12:30:47 -- setup/common.sh@32 -- # continue 00:02:48.524 12:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.524 12:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.524 12:30:47 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.524 12:30:47 -- setup/common.sh@32 -- # continue 00:02:48.524 12:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.524 12:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.524 12:30:47 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
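
This AnonHugePages scan already belongs to the next test, no_shrink_alloc: get_test_nr_hugepages converted the requested 2097152 kB into 1024 default-sized 2048 kB pages pinned to node 0 (node_ids=('0')), scripts/setup.sh re-applied the allocation (the vfio-pci rebind lines above), and verify_nr_hugepages begins by ruling out transparent-hugepage interference. Because the THP setting compared in the trace reads "always [madvise] never" rather than selecting "[never]", the test fetches AnonHugePages and requires it to be zero. A sketch of that guard, assuming the string is read from the usual /sys/kernel/mm/transparent_hugepage/enabled file (the trace only shows its contents):

    # Only when THP is globally disabled ("[never]") can the anon check be skipped.
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)
    anon=0
    if [[ $thp != *"[never]"* ]]; then
        anon=$(get_meminfo AnonHugePages)   # expected to be 0 kB in this run
    fi
    (( anon == 0 )) || echo "unexpected anonymous hugepages: $anon kB"
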
00:02:48.524 12:30:47 -- setup/common.sh@32 -- # continue 00:02:48.524 12:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.524 12:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.524 12:30:47 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.524 12:30:47 -- setup/common.sh@32 -- # continue 00:02:48.524 12:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.524 12:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.524 12:30:47 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.524 12:30:47 -- setup/common.sh@32 -- # continue 00:02:48.524 12:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.524 12:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.525 12:30:47 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.525 12:30:47 -- setup/common.sh@32 -- # continue 00:02:48.525 12:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.525 12:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.525 12:30:47 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.525 12:30:47 -- setup/common.sh@32 -- # continue 00:02:48.525 12:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.525 12:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.525 12:30:47 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.525 12:30:47 -- setup/common.sh@32 -- # continue 00:02:48.525 12:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.525 12:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.525 12:30:47 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.525 12:30:47 -- setup/common.sh@32 -- # continue 00:02:48.525 12:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.525 12:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.525 12:30:47 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.525 12:30:47 -- setup/common.sh@32 -- # continue 00:02:48.525 12:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.525 12:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.525 12:30:47 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.525 12:30:47 -- setup/common.sh@32 -- # continue 00:02:48.525 12:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.525 12:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.525 12:30:47 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.525 12:30:47 -- setup/common.sh@32 -- # continue 00:02:48.525 12:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.525 12:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.525 12:30:47 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.525 12:30:47 -- setup/common.sh@32 -- # continue 00:02:48.525 12:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.525 12:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.525 12:30:47 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.525 12:30:47 -- setup/common.sh@32 -- # continue 00:02:48.525 12:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.525 12:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.525 12:30:47 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.525 12:30:47 -- setup/common.sh@32 -- # continue 00:02:48.525 12:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.525 12:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.525 12:30:47 -- 
setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.525 12:30:47 -- setup/common.sh@32 -- # continue 00:02:48.525 12:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.525 12:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.525 12:30:47 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.525 12:30:47 -- setup/common.sh@32 -- # continue 00:02:48.525 12:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.525 12:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.525 12:30:47 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.525 12:30:47 -- setup/common.sh@32 -- # continue 00:02:48.525 12:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.525 12:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.525 12:30:47 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.525 12:30:47 -- setup/common.sh@32 -- # continue 00:02:48.525 12:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.525 12:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.525 12:30:47 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.525 12:30:47 -- setup/common.sh@32 -- # continue 00:02:48.525 12:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.525 12:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.525 12:30:47 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.525 12:30:47 -- setup/common.sh@32 -- # continue 00:02:48.525 12:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.525 12:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.525 12:30:47 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.525 12:30:47 -- setup/common.sh@32 -- # continue 00:02:48.525 12:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.525 12:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.525 12:30:47 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.525 12:30:47 -- setup/common.sh@32 -- # continue 00:02:48.525 12:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.525 12:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.525 12:30:47 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.525 12:30:47 -- setup/common.sh@32 -- # continue 00:02:48.525 12:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.525 12:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.525 12:30:47 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.525 12:30:47 -- setup/common.sh@32 -- # continue 00:02:48.525 12:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.525 12:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.525 12:30:47 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.525 12:30:47 -- setup/common.sh@32 -- # continue 00:02:48.525 12:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.525 12:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.525 12:30:47 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.525 12:30:47 -- setup/common.sh@32 -- # continue 00:02:48.525 12:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.525 12:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.525 12:30:47 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.525 12:30:47 -- setup/common.sh@32 -- # continue 00:02:48.525 12:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.525 12:30:47 -- 
setup/common.sh@31 -- # read -r var val _ 00:02:48.525 12:30:47 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.525 12:30:47 -- setup/common.sh@32 -- # continue 00:02:48.525 12:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.525 12:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.525 12:30:47 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.525 12:30:47 -- setup/common.sh@32 -- # continue 00:02:48.525 12:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.525 12:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.525 12:30:47 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.525 12:30:47 -- setup/common.sh@32 -- # continue 00:02:48.525 12:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.525 12:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.525 12:30:47 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.525 12:30:47 -- setup/common.sh@32 -- # continue 00:02:48.525 12:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.525 12:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.525 12:30:47 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.525 12:30:47 -- setup/common.sh@32 -- # continue 00:02:48.525 12:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.525 12:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.525 12:30:47 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.525 12:30:47 -- setup/common.sh@32 -- # continue 00:02:48.525 12:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.525 12:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.525 12:30:47 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.525 12:30:47 -- setup/common.sh@32 -- # continue 00:02:48.525 12:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.525 12:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.525 12:30:47 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.525 12:30:47 -- setup/common.sh@33 -- # echo 0 00:02:48.525 12:30:47 -- setup/common.sh@33 -- # return 0 00:02:48.525 12:30:47 -- setup/hugepages.sh@97 -- # anon=0 00:02:48.525 12:30:47 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:48.525 12:30:47 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:48.525 12:30:47 -- setup/common.sh@18 -- # local node= 00:02:48.525 12:30:47 -- setup/common.sh@19 -- # local var val 00:02:48.525 12:30:47 -- setup/common.sh@20 -- # local mem_f mem 00:02:48.525 12:30:47 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:48.525 12:30:47 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:48.525 12:30:47 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:48.525 12:30:47 -- setup/common.sh@28 -- # mapfile -t mem 00:02:48.525 12:30:47 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:48.525 12:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.525 12:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.525 12:30:47 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37154980 kB' 'MemAvailable: 42288532 kB' 'Buffers: 3748 kB' 'Cached: 18539320 kB' 'SwapCached: 0 kB' 'Active: 14427136 kB' 'Inactive: 4650840 kB' 'Active(anon): 13778588 kB' 'Inactive(anon): 0 kB' 'Active(file): 648548 kB' 'Inactive(file): 4650840 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 
'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 538152 kB' 'Mapped: 206764 kB' 'Shmem: 13243680 kB' 'KReclaimable: 478816 kB' 'Slab: 888652 kB' 'SReclaimable: 478816 kB' 'SUnreclaim: 409836 kB' 'KernelStack: 12928 kB' 'PageTables: 8464 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14936168 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 200056 kB' 'VmallocChunk: 0 kB' 'Percpu: 55488 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2883164 kB' 'DirectMap2M: 40028160 kB' 'DirectMap1G: 26214400 kB' 00:02:48.525 12:30:47 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.525 12:30:47 -- setup/common.sh@32 -- # continue 00:02:48.525 12:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.525 12:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.525 12:30:47 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.525 12:30:47 -- setup/common.sh@32 -- # continue 00:02:48.525 12:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.525 12:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.525 12:30:47 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.525 12:30:47 -- setup/common.sh@32 -- # continue 00:02:48.525 12:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.525 12:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.526 12:30:47 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.526 12:30:47 -- setup/common.sh@32 -- # continue 00:02:48.526 12:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.526 12:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.526 12:30:47 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.526 12:30:47 -- setup/common.sh@32 -- # continue 00:02:48.526 12:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.526 12:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.526 12:30:47 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.526 12:30:47 -- setup/common.sh@32 -- # continue 00:02:48.526 12:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.526 12:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.526 12:30:47 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.526 12:30:47 -- setup/common.sh@32 -- # continue 00:02:48.526 12:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.526 12:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.526 12:30:47 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.526 12:30:47 -- setup/common.sh@32 -- # continue 00:02:48.526 12:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.526 12:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.526 12:30:47 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.526 12:30:47 -- setup/common.sh@32 -- # continue 00:02:48.526 12:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.526 12:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.526 12:30:47 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.526 12:30:47 -- 
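The scans above are setup/common.sh's get_meminfo walking every /proc/meminfo key with IFS=': ' until the requested one matches, then echoing its value. A minimal standalone sketch of that lookup (get_meminfo_value is a hypothetical name; the trace shows the real helper only line by line, so this is a paraphrase, not the SPDK source):

#!/usr/bin/env bash
shopt -s extglob   # needed so +([0-9]) works as a pattern below

# Hypothetical helper: print the value of one meminfo key, reading the
# system-wide /proc/meminfo by default or a node-local file when a node
# number is passed as the second argument.
get_meminfo_value() {
    local key=$1 node=${2:-}
    local mem_f=/proc/meminfo line var val _
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    local -a mem
    mapfile -t mem < "$mem_f"
    # Node-local files prefix every line with "Node N "; strip that so the
    # keys match their /proc/meminfo names.
    mem=("${mem[@]#Node +([0-9]) }")
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$key" ]]; then
            echo "$val"   # the numeric field only; a trailing "kB" lands in $_
            return 0
        fi
    done
    return 1
}

get_meminfo_value AnonHugePages   # prints 0 on the box traced above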
12:30:47 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.527 12:30:47 -- setup/common.sh@33 -- # echo 0 00:02:48.527 12:30:47 -- setup/common.sh@33 -- # return 0 00:02:48.527 12:30:47 -- setup/hugepages.sh@99 -- # surp=0 00:02:48.527 12:30:47 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:48.527 12:30:47 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:48.527 12:30:47 -- setup/common.sh@18 -- # local node= 00:02:48.527 12:30:47 -- setup/common.sh@19 -- # local var val 00:02:48.527 12:30:47 -- setup/common.sh@20 -- # local mem_f mem 00:02:48.527 12:30:47 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:48.527 12:30:47 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:48.527 12:30:47 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:48.527 12:30:47 -- setup/common.sh@28 -- # mapfile -t mem 00:02:48.527 12:30:47 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:48.527 12:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.527 12:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.527 12:30:47 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37154888 kB' 'MemAvailable: 42288440 kB' 'Buffers: 3748 kB' 'Cached: 18539332 kB' 'SwapCached: 0 kB' 'Active: 14427120 kB' 'Inactive: 4650840 kB' 'Active(anon): 13778572 kB' 'Inactive(anon): 0 kB' 'Active(file): 648548 kB' 'Inactive(file): 4650840 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 538112 kB' 'Mapped: 206656 kB' 'Shmem: 13243692 kB' 'KReclaimable: 478816 kB' 'Slab: 888656 kB' 'SReclaimable: 478816 kB' 'SUnreclaim: 409840 kB' 'KernelStack: 12976 kB' 'PageTables: 8600 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14936184 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 200040 kB' 'VmallocChunk: 0 kB' 'Percpu: 55488 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2883164 kB' 'DirectMap2M: 40028160 kB' 'DirectMap1G: 26214400 kB' 00:02:48.527
12:30:47 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.528 12:30:47 -- setup/common.sh@33 -- # echo 0 00:02:48.528
12:30:47 -- setup/common.sh@33 -- # return 0 00:02:48.528 12:30:47 -- setup/hugepages.sh@100 -- # resv=0 00:02:48.528 12:30:47 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:48.528 nr_hugepages=1024 12:30:47 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:48.528 resv_hugepages=0 12:30:47 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:48.528 surplus_hugepages=0 12:30:47 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:48.528 anon_hugepages=0 12:30:47 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:48.528 12:30:47 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:48.528 12:30:47 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:48.528 12:30:47 -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:48.528 12:30:47 -- setup/common.sh@18 -- # local node= 00:02:48.528 12:30:47 -- setup/common.sh@19 -- # local var val 00:02:48.528 12:30:47 -- setup/common.sh@20 -- # local mem_f mem 00:02:48.528 12:30:47 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:48.528 12:30:47 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:48.528 12:30:47 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:48.528 12:30:47 -- setup/common.sh@28 -- # mapfile -t mem 00:02:48.528 12:30:47 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:48.528 12:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.528 12:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.528 12:30:47 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37154924 kB' 'MemAvailable: 42288476 kB' 'Buffers: 3748 kB' 'Cached: 18539344 kB' 'SwapCached: 0 kB' 'Active: 14426728 kB' 'Inactive: 4650840 kB' 'Active(anon): 13778180 kB' 'Inactive(anon): 0 kB' 'Active(file): 648548 kB' 'Inactive(file): 4650840 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 537688 kB' 'Mapped: 206656 kB' 'Shmem: 13243704 kB' 'KReclaimable: 478816 kB' 'Slab: 888656 kB' 'SReclaimable: 478816 kB' 'SUnreclaim: 409840 kB' 'KernelStack: 12944 kB' 'PageTables: 8504 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14936200 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 200040 kB' 'VmallocChunk: 0 kB' 'Percpu: 55488 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2883164 kB' 'DirectMap2M: 40028160 kB' 'DirectMap1G: 26214400 kB' 00:02:48.528
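The nr_hugepages/resv_hugepages/surplus_hugepages/anon_hugepages summary above feeds the consistency check traced at setup/hugepages.sh@107: the kernel's HugePages_Total should equal the requested pool size plus surplus and reserved pages. A sketch of that accounting with hypothetical variable names, reusing the helper sketched earlier:

requested=1024                                  # pool size this run configured
total=$(get_meminfo_value HugePages_Total)      # 1024 in the trace above
surp=$(get_meminfo_value HugePages_Surp)        # 0
resv=$(get_meminfo_value HugePages_Rsvd)        # 0
anon=$(get_meminfo_value AnonHugePages)         # 0; THP usage in kB, reported
                                                # but not part of the pool sum
if (( total == requested + surp + resv )); then
    echo "hugepage pool consistent: total=$total surplus=$surp reserved=$resv anon=${anon}kB"
else
    echo "hugepage pool mismatch: total=$total expected=$((requested + surp + resv))" >&2
fi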
12:30:47 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.530 12:30:47 -- setup/common.sh@33 -- # echo 1024 00:02:48.530 12:30:47 -- setup/common.sh@33 -- # return 0 00:02:48.530 12:30:47 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:48.530 12:30:47 -- setup/hugepages.sh@112 -- # get_nodes 00:02:48.530 12:30:47 -- setup/hugepages.sh@27 -- # local node 00:02:48.530 12:30:47 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:48.530 12:30:47 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:02:48.530 12:30:47 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:48.530 12:30:47 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:02:48.530 12:30:47 -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:48.530 12:30:47 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:48.530 12:30:47 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:48.530 12:30:47 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:48.530 12:30:47 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:48.530
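get_nodes above finds two NUMA nodes and records 1024 and 0 hugepages for them before the per-node surplus checks start. A sketch of that enumeration; the trace only shows the resulting counts, so reading them from the standard sysfs 2048kB hugepage counters is an assumption about where the values come from:

shopt -s extglob nullglob
declare -a nodes_sys
for node in /sys/devices/system/node/node+([0-9]); do
    # ${node##*node} reduces ".../node0" to the bare index 0
    nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
done
echo "no_nodes=${#nodes_sys[@]} per-node: ${nodes_sys[*]}"   # no_nodes=2 per-node: 1024 0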
12:30:47 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:48.530 12:30:47 -- setup/common.sh@18 -- # local node=0 00:02:48.530 12:30:47 -- setup/common.sh@19 -- # local var val 00:02:48.530 12:30:47 -- setup/common.sh@20 -- # local mem_f mem 00:02:48.530 12:30:47 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:48.530 12:30:47 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:48.530 12:30:47 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:48.530 12:30:47 -- setup/common.sh@28 -- # mapfile -t mem 00:02:48.530 12:30:47 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:48.530 12:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.530 12:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.530 12:30:47 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 16757636 kB' 'MemUsed: 16119304 kB' 'SwapCached: 0 kB' 'Active: 8678852 kB' 'Inactive: 4296372 kB' 'Active(anon): 8310204 kB' 'Inactive(anon): 0 kB' 'Active(file): 368648 kB' 'Inactive(file): 4296372 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 12688776 kB' 'Mapped: 68540 kB' 'AnonPages: 289620 kB' 'Shmem: 8023756 kB' 'KernelStack: 6936 kB' 'PageTables: 5048 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 340004 kB' 'Slab: 546700 kB' 'SReclaimable: 340004 kB' 'SUnreclaim: 206696 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:02:48.530
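With node=0 the same parser is simply pointed at /sys/devices/system/node/node0/meminfo instead of /proc/meminfo, and the "Node 0 " prefix is stripped before matching, which is why the node snapshot above carries bare key names. Usage sketch with the hypothetical helper from the earlier note:

node0_total=$(get_meminfo_value HugePages_Total 0)   # 1024 per the node0 snapshot
node0_surp=$(get_meminfo_value HugePages_Surp 0)     # 0, the value being verified here
echo "node0: HugePages_Total=$node0_total HugePages_Surp=$node0_surp"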
setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.530 12:30:47 -- setup/common.sh@32 -- # continue 00:02:48.530 12:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.530 12:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.530 12:30:47 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.530 12:30:47 -- setup/common.sh@32 -- # continue 00:02:48.530 12:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.530 12:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.530 12:30:47 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.530 12:30:47 -- setup/common.sh@32 -- # continue 00:02:48.530 12:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.530 12:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.530 12:30:47 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.530 12:30:47 -- setup/common.sh@32 -- # continue 00:02:48.530 12:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.530 12:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.530 12:30:47 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.530 12:30:47 -- setup/common.sh@32 -- # continue 00:02:48.530 12:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.530 12:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.530 12:30:47 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.530 12:30:47 -- setup/common.sh@32 -- # continue 00:02:48.530 12:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.530 12:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.530 12:30:47 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.530 12:30:47 -- setup/common.sh@32 -- # continue 00:02:48.530 12:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.530 12:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.530 12:30:47 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.530 12:30:47 -- setup/common.sh@32 -- # continue 00:02:48.530 12:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.530 12:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.530 12:30:47 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.530 12:30:47 -- setup/common.sh@32 -- # continue 00:02:48.530 12:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.530 12:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.530 12:30:47 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.530 12:30:47 -- setup/common.sh@32 -- # continue 00:02:48.530 12:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.530 12:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.530 12:30:47 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.530 12:30:47 -- setup/common.sh@32 -- # continue 00:02:48.530 12:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.530 12:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.530 12:30:47 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.530 12:30:47 -- setup/common.sh@32 -- # continue 00:02:48.530 12:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.530 12:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.530 12:30:47 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.530 12:30:47 -- setup/common.sh@32 -- # continue 00:02:48.530 12:30:47 -- setup/common.sh@31 -- # IFS=': ' 
00:02:48.530 12:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.530 12:30:47 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.530 12:30:47 -- setup/common.sh@32 -- # continue 00:02:48.530 12:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.530 12:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.530 12:30:47 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.530 12:30:47 -- setup/common.sh@32 -- # continue 00:02:48.530 12:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.530 12:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.530 12:30:47 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.530 12:30:47 -- setup/common.sh@32 -- # continue 00:02:48.530 12:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.530 12:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.530 12:30:47 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.530 12:30:47 -- setup/common.sh@32 -- # continue 00:02:48.530 12:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.530 12:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.530 12:30:47 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.530 12:30:47 -- setup/common.sh@32 -- # continue 00:02:48.530 12:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.530 12:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.530 12:30:47 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.531 12:30:47 -- setup/common.sh@32 -- # continue 00:02:48.531 12:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.531 12:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.531 12:30:47 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.531 12:30:47 -- setup/common.sh@32 -- # continue 00:02:48.531 12:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.531 12:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.531 12:30:47 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.531 12:30:47 -- setup/common.sh@32 -- # continue 00:02:48.531 12:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.531 12:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.531 12:30:47 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.531 12:30:47 -- setup/common.sh@32 -- # continue 00:02:48.531 12:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.531 12:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.531 12:30:47 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.531 12:30:47 -- setup/common.sh@32 -- # continue 00:02:48.531 12:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.531 12:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.531 12:30:47 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.531 12:30:47 -- setup/common.sh@32 -- # continue 00:02:48.531 12:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.531 12:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.531 12:30:47 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.531 12:30:47 -- setup/common.sh@32 -- # continue 00:02:48.531 12:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.531 12:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.531 12:30:47 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:02:48.531 12:30:47 -- setup/common.sh@32 -- # continue 00:02:48.531 12:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.531 12:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.531 12:30:47 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.531 12:30:47 -- setup/common.sh@32 -- # continue 00:02:48.531 12:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.531 12:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.531 12:30:47 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.531 12:30:47 -- setup/common.sh@32 -- # continue 00:02:48.531 12:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.531 12:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.531 12:30:47 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.531 12:30:47 -- setup/common.sh@32 -- # continue 00:02:48.531 12:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:48.531 12:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:48.531 12:30:47 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.531 12:30:47 -- setup/common.sh@33 -- # echo 0 00:02:48.531 12:30:47 -- setup/common.sh@33 -- # return 0 00:02:48.531 12:30:47 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:48.531 12:30:47 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:48.531 12:30:47 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:48.531 12:30:47 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:48.531 12:30:47 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:02:48.531 node0=1024 expecting 1024 00:02:48.531 12:30:47 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:02:48.531 12:30:47 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:02:48.531 12:30:47 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:02:48.531 12:30:47 -- setup/hugepages.sh@202 -- # setup output 00:02:48.531 12:30:47 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:48.531 12:30:47 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:49.907 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:49.907 0000:81:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:49.907 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:49.907 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:49.907 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:49.907 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:49.907 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:49.907 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:49.907 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:49.907 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:49.907 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:49.907 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:49.907 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:49.907 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:49.907 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:49.907 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:49.907 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:49.907 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:02:50.171 12:30:48 -- setup/hugepages.sh@204 -- # 
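The lookup traced above is setup/common.sh's get_meminfo: it reads /proc/meminfo, or the per-node /sys/devices/system/node/nodeN/meminfo when a node id is given, strips the "Node N " prefix so both file formats parse alike, then scans key by key until the requested field matches and echoes its value. A minimal sketch of that loop, assuming bash 4+ for mapfile; the standalone helper below is illustrative, not the verbatim SPDK source:

    #!/usr/bin/env bash
    # Sketch of the get_meminfo lookup exercised in the trace above.
    shopt -s extglob   # needed for the +([0-9]) prefix strip

    get_meminfo() {
        local get=$1 node=${2:-}
        local var val _
        local mem_f=/proc/meminfo mem

        # Prefer the per-node view when a node id is supplied and present.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem < "$mem_f"
        # Per-node files prefix each line with "Node N "; drop that prefix
        # so both formats parse the same way below.
        mem=("${mem[@]#Node +([0-9]) }")

        # Scan field by field; print the value of the requested key.
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    get_meminfo HugePages_Surp 0   # prints 0 for node0 in the run above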
00:02:50.171 12:30:48 -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:02:50.171 12:30:48 -- setup/hugepages.sh@89 -- # local node
00:02:50.171 12:30:48 -- setup/hugepages.sh@90 -- # local sorted_t
00:02:50.171 12:30:48 -- setup/hugepages.sh@91 -- # local sorted_s
00:02:50.171 12:30:48 -- setup/hugepages.sh@92 -- # local surp
00:02:50.171 12:30:48 -- setup/hugepages.sh@93 -- # local resv
00:02:50.171 12:30:48 -- setup/hugepages.sh@94 -- # local anon
00:02:50.171 12:30:48 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:02:50.171 12:30:48 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:02:50.171 12:30:48 -- setup/common.sh@17 -- # local get=AnonHugePages
00:02:50.171 12:30:48 -- setup/common.sh@18 -- # local node=
00:02:50.171 12:30:48 -- setup/common.sh@19 -- # local var val
00:02:50.171 12:30:48 -- setup/common.sh@20 -- # local mem_f mem
00:02:50.171 12:30:48 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:50.171 12:30:48 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:50.171 12:30:48 -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:50.171 12:30:48 -- setup/common.sh@28 -- # mapfile -t mem
00:02:50.171 12:30:48 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:50.171 12:30:48 -- setup/common.sh@31 -- # IFS=': '
00:02:50.171 12:30:48 -- setup/common.sh@31 -- # read -r var val _
00:02:50.171 12:30:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37146632 kB' 'MemAvailable: 42280184 kB' 'Buffers: 3748 kB' 'Cached: 18539396 kB' 'SwapCached: 0 kB' 'Active: 14427100 kB' 'Inactive: 4650840 kB' 'Active(anon): 13778552 kB' 'Inactive(anon): 0 kB' 'Active(file): 648548 kB' 'Inactive(file): 4650840 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 538412 kB' 'Mapped: 206668 kB' 'Shmem: 13243756 kB' 'KReclaimable: 478816 kB' 'Slab: 889080 kB' 'SReclaimable: 478816 kB' 'SUnreclaim: 410264 kB' 'KernelStack: 12976 kB' 'PageTables: 8564 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14936236 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 200152 kB' 'VmallocChunk: 0 kB' 'Percpu: 55488 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2883164 kB' 'DirectMap2M: 40028160 kB' 'DirectMap1G: 26214400 kB'
00:02:50.171 12:30:49 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:02:50.171 12:30:49 -- setup/common.sh@33 -- # echo 0
00:02:50.171 12:30:49 -- setup/common.sh@33 -- # return 0
00:02:50.171 12:30:49 -- setup/hugepages.sh@97 -- # anon=0
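The guard at hugepages.sh@96 above expands /sys/kernel/mm/transparent_hugepage/enabled (here "always [madvise] never", i.e. madvise mode) and only reads AnonHugePages when transparent hugepages are not forced off; the guard passes in this run, so anon is taken from the snapshot's 'AnonHugePages: 0 kB' and comes out 0. A sketch of that guard, reusing the get_meminfo helper sketched earlier (variable names illustrative):

    # Count THP-backed anonymous memory only when THP is not disabled;
    # mirrors the "!= *[never]*" test at hugepages.sh@96.
    anon=0
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)  # e.g. "always [madvise] never"
    if [[ $thp != *"[never]"* ]]; then
        anon=$(get_meminfo AnonHugePages)  # kB; 0 kB in this run
    fi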
00:02:50.172 12:30:49 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:02:50.172 12:30:49 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:50.172 12:30:49 -- setup/common.sh@18 -- # local node=
00:02:50.172 12:30:49 -- setup/common.sh@19 -- # local var val
00:02:50.172 12:30:49 -- setup/common.sh@20 -- # local mem_f mem
00:02:50.172 12:30:49 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:50.172 12:30:49 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:50.172 12:30:49 -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:50.172 12:30:49 -- setup/common.sh@28 -- # mapfile -t mem
00:02:50.172 12:30:49 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:50.172 12:30:49 -- setup/common.sh@31 -- # IFS=': '
00:02:50.172 12:30:49 -- setup/common.sh@31 -- # read -r var val _
00:02:50.172 12:30:49 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37146632 kB' 'MemAvailable: 42280184 kB' 'Buffers: 3748 kB' 'Cached: 18539396 kB' 'SwapCached: 0 kB' 'Active: 14427848 kB' 'Inactive: 4650840 kB' 'Active(anon): 13779300 kB' 'Inactive(anon): 0 kB' 'Active(file): 648548 kB' 'Inactive(file): 4650840 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 538760 kB' 'Mapped: 206744 kB' 'Shmem: 13243756 kB' 'KReclaimable: 478816 kB' 'Slab: 889136 kB' 'SReclaimable: 478816 kB' 'SUnreclaim: 410320 kB' 'KernelStack: 12960 kB' 'PageTables: 8540 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14936248 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 200104 kB' 'VmallocChunk: 0 kB' 'Percpu: 55488 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2883164 kB' 'DirectMap2M: 40028160 kB' 'DirectMap1G: 26214400 kB'
00:02:50.173 12:30:49 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:50.173 12:30:49 -- setup/common.sh@33 -- # echo 0
00:02:50.173 12:30:49 -- setup/common.sh@33 -- # return 0
00:02:50.173 12:30:49 -- setup/hugepages.sh@99 -- # surp=0
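Each of these passes re-reads and scans the whole file to fetch one field. Where only a single system-wide or per-node value is needed, an equivalent one-liner gives the same answer (shown purely as an equivalent, not what setup/common.sh does):

    # System-wide surplus hugepages (second column of the matching line):
    awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo
    # Per-node variant; per-node lines read "Node 0 HugePages_Surp: 0":
    awk '$1 == "Node" && $2 == 0 && $3 == "HugePages_Surp:" {print $4}' \
        /sys/devices/system/node/node0/meminfo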
00:02:50.173 12:30:49 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:02:50.173 12:30:49 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:02:50.173 12:30:49 -- setup/common.sh@18 -- # local node=
00:02:50.173 12:30:49 -- setup/common.sh@19 -- # local var val
00:02:50.173 12:30:49 -- setup/common.sh@20 -- # local mem_f mem
00:02:50.173 12:30:49 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:50.173 12:30:49 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:50.173 12:30:49 -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:50.173 12:30:49 -- setup/common.sh@28 -- # mapfile -t mem
00:02:50.173 12:30:49 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:50.173 12:30:49 -- setup/common.sh@31 -- # IFS=': '
00:02:50.173 12:30:49 -- setup/common.sh@31 -- # read -r var val _
00:02:50.173 12:30:49 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37146632 kB' 'MemAvailable: 42280184 kB' 'Buffers: 3748 kB' 'Cached: 18539412 kB' 'SwapCached: 0 kB' 'Active: 14427012 kB' 'Inactive: 4650840 kB' 'Active(anon): 13778464 kB' 'Inactive(anon): 0 kB' 'Active(file): 648548 kB' 'Inactive(file): 4650840 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 537876 kB' 'Mapped: 206664 kB' 'Shmem: 13243772 kB' 'KReclaimable: 478816 kB' 'Slab: 889156 kB' 'SReclaimable: 478816 kB' 'SUnreclaim: 410340 kB' 'KernelStack: 13008 kB' 'PageTables: 8644 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14936264 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 200104 kB' 'VmallocChunk: 0 kB' 'Percpu: 55488 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2883164 kB' 'DirectMap2M: 40028160 kB' 'DirectMap1G: 26214400 kB'
00:02:50.175 12:30:49 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:50.175 12:30:49 -- setup/common.sh@33 -- # echo 0
00:02:50.175 12:30:49 -- setup/common.sh@33 -- # return 0
00:02:50.175 12:30:49 -- setup/hugepages.sh@100 -- # resv=0
00:02:50.175 12:30:49 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:02:50.175 nr_hugepages=1024
00:02:50.175 12:30:49 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:02:50.175 resv_hugepages=0
00:02:50.175 12:30:49 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:02:50.175 surplus_hugepages=0
00:02:50.175 12:30:49 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:02:50.175 anon_hugepages=0
00:02:50.175 12:30:49 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:50.175 12:30:49 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
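The two arithmetic checks above are the core of verify_nr_hugepages: the 1024 pages this test expects must be covered by the configured total plus surplus plus reserved pages, and with both of those at zero here, must match the total exactly. As a sketch, again on top of the get_meminfo helper sketched earlier (the expected count is this run's value):

    expected=1024
    nr_hugepages=$(get_meminfo HugePages_Total)  # 1024 here
    surp=$(get_meminfo HugePages_Surp)           # 0 here
    resv=$(get_meminfo HugePages_Rsvd)           # 0 here

    # Expected pages must equal total + surplus + reserved ...
    (( expected == nr_hugepages + surp + resv )) || exit 1
    # ... and, with no surplus or reserved pages, equal the total exactly.
    (( expected == nr_hugepages )) && echo "nr_hugepages=$nr_hugepages verified"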
34359738367 kB' 'VmallocUsed: 200104 kB' 'VmallocChunk: 0 kB' 'Percpu: 55488 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2883164 kB' 'DirectMap2M: 40028160 kB' 'DirectMap1G: 26214400 kB' 00:02:50.175 12:30:49 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.175 12:30:49 -- setup/common.sh@32 -- # continue 00:02:50.175 12:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.175 12:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.175 12:30:49 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.175 12:30:49 -- setup/common.sh@32 -- # continue 00:02:50.175 12:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.175 12:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.175 12:30:49 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.175 12:30:49 -- setup/common.sh@32 -- # continue 00:02:50.175 12:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.175 12:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.175 12:30:49 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.175 12:30:49 -- setup/common.sh@32 -- # continue 00:02:50.175 12:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.175 12:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.175 12:30:49 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.175 12:30:49 -- setup/common.sh@32 -- # continue 00:02:50.175 12:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.175 12:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.175 12:30:49 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.175 12:30:49 -- setup/common.sh@32 -- # continue 00:02:50.175 12:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.175 12:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.175 12:30:49 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.175 12:30:49 -- setup/common.sh@32 -- # continue 00:02:50.175 12:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.175 12:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.175 12:30:49 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.175 12:30:49 -- setup/common.sh@32 -- # continue 00:02:50.175 12:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.175 12:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.175 12:30:49 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.175 12:30:49 -- setup/common.sh@32 -- # continue 00:02:50.175 12:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.175 12:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.175 12:30:49 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.175 12:30:49 -- setup/common.sh@32 -- # continue 00:02:50.175 12:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.175 12:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.176 12:30:49 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.176 12:30:49 -- setup/common.sh@32 -- # continue 00:02:50.176 12:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.176 12:30:49 -- 
setup/common.sh@31 -- # read -r var val _ 00:02:50.176 12:30:49 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.176 12:30:49 -- setup/common.sh@32 -- # continue 00:02:50.176 12:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.176 12:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.176 12:30:49 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.176 12:30:49 -- setup/common.sh@32 -- # continue 00:02:50.176 12:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.176 12:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.176 12:30:49 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.176 12:30:49 -- setup/common.sh@32 -- # continue 00:02:50.176 12:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.176 12:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.176 12:30:49 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.176 12:30:49 -- setup/common.sh@32 -- # continue 00:02:50.176 12:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.176 12:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.176 12:30:49 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.176 12:30:49 -- setup/common.sh@32 -- # continue 00:02:50.176 12:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.176 12:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.176 12:30:49 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.176 12:30:49 -- setup/common.sh@32 -- # continue 00:02:50.176 12:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.176 12:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.176 12:30:49 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.176 12:30:49 -- setup/common.sh@32 -- # continue 00:02:50.176 12:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.176 12:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.176 12:30:49 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.176 12:30:49 -- setup/common.sh@32 -- # continue 00:02:50.176 12:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.176 12:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.176 12:30:49 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.176 12:30:49 -- setup/common.sh@32 -- # continue 00:02:50.176 12:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.176 12:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.176 12:30:49 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.176 12:30:49 -- setup/common.sh@32 -- # continue 00:02:50.176 12:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.176 12:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.176 12:30:49 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.176 12:30:49 -- setup/common.sh@32 -- # continue 00:02:50.176 12:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.176 12:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.176 12:30:49 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.176 12:30:49 -- setup/common.sh@32 -- # continue 00:02:50.176 12:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.176 12:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.176 12:30:49 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.176 12:30:49 -- 
setup/common.sh@32 -- # continue 00:02:50.176 12:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.176 12:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.176 12:30:49 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.176 12:30:49 -- setup/common.sh@32 -- # continue 00:02:50.176 12:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.176 12:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.176 12:30:49 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.176 12:30:49 -- setup/common.sh@32 -- # continue 00:02:50.176 12:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.176 12:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.176 12:30:49 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.176 12:30:49 -- setup/common.sh@32 -- # continue 00:02:50.176 12:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.176 12:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.176 12:30:49 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.176 12:30:49 -- setup/common.sh@32 -- # continue 00:02:50.176 12:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.176 12:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.176 12:30:49 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.176 12:30:49 -- setup/common.sh@32 -- # continue 00:02:50.176 12:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.176 12:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.176 12:30:49 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.176 12:30:49 -- setup/common.sh@32 -- # continue 00:02:50.176 12:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.176 12:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.176 12:30:49 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.176 12:30:49 -- setup/common.sh@32 -- # continue 00:02:50.176 12:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.176 12:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.176 12:30:49 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.176 12:30:49 -- setup/common.sh@32 -- # continue 00:02:50.176 12:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.176 12:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.176 12:30:49 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.176 12:30:49 -- setup/common.sh@32 -- # continue 00:02:50.176 12:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.176 12:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.176 12:30:49 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.176 12:30:49 -- setup/common.sh@32 -- # continue 00:02:50.176 12:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.176 12:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.176 12:30:49 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.176 12:30:49 -- setup/common.sh@32 -- # continue 00:02:50.176 12:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.176 12:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.176 12:30:49 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.176 12:30:49 -- setup/common.sh@32 -- # continue 00:02:50.176 12:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.176 12:30:49 -- setup/common.sh@31 -- # read -r var 
val _ 00:02:50.176 12:30:49 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.176 12:30:49 -- setup/common.sh@32 -- # continue 00:02:50.176 12:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.176 12:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.176 12:30:49 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.176 12:30:49 -- setup/common.sh@32 -- # continue 00:02:50.176 12:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.176 12:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.176 12:30:49 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.176 12:30:49 -- setup/common.sh@32 -- # continue 00:02:50.176 12:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.176 12:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.176 12:30:49 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.176 12:30:49 -- setup/common.sh@32 -- # continue 00:02:50.176 12:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.176 12:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.176 12:30:49 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.176 12:30:49 -- setup/common.sh@32 -- # continue 00:02:50.176 12:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.176 12:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.176 12:30:49 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.176 12:30:49 -- setup/common.sh@32 -- # continue 00:02:50.176 12:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.176 12:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.176 12:30:49 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.176 12:30:49 -- setup/common.sh@32 -- # continue 00:02:50.176 12:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.176 12:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.176 12:30:49 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.176 12:30:49 -- setup/common.sh@32 -- # continue 00:02:50.176 12:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.176 12:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.176 12:30:49 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.176 12:30:49 -- setup/common.sh@32 -- # continue 00:02:50.176 12:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.176 12:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.176 12:30:49 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.176 12:30:49 -- setup/common.sh@32 -- # continue 00:02:50.176 12:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.176 12:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.176 12:30:49 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.176 12:30:49 -- setup/common.sh@32 -- # continue 00:02:50.176 12:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.176 12:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.176 12:30:49 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.176 12:30:49 -- setup/common.sh@32 -- # continue 00:02:50.176 12:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.176 12:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.176 12:30:49 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.176 12:30:49 -- 
setup/common.sh@33 -- # echo 1024 00:02:50.176 12:30:49 -- setup/common.sh@33 -- # return 0 00:02:50.176 12:30:49 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:50.176 12:30:49 -- setup/hugepages.sh@112 -- # get_nodes 00:02:50.176 12:30:49 -- setup/hugepages.sh@27 -- # local node 00:02:50.176 12:30:49 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:50.176 12:30:49 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:02:50.177 12:30:49 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:50.177 12:30:49 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:02:50.177 12:30:49 -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:50.177 12:30:49 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:50.177 12:30:49 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:50.177 12:30:49 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:50.177 12:30:49 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:50.177 12:30:49 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:50.177 12:30:49 -- setup/common.sh@18 -- # local node=0 00:02:50.177 12:30:49 -- setup/common.sh@19 -- # local var val 00:02:50.177 12:30:49 -- setup/common.sh@20 -- # local mem_f mem 00:02:50.177 12:30:49 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:50.177 12:30:49 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:50.177 12:30:49 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:50.177 12:30:49 -- setup/common.sh@28 -- # mapfile -t mem 00:02:50.177 12:30:49 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:50.177 12:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.177 12:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.177 12:30:49 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 16750484 kB' 'MemUsed: 16126456 kB' 'SwapCached: 0 kB' 'Active: 8678716 kB' 'Inactive: 4296372 kB' 'Active(anon): 8310068 kB' 'Inactive(anon): 0 kB' 'Active(file): 368648 kB' 'Inactive(file): 4296372 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 12688776 kB' 'Mapped: 68544 kB' 'AnonPages: 289384 kB' 'Shmem: 8023756 kB' 'KernelStack: 6920 kB' 'PageTables: 4952 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 340004 kB' 'Slab: 546696 kB' 'SReclaimable: 340004 kB' 'SUnreclaim: 206692 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:02:50.177 12:30:49 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.177 12:30:49 -- setup/common.sh@32 -- # continue 00:02:50.177 12:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.177 12:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.177 12:30:49 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.177 12:30:49 -- setup/common.sh@32 -- # continue 00:02:50.177 12:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.177 12:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.177 12:30:49 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.177 12:30:49 -- setup/common.sh@32 -- # continue 00:02:50.177 12:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.177 12:30:49 -- setup/common.sh@31 -- # 
read -r var val _ 00:02:50.177 12:30:49 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.177 12:30:49 -- setup/common.sh@32 -- # continue 00:02:50.177 12:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.177 12:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.177 12:30:49 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.177 12:30:49 -- setup/common.sh@32 -- # continue 00:02:50.177 12:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.177 12:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.177 12:30:49 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.177 12:30:49 -- setup/common.sh@32 -- # continue 00:02:50.177 12:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.177 12:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.177 12:30:49 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.177 12:30:49 -- setup/common.sh@32 -- # continue 00:02:50.177 12:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.177 12:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.177 12:30:49 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.177 12:30:49 -- setup/common.sh@32 -- # continue 00:02:50.177 12:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.177 12:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.177 12:30:49 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.177 12:30:49 -- setup/common.sh@32 -- # continue 00:02:50.177 12:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.177 12:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.177 12:30:49 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.177 12:30:49 -- setup/common.sh@32 -- # continue 00:02:50.177 12:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.177 12:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.177 12:30:49 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.177 12:30:49 -- setup/common.sh@32 -- # continue 00:02:50.177 12:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.177 12:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.177 12:30:49 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.177 12:30:49 -- setup/common.sh@32 -- # continue 00:02:50.177 12:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.177 12:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.177 12:30:49 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.177 12:30:49 -- setup/common.sh@32 -- # continue 00:02:50.177 12:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.177 12:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.177 12:30:49 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.177 12:30:49 -- setup/common.sh@32 -- # continue 00:02:50.177 12:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.177 12:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.177 12:30:49 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.177 12:30:49 -- setup/common.sh@32 -- # continue 00:02:50.177 12:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.177 12:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.177 12:30:49 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.177 12:30:49 -- setup/common.sh@32 -- # continue 00:02:50.177 
12:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.177 12:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.177 12:30:49 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.177 12:30:49 -- setup/common.sh@32 -- # continue 00:02:50.177 12:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.177 12:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.177 12:30:49 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.177 12:30:49 -- setup/common.sh@32 -- # continue 00:02:50.177 12:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.177 12:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.177 12:30:49 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.177 12:30:49 -- setup/common.sh@32 -- # continue 00:02:50.177 12:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.177 12:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.177 12:30:49 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.177 12:30:49 -- setup/common.sh@32 -- # continue 00:02:50.177 12:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.177 12:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.177 12:30:49 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.177 12:30:49 -- setup/common.sh@32 -- # continue 00:02:50.177 12:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.177 12:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.177 12:30:49 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.177 12:30:49 -- setup/common.sh@32 -- # continue 00:02:50.177 12:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.177 12:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.177 12:30:49 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.177 12:30:49 -- setup/common.sh@32 -- # continue 00:02:50.177 12:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.177 12:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.177 12:30:49 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.177 12:30:49 -- setup/common.sh@32 -- # continue 00:02:50.177 12:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.177 12:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.177 12:30:49 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.177 12:30:49 -- setup/common.sh@32 -- # continue 00:02:50.177 12:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.177 12:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.177 12:30:49 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.177 12:30:49 -- setup/common.sh@32 -- # continue 00:02:50.177 12:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.177 12:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.177 12:30:49 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.177 12:30:49 -- setup/common.sh@32 -- # continue 00:02:50.177 12:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.177 12:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.177 12:30:49 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.177 12:30:49 -- setup/common.sh@32 -- # continue 00:02:50.177 12:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.177 12:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.177 12:30:49 -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.177 12:30:49 -- setup/common.sh@32 -- # continue 00:02:50.177 12:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.177 12:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.177 12:30:49 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.177 12:30:49 -- setup/common.sh@32 -- # continue 00:02:50.177 12:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.177 12:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.177 12:30:49 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.177 12:30:49 -- setup/common.sh@32 -- # continue 00:02:50.177 12:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.177 12:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.177 12:30:49 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.177 12:30:49 -- setup/common.sh@32 -- # continue 00:02:50.177 12:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.177 12:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.177 12:30:49 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.177 12:30:49 -- setup/common.sh@32 -- # continue 00:02:50.177 12:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.178 12:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.178 12:30:49 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.178 12:30:49 -- setup/common.sh@32 -- # continue 00:02:50.178 12:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.178 12:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.178 12:30:49 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.178 12:30:49 -- setup/common.sh@32 -- # continue 00:02:50.178 12:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.178 12:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.178 12:30:49 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.178 12:30:49 -- setup/common.sh@32 -- # continue 00:02:50.178 12:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.178 12:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.178 12:30:49 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.178 12:30:49 -- setup/common.sh@33 -- # echo 0 00:02:50.178 12:30:49 -- setup/common.sh@33 -- # return 0 00:02:50.178 12:30:49 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:50.178 12:30:49 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:50.178 12:30:49 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:50.178 12:30:49 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:50.178 12:30:49 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:02:50.178 node0=1024 expecting 1024 00:02:50.178 12:30:49 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:02:50.178 00:02:50.178 real 0m3.205s 00:02:50.178 user 0m1.305s 00:02:50.178 sys 0m1.834s 00:02:50.178 12:30:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:02:50.178 12:30:49 -- common/autotest_common.sh@10 -- # set +x 00:02:50.178 ************************************ 00:02:50.178 END TEST no_shrink_alloc 00:02:50.178 ************************************ 00:02:50.178 12:30:49 -- setup/hugepages.sh@217 -- # clear_hp 00:02:50.178 12:30:49 -- setup/hugepages.sh@37 -- # local node hp 00:02:50.178 12:30:49 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:50.178 
12:30:49 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:50.178 12:30:49 -- setup/hugepages.sh@41 -- # echo 0 00:02:50.178 12:30:49 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:50.178 12:30:49 -- setup/hugepages.sh@41 -- # echo 0 00:02:50.178 12:30:49 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:50.178 12:30:49 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:50.178 12:30:49 -- setup/hugepages.sh@41 -- # echo 0 00:02:50.178 12:30:49 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:50.178 12:30:49 -- setup/hugepages.sh@41 -- # echo 0 00:02:50.178 12:30:49 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:02:50.178 12:30:49 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:02:50.178 00:02:50.178 real 0m14.006s 00:02:50.178 user 0m5.047s 00:02:50.178 sys 0m6.891s 00:02:50.178 12:30:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:02:50.178 12:30:49 -- common/autotest_common.sh@10 -- # set +x 00:02:50.178 ************************************ 00:02:50.178 END TEST hugepages 00:02:50.178 ************************************ 00:02:50.178 12:30:49 -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:02:50.178 12:30:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:50.178 12:30:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:50.178 12:30:49 -- common/autotest_common.sh@10 -- # set +x 00:02:50.436 ************************************ 00:02:50.436 START TEST driver 00:02:50.436 ************************************ 00:02:50.436 12:30:49 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:02:50.436 * Looking for test storage... 
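For readers following the trace: the long run of per-field checks in the hugepages test above is a single loop in setup/common.sh matching one key of /proc/meminfo (or a per-node sysfs meminfo) at a time, and the echo-0 loop at the end is the hugepage cleanup. The sketch below is a minimal standalone condensation of that logic, assuming bash 4+ with extglob; it mirrors the mapfile/read pattern visible in the trace but is not the SPDK helper verbatim.

    #!/usr/bin/env bash
    shopt -s extglob

    # get_meminfo FIELD [NODE] -- print FIELD's value from /proc/meminfo,
    # or from /sys/devices/system/node/node$NODE/meminfo when NODE is given.
    get_meminfo() {
        local get=$1 node=$2 var val _
        local mem_f=/proc/meminfo mem=()
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem <"$mem_f"
        # Per-node files prefix every line with "Node <n> "; strip it so the
        # same "Key: value" parse works for both sources.
        mem=("${mem[@]#Node +([0-9]) }")
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && echo "$val" && return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    # clear_hp -- reset nr_hugepages to 0 for every page size on every node,
    # matching the echo-0 loop at the end of the hugepages test above.
    clear_hp() {
        local node hp
        for node in /sys/devices/system/node/node[0-9]*; do
            for hp in "$node/hugepages/hugepages-"*; do
                echo 0 >"$hp/nr_hugepages"
            done
        done
    }

    get_meminfo HugePages_Total      # 1024 on this node, per the trace
    get_meminfo HugePages_Surp 0     # per-node query; 0 in the trace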
00:02:50.436 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:50.436 12:30:49 -- setup/driver.sh@68 -- # setup reset 00:02:50.436 12:30:49 -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:50.436 12:30:49 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:52.991 12:30:51 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:02:52.991 12:30:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:52.991 12:30:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:52.991 12:30:51 -- common/autotest_common.sh@10 -- # set +x 00:02:53.254 ************************************ 00:02:53.254 START TEST guess_driver 00:02:53.254 ************************************ 00:02:53.254 12:30:52 -- common/autotest_common.sh@1111 -- # guess_driver 00:02:53.254 12:30:52 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:02:53.254 12:30:52 -- setup/driver.sh@47 -- # local fail=0 00:02:53.254 12:30:52 -- setup/driver.sh@49 -- # pick_driver 00:02:53.254 12:30:52 -- setup/driver.sh@36 -- # vfio 00:02:53.254 12:30:52 -- setup/driver.sh@21 -- # local iommu_grups 00:02:53.254 12:30:52 -- setup/driver.sh@22 -- # local unsafe_vfio 00:02:53.254 12:30:52 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:02:53.254 12:30:52 -- setup/driver.sh@25 -- # unsafe_vfio=N 00:02:53.254 12:30:52 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:02:53.254 12:30:52 -- setup/driver.sh@29 -- # (( 189 > 0 )) 00:02:53.254 12:30:52 -- setup/driver.sh@30 -- # is_driver vfio_pci 00:02:53.254 12:30:52 -- setup/driver.sh@14 -- # mod vfio_pci 00:02:53.254 12:30:52 -- setup/driver.sh@12 -- # dep vfio_pci 00:02:53.254 12:30:52 -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:02:53.254 12:30:52 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:02:53.254 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:02:53.254 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:02:53.254 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:02:53.254 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:02:53.254 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:02:53.254 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:02:53.254 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:02:53.254 12:30:52 -- setup/driver.sh@30 -- # return 0 00:02:53.254 12:30:52 -- setup/driver.sh@37 -- # echo vfio-pci 00:02:53.254 12:30:52 -- setup/driver.sh@49 -- # driver=vfio-pci 00:02:53.254 12:30:52 -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:02:53.254 12:30:52 -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:02:53.254 Looking for driver=vfio-pci 00:02:53.254 12:30:52 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:53.254 12:30:52 -- setup/driver.sh@45 -- # setup output config 00:02:53.254 12:30:52 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:53.254 12:30:52 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:54.663 12:30:53 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:54.663 12:30:53 -- setup/driver.sh@61 -- # [[ vfio-pci == 
(Note: the trace above records "local iommu_grups" at setup/driver.sh@21; the variable actually assigned later, at driver.sh@27, is iommu_groups, so the declared name is a typo in the script source.)
vfio-pci ]] 00:02:54.663 12:30:53 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:54.663 12:30:53 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:54.663 12:30:53 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:54.663 12:30:53 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:54.663 12:30:53 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:54.663 12:30:53 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:54.663 12:30:53 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:54.663 12:30:53 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:54.663 12:30:53 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:54.663 12:30:53 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:54.663 12:30:53 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:54.663 12:30:53 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:54.663 12:30:53 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:54.663 12:30:53 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:54.663 12:30:53 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:54.663 12:30:53 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:54.663 12:30:53 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:54.663 12:30:53 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:54.663 12:30:53 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:54.663 12:30:53 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:54.663 12:30:53 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:54.663 12:30:53 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:54.663 12:30:53 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:54.663 12:30:53 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:54.663 12:30:53 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:54.663 12:30:53 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:54.663 12:30:53 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:54.663 12:30:53 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:54.663 12:30:53 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:54.663 12:30:53 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:54.663 12:30:53 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:54.663 12:30:53 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:54.663 12:30:53 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:54.663 12:30:53 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:54.663 12:30:53 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:54.663 12:30:53 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:54.663 12:30:53 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:54.663 12:30:53 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:54.663 12:30:53 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:54.663 12:30:53 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:54.663 12:30:53 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:54.663 12:30:53 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:54.663 12:30:53 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:54.663 12:30:53 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:54.663 12:30:53 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:54.663 12:30:53 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:56.579 12:30:55 -- setup/driver.sh@58 -- # [[ 
-> == \-\> ]] 00:02:56.579 12:30:55 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:56.579 12:30:55 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:56.579 12:30:55 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:02:56.579 12:30:55 -- setup/driver.sh@65 -- # setup reset 00:02:56.579 12:30:55 -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:56.579 12:30:55 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:59.109 00:02:59.109 real 0m5.972s 00:02:59.109 user 0m1.172s 00:02:59.109 sys 0m2.013s 00:02:59.109 12:30:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:02:59.109 12:30:58 -- common/autotest_common.sh@10 -- # set +x 00:02:59.109 ************************************ 00:02:59.109 END TEST guess_driver 00:02:59.109 ************************************ 00:02:59.109 00:02:59.109 real 0m8.794s 00:02:59.109 user 0m1.846s 00:02:59.109 sys 0m3.166s 00:02:59.109 12:30:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:02:59.109 12:30:58 -- common/autotest_common.sh@10 -- # set +x 00:02:59.109 ************************************ 00:02:59.109 END TEST driver 00:02:59.109 ************************************ 00:02:59.109 12:30:58 -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:02:59.109 12:30:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:59.109 12:30:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:59.109 12:30:58 -- common/autotest_common.sh@10 -- # set +x 00:02:59.109 ************************************ 00:02:59.109 START TEST devices 00:02:59.109 ************************************ 00:02:59.109 12:30:58 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:02:59.367 * Looking for test storage... 
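The guess_driver run that just completed boils down to three checks: does /sys/module/vfio expose the unsafe no-IOMMU knob, are there any IOMMU groups (189 on this node), and does modprobe --show-depends resolve vfio_pci to real .ko files. A minimal sketch of that decision follows; the function name guess_driver_sketch and the uio_pci_generic fallback are assumptions for illustration, not taken from this log.

    #!/usr/bin/env bash
    shopt -s nullglob   # so an empty iommu_groups directory counts as zero

    guess_driver_sketch() {
        local groups=(/sys/kernel/iommu_groups/*)
        local unsafe_vfio=N
        if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
            unsafe_vfio=$(</sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
        fi
        # vfio-pci is viable when IOMMU groups exist (or unsafe no-IOMMU mode
        # is on) and the module dependency chain resolves to actual .ko files.
        if ((${#groups[@]} > 0)) || [[ $unsafe_vfio == [Yy] ]]; then
            if [[ $(modprobe --show-depends vfio_pci 2>/dev/null) == *.ko* ]]; then
                echo vfio-pci
                return 0
            fi
        fi
        echo uio_pci_generic   # assumed fallback when vfio is unavailable
    }

    guess_driver_sketch    # prints vfio-pci on the node in this log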
00:02:59.367 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:59.368 12:30:58 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:02:59.368 12:30:58 -- setup/devices.sh@192 -- # setup reset 00:02:59.368 12:30:58 -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:59.368 12:30:58 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:01.270 12:30:59 -- setup/devices.sh@194 -- # get_zoned_devs 00:03:01.270 12:30:59 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:03:01.270 12:30:59 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:03:01.270 12:30:59 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:03:01.271 12:30:59 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:01.271 12:30:59 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:03:01.271 12:30:59 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:03:01.271 12:30:59 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:01.271 12:30:59 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:01.271 12:30:59 -- setup/devices.sh@196 -- # blocks=() 00:03:01.271 12:30:59 -- setup/devices.sh@196 -- # declare -a blocks 00:03:01.271 12:30:59 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:01.271 12:30:59 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:01.271 12:30:59 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:03:01.271 12:30:59 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:01.271 12:30:59 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:03:01.271 12:30:59 -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:01.271 12:30:59 -- setup/devices.sh@202 -- # pci=0000:81:00.0 00:03:01.271 12:30:59 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\1\:\0\0\.\0* ]] 00:03:01.271 12:30:59 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:01.271 12:30:59 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:03:01.271 12:30:59 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:03:01.271 No valid GPT data, bailing 00:03:01.271 12:30:59 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:01.271 12:30:59 -- scripts/common.sh@391 -- # pt= 00:03:01.271 12:30:59 -- scripts/common.sh@392 -- # return 1 00:03:01.271 12:30:59 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:01.271 12:30:59 -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:01.271 12:30:59 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:01.271 12:30:59 -- setup/common.sh@80 -- # echo 2000398934016 00:03:01.271 12:30:59 -- setup/devices.sh@204 -- # (( 2000398934016 >= min_disk_size )) 00:03:01.271 12:30:59 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:01.271 12:30:59 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:81:00.0 00:03:01.271 12:30:59 -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:03:01.271 12:30:59 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:01.271 12:30:59 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:01.271 12:30:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:01.271 12:30:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:01.271 12:30:59 -- common/autotest_common.sh@10 -- # set +x 00:03:01.271 ************************************ 00:03:01.271 START TEST nvme_mount 00:03:01.271 ************************************ 00:03:01.271 12:30:59 -- 
common/autotest_common.sh@1111 -- # nvme_mount 00:03:01.271 12:30:59 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:01.271 12:30:59 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:01.271 12:30:59 -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:01.271 12:30:59 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:01.271 12:30:59 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:01.271 12:30:59 -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:01.271 12:30:59 -- setup/common.sh@40 -- # local part_no=1 00:03:01.271 12:30:59 -- setup/common.sh@41 -- # local size=1073741824 00:03:01.271 12:30:59 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:01.271 12:30:59 -- setup/common.sh@44 -- # parts=() 00:03:01.271 12:30:59 -- setup/common.sh@44 -- # local parts 00:03:01.271 12:30:59 -- setup/common.sh@46 -- # (( part = 1 )) 00:03:01.271 12:30:59 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:01.271 12:30:59 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:01.271 12:30:59 -- setup/common.sh@46 -- # (( part++ )) 00:03:01.271 12:30:59 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:01.271 12:30:59 -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:01.271 12:30:59 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:01.271 12:30:59 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:03:02.208 Creating new GPT entries in memory. 00:03:02.208 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:02.208 other utilities. 00:03:02.208 12:31:01 -- setup/common.sh@57 -- # (( part = 1 )) 00:03:02.208 12:31:01 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:02.208 12:31:01 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:02.208 12:31:01 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:02.208 12:31:01 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:03.146 Creating new GPT entries in memory. 00:03:03.146 The operation has completed successfully. 
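The partitioning sequence just traced, plus the mkfs/mount step that follows below, amounts to: confirm the disk carries no partition table, zap it, create one 1 GiB partition under flock, wait for the partition uevent, then format and mount. A hedged sketch, with the device and mount point taken from the log; udevadm settle is a stand-in for the scripts/sync_dev_uevents.sh wait shown in the trace:

    #!/usr/bin/env bash
    disk=/dev/nvme0n1
    mnt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount

    pt=$(blkid -s PTTYPE -o value "$disk")   # empty means no existing table
    [[ -z $pt ]] || exit 1

    size=1073741824                  # 1 GiB in bytes
    ((size /= 512))                  # sgdisk works in 512 B sectors
    part_start=2048                  # first usable, 1 MiB-aligned sector
    part_end=$((part_start + size - 1))   # 2099199, matching the trace

    sgdisk "$disk" --zap-all
    flock "$disk" sgdisk "$disk" --new=1:"$part_start":"$part_end"
    udevadm settle                   # stand-in for sync_dev_uevents.sh

    mkfs.ext4 -qF "${disk}p1"        # quiet, force -- as in the trace below
    mkdir -p "$mnt" && mount "${disk}p1" "$mnt"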
00:03:03.146 12:31:02 -- setup/common.sh@57 -- # (( part++ )) 00:03:03.146 12:31:02 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:03.146 12:31:02 -- setup/common.sh@62 -- # wait 1040190 00:03:03.146 12:31:02 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:03.146 12:31:02 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:03:03.146 12:31:02 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:03.146 12:31:02 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:03:03.146 12:31:02 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:03:03.146 12:31:02 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:03.146 12:31:02 -- setup/devices.sh@105 -- # verify 0000:81:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:03.146 12:31:02 -- setup/devices.sh@48 -- # local dev=0000:81:00.0 00:03:03.146 12:31:02 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:03:03.146 12:31:02 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:03.146 12:31:02 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:03.146 12:31:02 -- setup/devices.sh@53 -- # local found=0 00:03:03.146 12:31:02 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:03.146 12:31:02 -- setup/devices.sh@56 -- # : 00:03:03.146 12:31:02 -- setup/devices.sh@59 -- # local pci status 00:03:03.146 12:31:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:03.146 12:31:02 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:81:00.0 00:03:03.146 12:31:02 -- setup/devices.sh@47 -- # setup output config 00:03:03.146 12:31:02 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:03.146 12:31:02 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:04.523 12:31:03 -- setup/devices.sh@62 -- # [[ 0000:81:00.0 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:03:04.523 12:31:03 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:03:04.523 12:31:03 -- setup/devices.sh@63 -- # found=1 00:03:04.523 12:31:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:04.523 12:31:03 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:03:04.523 12:31:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:04.523 12:31:03 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:03:04.523 12:31:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:04.523 12:31:03 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:03:04.523 12:31:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:04.523 12:31:03 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:03:04.523 12:31:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:04.523 12:31:03 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:03:04.523 
12:31:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:04.523 12:31:03 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:03:04.523 12:31:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:04.523 12:31:03 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:03:04.523 12:31:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:04.523 12:31:03 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:03:04.523 12:31:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:04.523 12:31:03 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:03:04.523 12:31:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:04.523 12:31:03 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:03:04.523 12:31:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:04.523 12:31:03 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:03:04.523 12:31:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:04.523 12:31:03 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:03:04.523 12:31:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:04.523 12:31:03 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:03:04.523 12:31:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:04.523 12:31:03 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:03:04.523 12:31:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:04.523 12:31:03 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:03:04.523 12:31:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:04.523 12:31:03 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:03:04.523 12:31:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:04.523 12:31:03 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:04.523 12:31:03 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:04.523 12:31:03 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:04.523 12:31:03 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:04.523 12:31:03 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:04.523 12:31:03 -- setup/devices.sh@110 -- # cleanup_nvme 00:03:04.523 12:31:03 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:04.523 12:31:03 -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:04.523 12:31:03 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:04.523 12:31:03 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:04.523 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:04.523 12:31:03 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:04.523 12:31:03 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:04.782 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:04.782 /dev/nvme0n1: 8 bytes were erased at offset 0x1d1c1115e00 (gpt): 45 46 49 20 50 41 52 54 00:03:04.782 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:04.782 
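The wipefs output above is the cleanup path erasing, in order, the ext4 magic on the partition, then the primary and backup GPT headers and the protective MBR on the whole disk. A condensed sketch of that path, assuming the function name cleanup_nvme_sketch and the device naming from this log:

    #!/usr/bin/env bash
    cleanup_nvme_sketch() {
        local mnt=$1 disk=${2:-/dev/nvme0n1}
        # Unmount only if the test directory is still a mount point.
        mountpoint -q "$mnt" && umount "$mnt"
        # Wipe the partition's filesystem signature first, then every
        # signature on the disk itself (GPT headers + protective MBR).
        [[ -b ${disk}p1 ]] && wipefs --all "${disk}p1"
        [[ -b $disk ]] && wipefs --all "$disk"
    }

    cleanup_nvme_sketch /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount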
/dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:04.782 12:31:03 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:03:04.782 12:31:03 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:03:04.782 12:31:03 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:04.782 12:31:03 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:03:04.782 12:31:03 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:03:04.782 12:31:03 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:05.041 12:31:03 -- setup/devices.sh@116 -- # verify 0000:81:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:05.041 12:31:03 -- setup/devices.sh@48 -- # local dev=0000:81:00.0 00:03:05.041 12:31:03 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:03:05.041 12:31:03 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:05.041 12:31:03 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:05.041 12:31:03 -- setup/devices.sh@53 -- # local found=0 00:03:05.041 12:31:03 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:05.041 12:31:03 -- setup/devices.sh@56 -- # : 00:03:05.041 12:31:03 -- setup/devices.sh@59 -- # local pci status 00:03:05.041 12:31:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:05.041 12:31:03 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:81:00.0 00:03:05.041 12:31:03 -- setup/devices.sh@47 -- # setup output config 00:03:05.041 12:31:03 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:05.041 12:31:03 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:06.445 12:31:05 -- setup/devices.sh@62 -- # [[ 0000:81:00.0 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:03:06.445 12:31:05 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:03:06.445 12:31:05 -- setup/devices.sh@63 -- # found=1 00:03:06.445 12:31:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:06.445 12:31:05 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:03:06.445 12:31:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:06.445 12:31:05 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:03:06.445 12:31:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:06.445 12:31:05 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:03:06.445 12:31:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:06.445 12:31:05 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:03:06.445 12:31:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:06.445 12:31:05 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:03:06.445 12:31:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:06.445 12:31:05 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == 
\0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:03:06.445 12:31:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:06.445 12:31:05 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:03:06.445 12:31:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:06.445 12:31:05 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:03:06.445 12:31:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:06.445 12:31:05 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:03:06.445 12:31:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:06.445 12:31:05 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:03:06.445 12:31:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:06.445 12:31:05 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:03:06.445 12:31:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:06.445 12:31:05 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:03:06.445 12:31:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:06.445 12:31:05 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:03:06.445 12:31:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:06.445 12:31:05 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:03:06.445 12:31:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:06.445 12:31:05 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:03:06.445 12:31:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:06.445 12:31:05 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:03:06.445 12:31:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:06.445 12:31:05 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:06.445 12:31:05 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:06.445 12:31:05 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:06.445 12:31:05 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:06.445 12:31:05 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:06.445 12:31:05 -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:06.445 12:31:05 -- setup/devices.sh@125 -- # verify 0000:81:00.0 data@nvme0n1 '' '' 00:03:06.445 12:31:05 -- setup/devices.sh@48 -- # local dev=0000:81:00.0 00:03:06.445 12:31:05 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:03:06.445 12:31:05 -- setup/devices.sh@50 -- # local mount_point= 00:03:06.445 12:31:05 -- setup/devices.sh@51 -- # local test_file= 00:03:06.445 12:31:05 -- setup/devices.sh@53 -- # local found=0 00:03:06.445 12:31:05 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:06.445 12:31:05 -- setup/devices.sh@59 -- # local pci status 00:03:06.445 12:31:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:06.445 12:31:05 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:81:00.0 00:03:06.445 12:31:05 -- setup/devices.sh@47 -- # setup output config 00:03:06.445 12:31:05 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:06.445 12:31:05 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:07.822 12:31:06 -- 
setup/devices.sh@62 -- # [[ 0000:81:00.0 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:03:07.822 12:31:06 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:03:07.822 12:31:06 -- setup/devices.sh@63 -- # found=1 00:03:07.822 12:31:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:07.822 12:31:06 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:03:07.822 12:31:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:07.822 12:31:06 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:03:07.822 12:31:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:07.822 12:31:06 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:03:07.822 12:31:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:07.822 12:31:06 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:03:07.822 12:31:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:07.822 12:31:06 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:03:07.822 12:31:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:07.822 12:31:06 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:03:07.822 12:31:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:07.822 12:31:06 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:03:07.822 12:31:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:07.822 12:31:06 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:03:07.822 12:31:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:07.822 12:31:06 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:03:07.822 12:31:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:07.822 12:31:06 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:03:07.822 12:31:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:07.822 12:31:06 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:03:07.822 12:31:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:07.822 12:31:06 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:03:07.822 12:31:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:07.822 12:31:06 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:03:07.822 12:31:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:07.822 12:31:06 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:03:07.822 12:31:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:07.822 12:31:06 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:03:07.822 12:31:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:07.822 12:31:06 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:03:07.822 12:31:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:07.822 12:31:06 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:07.822 12:31:06 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:07.822 12:31:06 -- setup/devices.sh@68 -- # return 0 00:03:07.822 12:31:06 -- setup/devices.sh@128 -- # cleanup_nvme 00:03:07.822 12:31:06 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:07.822 12:31:06 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 
]] 00:03:07.822 12:31:06 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:07.822 12:31:06 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:07.822 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:07.822 00:03:07.822 real 0m6.846s 00:03:07.822 user 0m1.708s 00:03:07.822 sys 0m2.751s 00:03:07.822 12:31:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:07.822 12:31:06 -- common/autotest_common.sh@10 -- # set +x 00:03:07.822 ************************************ 00:03:07.822 END TEST nvme_mount 00:03:07.822 ************************************ 00:03:07.822 12:31:06 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:03:07.822 12:31:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:07.822 12:31:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:07.822 12:31:06 -- common/autotest_common.sh@10 -- # set +x 00:03:08.082 ************************************ 00:03:08.082 START TEST dm_mount 00:03:08.082 ************************************ 00:03:08.082 12:31:06 -- common/autotest_common.sh@1111 -- # dm_mount 00:03:08.082 12:31:06 -- setup/devices.sh@144 -- # pv=nvme0n1 00:03:08.082 12:31:06 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:03:08.082 12:31:06 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:03:08.082 12:31:06 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:03:08.082 12:31:06 -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:08.082 12:31:06 -- setup/common.sh@40 -- # local part_no=2 00:03:08.082 12:31:06 -- setup/common.sh@41 -- # local size=1073741824 00:03:08.082 12:31:06 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:08.082 12:31:06 -- setup/common.sh@44 -- # parts=() 00:03:08.082 12:31:06 -- setup/common.sh@44 -- # local parts 00:03:08.082 12:31:06 -- setup/common.sh@46 -- # (( part = 1 )) 00:03:08.082 12:31:06 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:08.082 12:31:06 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:08.082 12:31:06 -- setup/common.sh@46 -- # (( part++ )) 00:03:08.082 12:31:06 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:08.082 12:31:06 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:08.082 12:31:06 -- setup/common.sh@46 -- # (( part++ )) 00:03:08.082 12:31:06 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:08.082 12:31:06 -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:08.082 12:31:06 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:08.082 12:31:06 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:03:09.019 Creating new GPT entries in memory. 00:03:09.019 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:09.019 other utilities. 00:03:09.019 12:31:07 -- setup/common.sh@57 -- # (( part = 1 )) 00:03:09.019 12:31:07 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:09.019 12:31:07 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:09.019 12:31:07 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:09.019 12:31:07 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:09.957 Creating new GPT entries in memory. 00:03:09.957 The operation has completed successfully. 
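For readers tracing the partition_drive flow above: partition 1 has just been created, and the loop's second sgdisk call continues below. Boiled down, the traced commands amount to the following minimal sketch. The device name and 1 GiB size mirror the trace but are illustrative, and the real setup/common.sh additionally synchronizes udev block/partition events via sync_dev_uevents.sh, which this sketch omits.

#!/usr/bin/env bash
# Minimal sketch of the two-partition GPT flow traced above -- illustrative,
# not the verbatim setup/common.sh implementation.
disk=/dev/nvme0n1              # assumed device, matching the trace
size=$(( 1073741824 / 512 ))   # 1 GiB in 512-byte sectors (2097152)
sgdisk "$disk" --zap-all       # wipe any existing GPT/MBR structures
part_start=0 part_end=0
for part in 1 2; do
  (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
  (( part_end = part_start + size - 1 ))
  # yields --new=1:2048:2099199 and --new=2:2099200:4196351, as traced
  flock "$disk" sgdisk "$disk" --new="${part}:${part_start}:${part_end}"
done

The flock around each sgdisk call matches the trace and serializes writers of the partition table while the kernel re-reads it between iterations.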
00:03:09.957 12:31:08 -- setup/common.sh@57 -- # (( part++ )) 00:03:09.957 12:31:08 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:09.957 12:31:08 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:09.957 12:31:08 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:09.957 12:31:08 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:03:11.338 The operation has completed successfully. 00:03:11.338 12:31:10 -- setup/common.sh@57 -- # (( part++ )) 00:03:11.338 12:31:10 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:11.338 12:31:10 -- setup/common.sh@62 -- # wait 1042883 00:03:11.338 12:31:10 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:03:11.338 12:31:10 -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:11.338 12:31:10 -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:11.338 12:31:10 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:03:11.338 12:31:10 -- setup/devices.sh@160 -- # for t in {1..5} 00:03:11.338 12:31:10 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:11.338 12:31:10 -- setup/devices.sh@161 -- # break 00:03:11.338 12:31:10 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:11.338 12:31:10 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:03:11.338 12:31:10 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:03:11.338 12:31:10 -- setup/devices.sh@166 -- # dm=dm-0 00:03:11.338 12:31:10 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:03:11.338 12:31:10 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:03:11.338 12:31:10 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:11.338 12:31:10 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:03:11.338 12:31:10 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:11.338 12:31:10 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:11.338 12:31:10 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:03:11.338 12:31:10 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:11.338 12:31:10 -- setup/devices.sh@174 -- # verify 0000:81:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:11.338 12:31:10 -- setup/devices.sh@48 -- # local dev=0000:81:00.0 00:03:11.338 12:31:10 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:03:11.338 12:31:10 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:11.338 12:31:10 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:11.338 12:31:10 -- setup/devices.sh@53 -- # local found=0 00:03:11.338 12:31:10 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:11.338 12:31:10 -- setup/devices.sh@56 -- # : 00:03:11.338 12:31:10 -- 
setup/devices.sh@59 -- # local pci status 00:03:11.338 12:31:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:11.338 12:31:10 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:81:00.0 00:03:11.338 12:31:10 -- setup/devices.sh@47 -- # setup output config 00:03:11.338 12:31:10 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:11.338 12:31:10 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:12.712 12:31:11 -- setup/devices.sh@62 -- # [[ 0000:81:00.0 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:03:12.712 12:31:11 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:03:12.712 12:31:11 -- setup/devices.sh@63 -- # found=1 00:03:12.712 12:31:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:12.712 12:31:11 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:03:12.712 12:31:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:12.712 12:31:11 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:03:12.712 12:31:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:12.712 12:31:11 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:03:12.712 12:31:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:12.712 12:31:11 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:03:12.712 12:31:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:12.712 12:31:11 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:03:12.712 12:31:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:12.712 12:31:11 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:03:12.712 12:31:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:12.712 12:31:11 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:03:12.712 12:31:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:12.712 12:31:11 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:03:12.712 12:31:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:12.712 12:31:11 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:03:12.712 12:31:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:12.712 12:31:11 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:03:12.712 12:31:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:12.712 12:31:11 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:03:12.712 12:31:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:12.712 12:31:11 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:03:12.712 12:31:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:12.712 12:31:11 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:03:12.712 12:31:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:12.712 12:31:11 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:03:12.712 12:31:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:12.712 12:31:11 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:03:12.712 12:31:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:12.712 12:31:11 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == 
\0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:03:12.712 12:31:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:12.712 12:31:11 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:12.712 12:31:11 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:03:12.712 12:31:11 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:12.712 12:31:11 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:12.712 12:31:11 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:12.712 12:31:11 -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:12.712 12:31:11 -- setup/devices.sh@184 -- # verify 0000:81:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:03:12.712 12:31:11 -- setup/devices.sh@48 -- # local dev=0000:81:00.0 00:03:12.712 12:31:11 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:03:12.712 12:31:11 -- setup/devices.sh@50 -- # local mount_point= 00:03:12.712 12:31:11 -- setup/devices.sh@51 -- # local test_file= 00:03:12.712 12:31:11 -- setup/devices.sh@53 -- # local found=0 00:03:12.712 12:31:11 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:12.712 12:31:11 -- setup/devices.sh@59 -- # local pci status 00:03:12.712 12:31:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:12.712 12:31:11 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:81:00.0 00:03:12.712 12:31:11 -- setup/devices.sh@47 -- # setup output config 00:03:12.712 12:31:11 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:12.712 12:31:11 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:14.086 12:31:12 -- setup/devices.sh@62 -- # [[ 0000:81:00.0 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:03:14.086 12:31:12 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:03:14.086 12:31:12 -- setup/devices.sh@63 -- # found=1 00:03:14.086 12:31:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:14.086 12:31:12 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:03:14.086 12:31:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:14.086 12:31:12 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:03:14.086 12:31:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:14.086 12:31:12 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:03:14.086 12:31:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:14.086 12:31:12 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:03:14.086 12:31:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:14.086 12:31:12 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:03:14.086 12:31:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:14.086 12:31:12 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:03:14.086 12:31:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:14.086 12:31:12 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:03:14.086 12:31:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 
00:03:14.086 12:31:12 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:03:14.086 12:31:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:14.086 12:31:12 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:03:14.086 12:31:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:14.086 12:31:12 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:03:14.086 12:31:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:14.086 12:31:12 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:03:14.086 12:31:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:14.086 12:31:12 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:03:14.086 12:31:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:14.086 12:31:12 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:03:14.086 12:31:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:14.086 12:31:12 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:03:14.086 12:31:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:14.086 12:31:12 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:03:14.086 12:31:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:14.086 12:31:12 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:03:14.086 12:31:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:14.086 12:31:13 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:14.086 12:31:13 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:14.086 12:31:13 -- setup/devices.sh@68 -- # return 0 00:03:14.086 12:31:13 -- setup/devices.sh@187 -- # cleanup_dm 00:03:14.086 12:31:13 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:14.086 12:31:13 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:14.086 12:31:13 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:03:14.086 12:31:13 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:14.086 12:31:13 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:03:14.086 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:14.086 12:31:13 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:14.086 12:31:13 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:03:14.086 00:03:14.086 real 0m6.176s 00:03:14.086 user 0m1.101s 00:03:14.086 sys 0m1.955s 00:03:14.086 12:31:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:14.086 12:31:13 -- common/autotest_common.sh@10 -- # set +x 00:03:14.086 ************************************ 00:03:14.086 END TEST dm_mount 00:03:14.086 ************************************ 00:03:14.345 12:31:13 -- setup/devices.sh@1 -- # cleanup 00:03:14.345 12:31:13 -- setup/devices.sh@11 -- # cleanup_nvme 00:03:14.345 12:31:13 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:14.345 12:31:13 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:14.345 12:31:13 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:14.345 12:31:13 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:14.345 12:31:13 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:14.603 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:14.603 /dev/nvme0n1: 8 bytes were erased at offset 
0x1d1c1115e00 (gpt): 45 46 49 20 50 41 52 54 00:03:14.603 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:14.603 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:14.603 12:31:13 -- setup/devices.sh@12 -- # cleanup_dm 00:03:14.603 12:31:13 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:14.603 12:31:13 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:14.603 12:31:13 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:14.603 12:31:13 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:14.603 12:31:13 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:03:14.603 12:31:13 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:03:14.603 00:03:14.603 real 0m15.266s 00:03:14.603 user 0m3.566s 00:03:14.603 sys 0m5.931s 00:03:14.603 12:31:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:14.603 12:31:13 -- common/autotest_common.sh@10 -- # set +x 00:03:14.603 ************************************ 00:03:14.603 END TEST devices 00:03:14.603 ************************************ 00:03:14.603 00:03:14.603 real 0m51.531s 00:03:14.603 user 0m14.496s 00:03:14.603 sys 0m22.431s 00:03:14.603 12:31:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:14.603 12:31:13 -- common/autotest_common.sh@10 -- # set +x 00:03:14.603 ************************************ 00:03:14.603 END TEST setup.sh 00:03:14.603 ************************************ 00:03:14.603 12:31:13 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:16.006 Hugepages 00:03:16.006 node hugesize free / total 00:03:16.006 node0 1048576kB 0 / 0 00:03:16.006 node0 2048kB 2048 / 2048 00:03:16.006 node1 1048576kB 0 / 0 00:03:16.006 node1 2048kB 0 / 0 00:03:16.006 00:03:16.006 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:16.006 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:03:16.007 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:03:16.007 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:03:16.007 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:03:16.007 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:03:16.007 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:03:16.007 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:03:16.007 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:03:16.007 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:03:16.007 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:03:16.007 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:03:16.007 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:03:16.007 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:03:16.007 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:03:16.007 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:03:16.007 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:03:16.007 NVMe 0000:81:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:03:16.007 12:31:14 -- spdk/autotest.sh@130 -- # uname -s 00:03:16.007 12:31:14 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:03:16.007 12:31:14 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:03:16.007 12:31:14 -- common/autotest_common.sh@1517 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:17.389 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:17.389 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:17.389 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:17.389 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:17.389 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:17.389 0000:00:04.2 (8086 0e22): 
ioatdma -> vfio-pci 00:03:17.389 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:17.389 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:17.389 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:17.389 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:17.389 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:17.389 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:17.389 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:17.389 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:17.389 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:17.389 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:19.304 0000:81:00.0 (8086 0a54): nvme -> vfio-pci 00:03:19.304 12:31:18 -- common/autotest_common.sh@1518 -- # sleep 1 00:03:20.239 12:31:19 -- common/autotest_common.sh@1519 -- # bdfs=() 00:03:20.239 12:31:19 -- common/autotest_common.sh@1519 -- # local bdfs 00:03:20.239 12:31:19 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:03:20.239 12:31:19 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:03:20.239 12:31:19 -- common/autotest_common.sh@1499 -- # bdfs=() 00:03:20.239 12:31:19 -- common/autotest_common.sh@1499 -- # local bdfs 00:03:20.239 12:31:19 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:20.239 12:31:19 -- common/autotest_common.sh@1500 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:20.239 12:31:19 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:03:20.497 12:31:19 -- common/autotest_common.sh@1501 -- # (( 1 == 0 )) 00:03:20.497 12:31:19 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:81:00.0 00:03:20.497 12:31:19 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:21.432 Waiting for block devices as requested 00:03:21.690 0000:81:00.0 (8086 0a54): vfio-pci -> nvme 00:03:21.690 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:03:21.690 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:03:21.949 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:03:21.949 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:03:21.949 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:03:22.208 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:03:22.208 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:03:22.208 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:03:22.208 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:03:22.466 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:03:22.466 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:03:22.466 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:03:22.466 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:03:22.725 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:03:22.725 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:03:22.725 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:03:22.983 12:31:21 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:03:22.983 12:31:21 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:81:00.0 00:03:22.983 12:31:21 -- common/autotest_common.sh@1488 -- # readlink -f /sys/class/nvme/nvme0 00:03:22.983 12:31:21 -- common/autotest_common.sh@1488 -- # grep 0000:81:00.0/nvme/nvme 00:03:22.983 12:31:21 -- common/autotest_common.sh@1488 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:01.0/0000:81:00.0/nvme/nvme0 00:03:22.983 12:31:21 -- common/autotest_common.sh@1489 -- # [[ -z /sys/devices/pci0000:80/0000:80:01.0/0000:81:00.0/nvme/nvme0 ]] 00:03:22.983 12:31:21 -- 
common/autotest_common.sh@1493 -- # basename /sys/devices/pci0000:80/0000:80:01.0/0000:81:00.0/nvme/nvme0 00:03:22.983 12:31:21 -- common/autotest_common.sh@1493 -- # printf '%s\n' nvme0 00:03:22.983 12:31:21 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:03:22.983 12:31:21 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:03:22.983 12:31:21 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:03:22.983 12:31:21 -- common/autotest_common.sh@1531 -- # grep oacs 00:03:22.983 12:31:21 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:03:22.983 12:31:21 -- common/autotest_common.sh@1531 -- # oacs=' 0xe' 00:03:22.983 12:31:21 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:03:22.983 12:31:21 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:03:22.983 12:31:21 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:03:22.983 12:31:21 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:03:22.983 12:31:21 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:22.983 12:31:21 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:03:22.983 12:31:21 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:03:22.983 12:31:21 -- common/autotest_common.sh@1543 -- # continue 00:03:22.983 12:31:21 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:03:22.983 12:31:21 -- common/autotest_common.sh@716 -- # xtrace_disable 00:03:22.983 12:31:21 -- common/autotest_common.sh@10 -- # set +x 00:03:22.983 12:31:21 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:03:22.983 12:31:21 -- common/autotest_common.sh@710 -- # xtrace_disable 00:03:22.983 12:31:21 -- common/autotest_common.sh@10 -- # set +x 00:03:22.983 12:31:21 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:24.358 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:24.358 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:24.358 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:24.358 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:24.358 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:24.358 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:24.358 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:24.358 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:24.358 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:24.358 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:24.358 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:24.358 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:24.358 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:24.358 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:24.617 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:24.617 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:26.532 0000:81:00.0 (8086 0a54): nvme -> vfio-pci 00:03:26.532 12:31:25 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:03:26.532 12:31:25 -- common/autotest_common.sh@716 -- # xtrace_disable 00:03:26.532 12:31:25 -- common/autotest_common.sh@10 -- # set +x 00:03:26.532 12:31:25 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:03:26.532 12:31:25 -- common/autotest_common.sh@1577 -- # mapfile -t bdfs 00:03:26.532 12:31:25 -- common/autotest_common.sh@1577 -- # get_nvme_bdfs_by_id 0x0a54 00:03:26.532 12:31:25 -- common/autotest_common.sh@1563 -- # bdfs=() 00:03:26.532 12:31:25 -- common/autotest_common.sh@1563 -- # local bdfs 00:03:26.532 12:31:25 -- common/autotest_common.sh@1565 -- # get_nvme_bdfs 00:03:26.532 12:31:25 -- common/autotest_common.sh@1499 -- # bdfs=() 00:03:26.532 
12:31:25 -- common/autotest_common.sh@1499 -- # local bdfs 00:03:26.532 12:31:25 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:26.532 12:31:25 -- common/autotest_common.sh@1500 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:26.532 12:31:25 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:03:26.532 12:31:25 -- common/autotest_common.sh@1501 -- # (( 1 == 0 )) 00:03:26.532 12:31:25 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:81:00.0 00:03:26.532 12:31:25 -- common/autotest_common.sh@1565 -- # for bdf in $(get_nvme_bdfs) 00:03:26.532 12:31:25 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:81:00.0/device 00:03:26.532 12:31:25 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:03:26.532 12:31:25 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:03:26.532 12:31:25 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:03:26.532 12:31:25 -- common/autotest_common.sh@1572 -- # printf '%s\n' 0000:81:00.0 00:03:26.532 12:31:25 -- common/autotest_common.sh@1578 -- # [[ -z 0000:81:00.0 ]] 00:03:26.532 12:31:25 -- common/autotest_common.sh@1583 -- # spdk_tgt_pid=1048795 00:03:26.533 12:31:25 -- common/autotest_common.sh@1582 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:26.533 12:31:25 -- common/autotest_common.sh@1584 -- # waitforlisten 1048795 00:03:26.533 12:31:25 -- common/autotest_common.sh@817 -- # '[' -z 1048795 ']' 00:03:26.533 12:31:25 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:26.533 12:31:25 -- common/autotest_common.sh@822 -- # local max_retries=100 00:03:26.533 12:31:25 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:26.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:26.533 12:31:25 -- common/autotest_common.sh@826 -- # xtrace_disable 00:03:26.533 12:31:25 -- common/autotest_common.sh@10 -- # set +x 00:03:26.533 [2024-04-16 12:31:25.433160] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 
00:03:26.533 [2024-04-16 12:31:25.433258] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1048795 ] 00:03:26.533 EAL: No free 2048 kB hugepages reported on node 1 00:03:26.533 [2024-04-16 12:31:25.500522] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:26.791 [2024-04-16 12:31:25.608201] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:27.050 12:31:25 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:03:27.050 12:31:25 -- common/autotest_common.sh@850 -- # return 0 00:03:27.050 12:31:25 -- common/autotest_common.sh@1586 -- # bdf_id=0 00:03:27.050 12:31:25 -- common/autotest_common.sh@1587 -- # for bdf in "${bdfs[@]}" 00:03:27.050 12:31:25 -- common/autotest_common.sh@1588 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:81:00.0 00:03:30.340 nvme0n1 00:03:30.340 12:31:28 -- common/autotest_common.sh@1590 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:03:30.340 [2024-04-16 12:31:29.180143] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:03:30.340 request: 00:03:30.340 { 00:03:30.340 "nvme_ctrlr_name": "nvme0", 00:03:30.340 "password": "test", 00:03:30.340 "method": "bdev_nvme_opal_revert", 00:03:30.340 "req_id": 1 00:03:30.340 } 00:03:30.340 Got JSON-RPC error response 00:03:30.340 response: 00:03:30.340 { 00:03:30.340 "code": -32602, 00:03:30.340 "message": "Invalid parameters" 00:03:30.340 } 00:03:30.340 12:31:29 -- common/autotest_common.sh@1590 -- # true 00:03:30.340 12:31:29 -- common/autotest_common.sh@1591 -- # (( ++bdf_id )) 00:03:30.340 12:31:29 -- common/autotest_common.sh@1594 -- # killprocess 1048795 00:03:30.340 12:31:29 -- common/autotest_common.sh@936 -- # '[' -z 1048795 ']' 00:03:30.340 12:31:29 -- common/autotest_common.sh@940 -- # kill -0 1048795 00:03:30.340 12:31:29 -- common/autotest_common.sh@941 -- # uname 00:03:30.340 12:31:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:03:30.340 12:31:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1048795 00:03:30.340 12:31:29 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:03:30.340 12:31:29 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:03:30.340 12:31:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1048795' 00:03:30.340 killing process with pid 1048795 00:03:30.340 12:31:29 -- common/autotest_common.sh@955 -- # kill 1048795 00:03:30.340 12:31:29 -- common/autotest_common.sh@960 -- # wait 1048795 00:03:32.868 12:31:31 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:03:32.868 12:31:31 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:03:32.868 12:31:31 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:03:32.868 12:31:31 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:03:32.868 12:31:31 -- spdk/autotest.sh@162 -- # timing_enter lib 00:03:32.868 12:31:31 -- common/autotest_common.sh@710 -- # xtrace_disable 00:03:32.868 12:31:31 -- common/autotest_common.sh@10 -- # set +x 00:03:33.127 12:31:31 -- spdk/autotest.sh@164 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:33.127 12:31:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:33.127 12:31:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 
00:03:33.127 12:31:31 -- common/autotest_common.sh@10 -- # set +x 00:03:33.127 ************************************ 00:03:33.127 START TEST env 00:03:33.127 ************************************ 00:03:33.127 12:31:32 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:33.127 * Looking for test storage... 00:03:33.127 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:03:33.127 12:31:32 -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:33.127 12:31:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:33.127 12:31:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:33.127 12:31:32 -- common/autotest_common.sh@10 -- # set +x 00:03:33.127 ************************************ 00:03:33.127 START TEST env_memory 00:03:33.127 ************************************ 00:03:33.127 12:31:32 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:33.386 00:03:33.386 00:03:33.386 CUnit - A unit testing framework for C - Version 2.1-3 00:03:33.386 http://cunit.sourceforge.net/ 00:03:33.386 00:03:33.386 00:03:33.386 Suite: memory 00:03:33.386 Test: alloc and free memory map ...[2024-04-16 12:31:32.223472] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:33.386 passed 00:03:33.386 Test: mem map translation ...[2024-04-16 12:31:32.244648] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:33.386 [2024-04-16 12:31:32.244670] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:33.386 [2024-04-16 12:31:32.244729] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:33.386 [2024-04-16 12:31:32.244741] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:33.386 passed 00:03:33.386 Test: mem map registration ...[2024-04-16 12:31:32.286352] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:03:33.386 [2024-04-16 12:31:32.286371] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:03:33.386 passed 00:03:33.386 Test: mem map adjacent registrations ...passed 00:03:33.386 00:03:33.386 Run Summary: Type Total Ran Passed Failed Inactive 00:03:33.386 suites 1 1 n/a 0 0 00:03:33.386 tests 4 4 4 0 0 00:03:33.386 asserts 152 152 152 0 n/a 00:03:33.386 00:03:33.386 Elapsed time = 0.140 seconds 00:03:33.386 00:03:33.386 real 0m0.147s 00:03:33.386 user 0m0.139s 00:03:33.386 sys 0m0.007s 00:03:33.386 12:31:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:33.386 12:31:32 -- common/autotest_common.sh@10 -- # set +x 00:03:33.386 ************************************ 00:03:33.386 END TEST env_memory 00:03:33.386 ************************************ 
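The START TEST / END TEST banners and the real/user/sys timings that bracket each test above come from the run_test helper in common/autotest_common.sh. A rough stand-in, inferred only from the output visible in this log (the actual helper also manages xtrace state and performs the argument checks traced above), is:

#!/usr/bin/env bash
# Rough stand-in for the run_test wrapper, inferred from this log's output;
# not copied from autotest_common.sh.
run_test() {
  local test_name=$1; shift
  echo "************************************"
  echo "START TEST $test_name"
  echo "************************************"
  time "$@"        # the test binary or function under test
  local rc=$?
  echo "************************************"
  echo "END TEST $test_name"
  echo "************************************"
  return $rc
}

Invoked as, for example, run_test env_memory .../test/env/memory/memory_ut, which matches the env.sh trace above and the env_vtophys invocation that follows.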
00:03:33.386 12:31:32 -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:33.386 12:31:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:33.386 12:31:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:33.386 12:31:32 -- common/autotest_common.sh@10 -- # set +x 00:03:33.645 ************************************ 00:03:33.645 START TEST env_vtophys 00:03:33.645 ************************************ 00:03:33.645 12:31:32 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:33.645 EAL: lib.eal log level changed from notice to debug 00:03:33.645 EAL: Detected lcore 0 as core 0 on socket 0 00:03:33.645 EAL: Detected lcore 1 as core 1 on socket 0 00:03:33.645 EAL: Detected lcore 2 as core 2 on socket 0 00:03:33.645 EAL: Detected lcore 3 as core 3 on socket 0 00:03:33.645 EAL: Detected lcore 4 as core 4 on socket 0 00:03:33.645 EAL: Detected lcore 5 as core 5 on socket 0 00:03:33.645 EAL: Detected lcore 6 as core 8 on socket 0 00:03:33.645 EAL: Detected lcore 7 as core 9 on socket 0 00:03:33.645 EAL: Detected lcore 8 as core 10 on socket 0 00:03:33.645 EAL: Detected lcore 9 as core 11 on socket 0 00:03:33.645 EAL: Detected lcore 10 as core 12 on socket 0 00:03:33.645 EAL: Detected lcore 11 as core 13 on socket 0 00:03:33.645 EAL: Detected lcore 12 as core 0 on socket 1 00:03:33.645 EAL: Detected lcore 13 as core 1 on socket 1 00:03:33.645 EAL: Detected lcore 14 as core 2 on socket 1 00:03:33.645 EAL: Detected lcore 15 as core 3 on socket 1 00:03:33.646 EAL: Detected lcore 16 as core 4 on socket 1 00:03:33.646 EAL: Detected lcore 17 as core 5 on socket 1 00:03:33.646 EAL: Detected lcore 18 as core 8 on socket 1 00:03:33.646 EAL: Detected lcore 19 as core 9 on socket 1 00:03:33.646 EAL: Detected lcore 20 as core 10 on socket 1 00:03:33.646 EAL: Detected lcore 21 as core 11 on socket 1 00:03:33.646 EAL: Detected lcore 22 as core 12 on socket 1 00:03:33.646 EAL: Detected lcore 23 as core 13 on socket 1 00:03:33.646 EAL: Detected lcore 24 as core 0 on socket 0 00:03:33.646 EAL: Detected lcore 25 as core 1 on socket 0 00:03:33.646 EAL: Detected lcore 26 as core 2 on socket 0 00:03:33.646 EAL: Detected lcore 27 as core 3 on socket 0 00:03:33.646 EAL: Detected lcore 28 as core 4 on socket 0 00:03:33.646 EAL: Detected lcore 29 as core 5 on socket 0 00:03:33.646 EAL: Detected lcore 30 as core 8 on socket 0 00:03:33.646 EAL: Detected lcore 31 as core 9 on socket 0 00:03:33.646 EAL: Detected lcore 32 as core 10 on socket 0 00:03:33.646 EAL: Detected lcore 33 as core 11 on socket 0 00:03:33.646 EAL: Detected lcore 34 as core 12 on socket 0 00:03:33.646 EAL: Detected lcore 35 as core 13 on socket 0 00:03:33.646 EAL: Detected lcore 36 as core 0 on socket 1 00:03:33.646 EAL: Detected lcore 37 as core 1 on socket 1 00:03:33.646 EAL: Detected lcore 38 as core 2 on socket 1 00:03:33.646 EAL: Detected lcore 39 as core 3 on socket 1 00:03:33.646 EAL: Detected lcore 40 as core 4 on socket 1 00:03:33.646 EAL: Detected lcore 41 as core 5 on socket 1 00:03:33.646 EAL: Detected lcore 42 as core 8 on socket 1 00:03:33.646 EAL: Detected lcore 43 as core 9 on socket 1 00:03:33.646 EAL: Detected lcore 44 as core 10 on socket 1 00:03:33.646 EAL: Detected lcore 45 as core 11 on socket 1 00:03:33.646 EAL: Detected lcore 46 as core 12 on socket 1 00:03:33.646 EAL: Detected lcore 47 as core 13 on socket 1 00:03:33.646 EAL: Maximum logical cores by configuration: 128 
00:03:33.646 EAL: Detected CPU lcores: 48 00:03:33.646 EAL: Detected NUMA nodes: 2 00:03:33.646 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:03:33.646 EAL: Detected shared linkage of DPDK 00:03:33.646 EAL: No shared files mode enabled, IPC will be disabled 00:03:33.646 EAL: Bus pci wants IOVA as 'DC' 00:03:33.646 EAL: Buses did not request a specific IOVA mode. 00:03:33.646 EAL: IOMMU is available, selecting IOVA as VA mode. 00:03:33.646 EAL: Selected IOVA mode 'VA' 00:03:33.646 EAL: No free 2048 kB hugepages reported on node 1 00:03:33.646 EAL: Probing VFIO support... 00:03:33.646 EAL: IOMMU type 1 (Type 1) is supported 00:03:33.646 EAL: IOMMU type 7 (sPAPR) is not supported 00:03:33.646 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:03:33.646 EAL: VFIO support initialized 00:03:33.646 EAL: Ask a virtual area of 0x2e000 bytes 00:03:33.646 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:33.646 EAL: Setting up physically contiguous memory... 00:03:33.646 EAL: Setting maximum number of open files to 524288 00:03:33.646 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:33.646 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:03:33.646 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:33.646 EAL: Ask a virtual area of 0x61000 bytes 00:03:33.646 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:33.646 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:33.646 EAL: Ask a virtual area of 0x400000000 bytes 00:03:33.646 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:33.646 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:33.646 EAL: Ask a virtual area of 0x61000 bytes 00:03:33.646 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:33.646 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:33.646 EAL: Ask a virtual area of 0x400000000 bytes 00:03:33.646 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:33.646 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:33.646 EAL: Ask a virtual area of 0x61000 bytes 00:03:33.646 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:33.646 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:33.646 EAL: Ask a virtual area of 0x400000000 bytes 00:03:33.646 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:33.646 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:33.646 EAL: Ask a virtual area of 0x61000 bytes 00:03:33.646 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:33.646 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:33.646 EAL: Ask a virtual area of 0x400000000 bytes 00:03:33.646 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:33.646 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:33.646 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:03:33.646 EAL: Ask a virtual area of 0x61000 bytes 00:03:33.646 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:03:33.646 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:33.646 EAL: Ask a virtual area of 0x400000000 bytes 00:03:33.646 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:03:33.646 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:03:33.646 EAL: Ask a virtual area of 0x61000 bytes 00:03:33.646 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 
00:03:33.646 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:33.646 EAL: Ask a virtual area of 0x400000000 bytes 00:03:33.646 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:03:33.646 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:03:33.646 EAL: Ask a virtual area of 0x61000 bytes 00:03:33.646 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:03:33.646 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:33.646 EAL: Ask a virtual area of 0x400000000 bytes 00:03:33.646 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:03:33.646 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:03:33.646 EAL: Ask a virtual area of 0x61000 bytes 00:03:33.646 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:03:33.646 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:33.646 EAL: Ask a virtual area of 0x400000000 bytes 00:03:33.646 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:03:33.646 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:03:33.646 EAL: Hugepages will be freed exactly as allocated. 00:03:33.646 EAL: No shared files mode enabled, IPC is disabled 00:03:33.646 EAL: No shared files mode enabled, IPC is disabled 00:03:33.646 EAL: TSC frequency is ~2700000 KHz 00:03:33.646 EAL: Main lcore 0 is ready (tid=7fcf0c699a00;cpuset=[0]) 00:03:33.646 EAL: Trying to obtain current memory policy. 00:03:33.646 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:33.646 EAL: Restoring previous memory policy: 0 00:03:33.646 EAL: request: mp_malloc_sync 00:03:33.646 EAL: No shared files mode enabled, IPC is disabled 00:03:33.646 EAL: Heap on socket 0 was expanded by 2MB 00:03:33.646 EAL: No shared files mode enabled, IPC is disabled 00:03:33.646 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:33.646 EAL: Mem event callback 'spdk:(nil)' registered 00:03:33.646 00:03:33.646 00:03:33.646 CUnit - A unit testing framework for C - Version 2.1-3 00:03:33.646 http://cunit.sourceforge.net/ 00:03:33.646 00:03:33.646 00:03:33.646 Suite: components_suite 00:03:33.646 Test: vtophys_malloc_test ...passed 00:03:33.646 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:03:33.646 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:33.646 EAL: Restoring previous memory policy: 4 00:03:33.646 EAL: Calling mem event callback 'spdk:(nil)' 00:03:33.646 EAL: request: mp_malloc_sync 00:03:33.646 EAL: No shared files mode enabled, IPC is disabled 00:03:33.646 EAL: Heap on socket 0 was expanded by 4MB 00:03:33.646 EAL: Calling mem event callback 'spdk:(nil)' 00:03:33.646 EAL: request: mp_malloc_sync 00:03:33.646 EAL: No shared files mode enabled, IPC is disabled 00:03:33.646 EAL: Heap on socket 0 was shrunk by 4MB 00:03:33.646 EAL: Trying to obtain current memory policy. 00:03:33.646 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:33.646 EAL: Restoring previous memory policy: 4 00:03:33.646 EAL: Calling mem event callback 'spdk:(nil)' 00:03:33.646 EAL: request: mp_malloc_sync 00:03:33.646 EAL: No shared files mode enabled, IPC is disabled 00:03:33.646 EAL: Heap on socket 0 was expanded by 6MB 00:03:33.646 EAL: Calling mem event callback 'spdk:(nil)' 00:03:33.646 EAL: request: mp_malloc_sync 00:03:33.646 EAL: No shared files mode enabled, IPC is disabled 00:03:33.646 EAL: Heap on socket 0 was shrunk by 6MB 00:03:33.646 EAL: Trying to obtain current memory policy. 
00:03:33.646 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:33.646 EAL: Restoring previous memory policy: 4 00:03:33.646 EAL: Calling mem event callback 'spdk:(nil)' 00:03:33.646 EAL: request: mp_malloc_sync 00:03:33.646 EAL: No shared files mode enabled, IPC is disabled 00:03:33.646 EAL: Heap on socket 0 was expanded by 10MB 00:03:33.646 EAL: Calling mem event callback 'spdk:(nil)' 00:03:33.646 EAL: request: mp_malloc_sync 00:03:33.646 EAL: No shared files mode enabled, IPC is disabled 00:03:33.646 EAL: Heap on socket 0 was shrunk by 10MB 00:03:33.646 EAL: Trying to obtain current memory policy. 00:03:33.646 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:33.646 EAL: Restoring previous memory policy: 4 00:03:33.646 EAL: Calling mem event callback 'spdk:(nil)' 00:03:33.646 EAL: request: mp_malloc_sync 00:03:33.646 EAL: No shared files mode enabled, IPC is disabled 00:03:33.646 EAL: Heap on socket 0 was expanded by 18MB 00:03:33.646 EAL: Calling mem event callback 'spdk:(nil)' 00:03:33.646 EAL: request: mp_malloc_sync 00:03:33.646 EAL: No shared files mode enabled, IPC is disabled 00:03:33.646 EAL: Heap on socket 0 was shrunk by 18MB 00:03:33.646 EAL: Trying to obtain current memory policy. 00:03:33.646 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:33.646 EAL: Restoring previous memory policy: 4 00:03:33.646 EAL: Calling mem event callback 'spdk:(nil)' 00:03:33.646 EAL: request: mp_malloc_sync 00:03:33.646 EAL: No shared files mode enabled, IPC is disabled 00:03:33.646 EAL: Heap on socket 0 was expanded by 34MB 00:03:33.646 EAL: Calling mem event callback 'spdk:(nil)' 00:03:33.646 EAL: request: mp_malloc_sync 00:03:33.646 EAL: No shared files mode enabled, IPC is disabled 00:03:33.646 EAL: Heap on socket 0 was shrunk by 34MB 00:03:33.646 EAL: Trying to obtain current memory policy. 00:03:33.646 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:33.646 EAL: Restoring previous memory policy: 4 00:03:33.646 EAL: Calling mem event callback 'spdk:(nil)' 00:03:33.646 EAL: request: mp_malloc_sync 00:03:33.646 EAL: No shared files mode enabled, IPC is disabled 00:03:33.646 EAL: Heap on socket 0 was expanded by 66MB 00:03:33.646 EAL: Calling mem event callback 'spdk:(nil)' 00:03:33.646 EAL: request: mp_malloc_sync 00:03:33.646 EAL: No shared files mode enabled, IPC is disabled 00:03:33.646 EAL: Heap on socket 0 was shrunk by 66MB 00:03:33.646 EAL: Trying to obtain current memory policy. 00:03:33.647 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:33.647 EAL: Restoring previous memory policy: 4 00:03:33.647 EAL: Calling mem event callback 'spdk:(nil)' 00:03:33.647 EAL: request: mp_malloc_sync 00:03:33.647 EAL: No shared files mode enabled, IPC is disabled 00:03:33.647 EAL: Heap on socket 0 was expanded by 130MB 00:03:33.647 EAL: Calling mem event callback 'spdk:(nil)' 00:03:33.647 EAL: request: mp_malloc_sync 00:03:33.647 EAL: No shared files mode enabled, IPC is disabled 00:03:33.647 EAL: Heap on socket 0 was shrunk by 130MB 00:03:33.647 EAL: Trying to obtain current memory policy. 
00:03:33.647 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:33.905 EAL: Restoring previous memory policy: 4 00:03:33.905 EAL: Calling mem event callback 'spdk:(nil)' 00:03:33.905 EAL: request: mp_malloc_sync 00:03:33.905 EAL: No shared files mode enabled, IPC is disabled 00:03:33.905 EAL: Heap on socket 0 was expanded by 258MB 00:03:33.905 EAL: Calling mem event callback 'spdk:(nil)' 00:03:33.905 EAL: request: mp_malloc_sync 00:03:33.905 EAL: No shared files mode enabled, IPC is disabled 00:03:33.905 EAL: Heap on socket 0 was shrunk by 258MB 00:03:33.905 EAL: Trying to obtain current memory policy. 00:03:33.905 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:34.163 EAL: Restoring previous memory policy: 4 00:03:34.163 EAL: Calling mem event callback 'spdk:(nil)' 00:03:34.163 EAL: request: mp_malloc_sync 00:03:34.163 EAL: No shared files mode enabled, IPC is disabled 00:03:34.163 EAL: Heap on socket 0 was expanded by 514MB 00:03:34.163 EAL: Calling mem event callback 'spdk:(nil)' 00:03:34.421 EAL: request: mp_malloc_sync 00:03:34.421 EAL: No shared files mode enabled, IPC is disabled 00:03:34.421 EAL: Heap on socket 0 was shrunk by 514MB 00:03:34.421 EAL: Trying to obtain current memory policy. 00:03:34.421 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:34.679 EAL: Restoring previous memory policy: 4 00:03:34.679 EAL: Calling mem event callback 'spdk:(nil)' 00:03:34.679 EAL: request: mp_malloc_sync 00:03:34.679 EAL: No shared files mode enabled, IPC is disabled 00:03:34.679 EAL: Heap on socket 0 was expanded by 1026MB 00:03:34.938 EAL: Calling mem event callback 'spdk:(nil)' 00:03:34.938 EAL: request: mp_malloc_sync 00:03:34.938 EAL: No shared files mode enabled, IPC is disabled 00:03:34.938 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:34.938 passed 00:03:34.938 00:03:34.938 Run Summary: Type Total Ran Passed Failed Inactive 00:03:34.938 suites 1 1 n/a 0 0 00:03:34.938 tests 2 2 2 0 0 00:03:34.938 asserts 497 497 497 0 n/a 00:03:34.938 00:03:34.938 Elapsed time = 1.387 seconds 00:03:34.938 EAL: Calling mem event callback 'spdk:(nil)' 00:03:34.938 EAL: request: mp_malloc_sync 00:03:34.938 EAL: No shared files mode enabled, IPC is disabled 00:03:34.938 EAL: Heap on socket 0 was shrunk by 2MB 00:03:34.938 EAL: No shared files mode enabled, IPC is disabled 00:03:34.938 EAL: No shared files mode enabled, IPC is disabled 00:03:34.938 EAL: No shared files mode enabled, IPC is disabled 00:03:34.938 00:03:34.938 real 0m1.519s 00:03:34.938 user 0m0.857s 00:03:34.938 sys 0m0.626s 00:03:34.938 12:31:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:34.938 12:31:33 -- common/autotest_common.sh@10 -- # set +x 00:03:34.938 ************************************ 00:03:34.938 END TEST env_vtophys 00:03:34.938 ************************************ 00:03:35.197 12:31:34 -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:35.197 12:31:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:35.197 12:31:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:35.197 12:31:34 -- common/autotest_common.sh@10 -- # set +x 00:03:35.197 ************************************ 00:03:35.197 START TEST env_pci 00:03:35.197 ************************************ 00:03:35.198 12:31:34 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:35.198 00:03:35.198 00:03:35.198 CUnit - A unit testing framework for C - Version 2.1-3 00:03:35.198 
http://cunit.sourceforge.net/ 00:03:35.198 00:03:35.198 00:03:35.198 Suite: pci 00:03:35.198 Test: pci_hook ...[2024-04-16 12:31:34.120779] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1049861 has claimed it 00:03:35.198 EAL: Cannot find device (10000:00:01.0) 00:03:35.198 EAL: Failed to attach device on primary process 00:03:35.198 passed 00:03:35.198 00:03:35.198 Run Summary: Type Total Ran Passed Failed Inactive 00:03:35.198 suites 1 1 n/a 0 0 00:03:35.198 tests 1 1 1 0 0 00:03:35.198 asserts 25 25 25 0 n/a 00:03:35.198 00:03:35.198 Elapsed time = 0.026 seconds 00:03:35.198 00:03:35.198 real 0m0.039s 00:03:35.198 user 0m0.010s 00:03:35.198 sys 0m0.029s 00:03:35.198 12:31:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:35.198 12:31:34 -- common/autotest_common.sh@10 -- # set +x 00:03:35.198 ************************************ 00:03:35.198 END TEST env_pci 00:03:35.198 ************************************ 00:03:35.198 12:31:34 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:35.198 12:31:34 -- env/env.sh@15 -- # uname 00:03:35.198 12:31:34 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:03:35.198 12:31:34 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:03:35.198 12:31:34 -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:35.198 12:31:34 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:03:35.198 12:31:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:35.198 12:31:34 -- common/autotest_common.sh@10 -- # set +x 00:03:35.457 ************************************ 00:03:35.457 START TEST env_dpdk_post_init 00:03:35.457 ************************************ 00:03:35.457 12:31:34 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:35.457 EAL: Detected CPU lcores: 48 00:03:35.457 EAL: Detected NUMA nodes: 2 00:03:35.457 EAL: Detected shared linkage of DPDK 00:03:35.457 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:35.457 EAL: Selected IOVA mode 'VA' 00:03:35.457 EAL: No free 2048 kB hugepages reported on node 1 00:03:35.457 EAL: VFIO support initialized 00:03:35.457 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:35.457 EAL: Using IOMMU type 1 (Type 1) 00:03:35.457 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:03:35.457 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:03:35.457 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:03:35.457 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:03:35.457 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:03:35.457 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:03:35.457 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:03:35.457 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:03:35.457 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:03:35.457 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:03:35.716 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 
1) 00:03:35.716 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:03:35.716 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:03:35.716 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:03:35.716 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:03:35.716 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:03:36.284 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:81:00.0 (socket 1) 00:03:40.508 EAL: Releasing PCI mapped resource for 0000:81:00.0 00:03:40.508 EAL: Calling pci_unmap_resource for 0000:81:00.0 at 0x202001040000 00:03:40.508 Starting DPDK initialization... 00:03:40.508 Starting SPDK post initialization... 00:03:40.508 SPDK NVMe probe 00:03:40.508 Attaching to 0000:81:00.0 00:03:40.508 Attached to 0000:81:00.0 00:03:40.508 Cleaning up... 00:03:40.508 00:03:40.508 real 0m5.211s 00:03:40.508 user 0m3.953s 00:03:40.508 sys 0m0.312s 00:03:40.508 12:31:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:40.508 12:31:39 -- common/autotest_common.sh@10 -- # set +x 00:03:40.508 ************************************ 00:03:40.508 END TEST env_dpdk_post_init 00:03:40.509 ************************************ 00:03:40.509 12:31:39 -- env/env.sh@26 -- # uname 00:03:40.509 12:31:39 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:03:40.509 12:31:39 -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:40.509 12:31:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:40.509 12:31:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:40.509 12:31:39 -- common/autotest_common.sh@10 -- # set +x 00:03:40.767 ************************************ 00:03:40.767 START TEST env_mem_callbacks 00:03:40.767 ************************************ 00:03:40.767 12:31:39 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:40.767 EAL: Detected CPU lcores: 48 00:03:40.767 EAL: Detected NUMA nodes: 2 00:03:40.767 EAL: Detected shared linkage of DPDK 00:03:40.767 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:40.767 EAL: Selected IOVA mode 'VA' 00:03:40.767 EAL: No free 2048 kB hugepages reported on node 1 00:03:40.767 EAL: VFIO support initialized 00:03:40.767 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:40.767 00:03:40.767 00:03:40.767 CUnit - A unit testing framework for C - Version 2.1-3 00:03:40.767 http://cunit.sourceforge.net/ 00:03:40.767 00:03:40.767 00:03:40.767 Suite: memory 00:03:40.767 Test: test ... 
00:03:40.767 register 0x200000200000 2097152 00:03:40.767 malloc 3145728 00:03:40.767 register 0x200000400000 4194304 00:03:40.767 buf 0x200000500000 len 3145728 PASSED 00:03:40.767 malloc 64 00:03:40.767 buf 0x2000004fff40 len 64 PASSED 00:03:40.767 malloc 4194304 00:03:40.767 register 0x200000800000 6291456 00:03:40.767 buf 0x200000a00000 len 4194304 PASSED 00:03:40.767 free 0x200000500000 3145728 00:03:40.767 free 0x2000004fff40 64 00:03:40.767 unregister 0x200000400000 4194304 PASSED 00:03:40.767 free 0x200000a00000 4194304 00:03:40.767 unregister 0x200000800000 6291456 PASSED 00:03:40.767 malloc 8388608 00:03:40.767 register 0x200000400000 10485760 00:03:40.767 buf 0x200000600000 len 8388608 PASSED 00:03:40.767 free 0x200000600000 8388608 00:03:40.767 unregister 0x200000400000 10485760 PASSED 00:03:40.767 passed 00:03:40.767 00:03:40.767 Run Summary: Type Total Ran Passed Failed Inactive 00:03:40.767 suites 1 1 n/a 0 0 00:03:40.767 tests 1 1 1 0 0 00:03:40.767 asserts 15 15 15 0 n/a 00:03:40.767 00:03:40.767 Elapsed time = 0.005 seconds 00:03:40.767 00:03:40.767 real 0m0.052s 00:03:40.767 user 0m0.015s 00:03:40.767 sys 0m0.037s 00:03:40.767 12:31:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:40.767 12:31:39 -- common/autotest_common.sh@10 -- # set +x 00:03:40.767 ************************************ 00:03:40.767 END TEST env_mem_callbacks 00:03:40.767 ************************************ 00:03:40.767 00:03:40.767 real 0m7.642s 00:03:40.767 user 0m5.216s 00:03:40.767 sys 0m1.404s 00:03:40.767 12:31:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:40.767 12:31:39 -- common/autotest_common.sh@10 -- # set +x 00:03:40.767 ************************************ 00:03:40.767 END TEST env 00:03:40.767 ************************************ 00:03:40.767 12:31:39 -- spdk/autotest.sh@165 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:40.767 12:31:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:40.767 12:31:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:40.767 12:31:39 -- common/autotest_common.sh@10 -- # set +x 00:03:40.767 ************************************ 00:03:40.767 START TEST rpc 00:03:40.767 ************************************ 00:03:40.767 12:31:39 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:41.025 * Looking for test storage... 00:03:41.025 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:41.025 12:31:39 -- rpc/rpc.sh@65 -- # spdk_pid=1050766 00:03:41.025 12:31:39 -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:03:41.025 12:31:39 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:41.025 12:31:39 -- rpc/rpc.sh@67 -- # waitforlisten 1050766 00:03:41.025 12:31:39 -- common/autotest_common.sh@817 -- # '[' -z 1050766 ']' 00:03:41.025 12:31:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:41.025 12:31:39 -- common/autotest_common.sh@822 -- # local max_retries=100 00:03:41.025 12:31:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:41.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
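The register/unregister PASSED lines in the env_mem_callbacks run above are SPDK's env layer being told about externally allocated buffers via spdk_mem_register()/spdk_mem_unregister(). A hedged sketch of that API; the app name, buffer size, and the 2MB-alignment assumption are illustrative rather than the test's own code:

/* Hedged sketch: add a heap buffer to SPDK's translation maps. */
#include <stdlib.h>
#include "spdk/env.h"

#define CHUNK (2 * 1024 * 1024) /* registrations assumed 2MB aligned */

int
main(void)
{
    struct spdk_env_opts opts;
    void *buf = NULL;

    spdk_env_opts_init(&opts);
    opts.name = "mem_cb_demo"; /* illustrative name */
    if (spdk_env_init(&opts) < 0)
        return 1;

    if (posix_memalign(&buf, CHUNK, 2 * CHUNK) != 0)
        return 1;

    /* Each call produces a 'register ...' / 'unregister ...'
     * notification in any memory map listening for them, as above. */
    spdk_mem_register(buf, 2 * CHUNK);
    spdk_mem_unregister(buf, 2 * CHUNK);

    free(buf);
    return 0;
}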
00:03:41.025 12:31:39 -- common/autotest_common.sh@826 -- # xtrace_disable 00:03:41.025 12:31:39 -- common/autotest_common.sh@10 -- # set +x 00:03:41.025 [2024-04-16 12:31:39.904708] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 00:03:41.025 [2024-04-16 12:31:39.904803] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1050766 ] 00:03:41.025 EAL: No free 2048 kB hugepages reported on node 1 00:03:41.025 [2024-04-16 12:31:39.972796] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:41.025 [2024-04-16 12:31:40.088595] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:03:41.025 [2024-04-16 12:31:40.088666] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1050766' to capture a snapshot of events at runtime. 00:03:41.025 [2024-04-16 12:31:40.088680] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:03:41.025 [2024-04-16 12:31:40.088692] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:03:41.025 [2024-04-16 12:31:40.088702] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1050766 for offline analysis/debug. 00:03:41.025 [2024-04-16 12:31:40.088749] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:41.593 12:31:40 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:03:41.593 12:31:40 -- common/autotest_common.sh@850 -- # return 0 00:03:41.593 12:31:40 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:41.593 12:31:40 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:41.593 12:31:40 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:03:41.593 12:31:40 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:03:41.593 12:31:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:41.593 12:31:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:41.593 12:31:40 -- common/autotest_common.sh@10 -- # set +x 00:03:41.593 ************************************ 00:03:41.593 START TEST rpc_integrity 00:03:41.593 ************************************ 00:03:41.593 12:31:40 -- common/autotest_common.sh@1111 -- # rpc_integrity 00:03:41.593 12:31:40 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:41.593 12:31:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:41.593 12:31:40 -- common/autotest_common.sh@10 -- # set +x 00:03:41.593 12:31:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:41.593 12:31:40 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:41.593 12:31:40 -- rpc/rpc.sh@13 -- # jq length 00:03:41.593 12:31:40 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:41.593 12:31:40 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:41.593 12:31:40 -- common/autotest_common.sh@549 -- # xtrace_disable 
00:03:41.593 12:31:40 -- common/autotest_common.sh@10 -- # set +x 00:03:41.593 12:31:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:41.593 12:31:40 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:03:41.593 12:31:40 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:41.593 12:31:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:41.593 12:31:40 -- common/autotest_common.sh@10 -- # set +x 00:03:41.593 12:31:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:41.593 12:31:40 -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:41.593 { 00:03:41.593 "name": "Malloc0", 00:03:41.593 "aliases": [ 00:03:41.593 "58175b06-0def-4340-b87c-69fbbcebb4b8" 00:03:41.593 ], 00:03:41.593 "product_name": "Malloc disk", 00:03:41.593 "block_size": 512, 00:03:41.593 "num_blocks": 16384, 00:03:41.593 "uuid": "58175b06-0def-4340-b87c-69fbbcebb4b8", 00:03:41.593 "assigned_rate_limits": { 00:03:41.593 "rw_ios_per_sec": 0, 00:03:41.593 "rw_mbytes_per_sec": 0, 00:03:41.593 "r_mbytes_per_sec": 0, 00:03:41.593 "w_mbytes_per_sec": 0 00:03:41.593 }, 00:03:41.593 "claimed": false, 00:03:41.593 "zoned": false, 00:03:41.593 "supported_io_types": { 00:03:41.593 "read": true, 00:03:41.593 "write": true, 00:03:41.593 "unmap": true, 00:03:41.593 "write_zeroes": true, 00:03:41.593 "flush": true, 00:03:41.593 "reset": true, 00:03:41.593 "compare": false, 00:03:41.593 "compare_and_write": false, 00:03:41.593 "abort": true, 00:03:41.593 "nvme_admin": false, 00:03:41.593 "nvme_io": false 00:03:41.593 }, 00:03:41.593 "memory_domains": [ 00:03:41.593 { 00:03:41.593 "dma_device_id": "system", 00:03:41.593 "dma_device_type": 1 00:03:41.593 }, 00:03:41.593 { 00:03:41.593 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:41.593 "dma_device_type": 2 00:03:41.593 } 00:03:41.593 ], 00:03:41.593 "driver_specific": {} 00:03:41.593 } 00:03:41.593 ]' 00:03:41.593 12:31:40 -- rpc/rpc.sh@17 -- # jq length 00:03:41.593 12:31:40 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:41.593 12:31:40 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:03:41.593 12:31:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:41.593 12:31:40 -- common/autotest_common.sh@10 -- # set +x 00:03:41.593 [2024-04-16 12:31:40.566476] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:03:41.593 [2024-04-16 12:31:40.566528] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:41.593 [2024-04-16 12:31:40.566552] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xc8bd20 00:03:41.593 [2024-04-16 12:31:40.566577] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:41.593 [2024-04-16 12:31:40.568132] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:41.593 [2024-04-16 12:31:40.568162] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:41.593 Passthru0 00:03:41.593 12:31:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:41.593 12:31:40 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:41.593 12:31:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:41.593 12:31:40 -- common/autotest_common.sh@10 -- # set +x 00:03:41.593 12:31:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:41.593 12:31:40 -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:41.593 { 00:03:41.593 "name": "Malloc0", 00:03:41.593 "aliases": [ 00:03:41.593 "58175b06-0def-4340-b87c-69fbbcebb4b8" 00:03:41.593 ], 00:03:41.593 "product_name": "Malloc disk", 00:03:41.593 "block_size": 512, 
00:03:41.593 "num_blocks": 16384, 00:03:41.593 "uuid": "58175b06-0def-4340-b87c-69fbbcebb4b8", 00:03:41.593 "assigned_rate_limits": { 00:03:41.593 "rw_ios_per_sec": 0, 00:03:41.593 "rw_mbytes_per_sec": 0, 00:03:41.593 "r_mbytes_per_sec": 0, 00:03:41.593 "w_mbytes_per_sec": 0 00:03:41.593 }, 00:03:41.593 "claimed": true, 00:03:41.593 "claim_type": "exclusive_write", 00:03:41.593 "zoned": false, 00:03:41.593 "supported_io_types": { 00:03:41.593 "read": true, 00:03:41.593 "write": true, 00:03:41.593 "unmap": true, 00:03:41.593 "write_zeroes": true, 00:03:41.593 "flush": true, 00:03:41.593 "reset": true, 00:03:41.593 "compare": false, 00:03:41.593 "compare_and_write": false, 00:03:41.593 "abort": true, 00:03:41.593 "nvme_admin": false, 00:03:41.593 "nvme_io": false 00:03:41.593 }, 00:03:41.593 "memory_domains": [ 00:03:41.593 { 00:03:41.593 "dma_device_id": "system", 00:03:41.593 "dma_device_type": 1 00:03:41.593 }, 00:03:41.593 { 00:03:41.593 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:41.593 "dma_device_type": 2 00:03:41.593 } 00:03:41.593 ], 00:03:41.593 "driver_specific": {} 00:03:41.593 }, 00:03:41.593 { 00:03:41.593 "name": "Passthru0", 00:03:41.593 "aliases": [ 00:03:41.593 "b2ad0bcc-d1ef-5c18-bf14-582a8e592e93" 00:03:41.593 ], 00:03:41.593 "product_name": "passthru", 00:03:41.593 "block_size": 512, 00:03:41.593 "num_blocks": 16384, 00:03:41.593 "uuid": "b2ad0bcc-d1ef-5c18-bf14-582a8e592e93", 00:03:41.593 "assigned_rate_limits": { 00:03:41.593 "rw_ios_per_sec": 0, 00:03:41.593 "rw_mbytes_per_sec": 0, 00:03:41.593 "r_mbytes_per_sec": 0, 00:03:41.593 "w_mbytes_per_sec": 0 00:03:41.593 }, 00:03:41.593 "claimed": false, 00:03:41.593 "zoned": false, 00:03:41.594 "supported_io_types": { 00:03:41.594 "read": true, 00:03:41.594 "write": true, 00:03:41.594 "unmap": true, 00:03:41.594 "write_zeroes": true, 00:03:41.594 "flush": true, 00:03:41.594 "reset": true, 00:03:41.594 "compare": false, 00:03:41.594 "compare_and_write": false, 00:03:41.594 "abort": true, 00:03:41.594 "nvme_admin": false, 00:03:41.594 "nvme_io": false 00:03:41.594 }, 00:03:41.594 "memory_domains": [ 00:03:41.594 { 00:03:41.594 "dma_device_id": "system", 00:03:41.594 "dma_device_type": 1 00:03:41.594 }, 00:03:41.594 { 00:03:41.594 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:41.594 "dma_device_type": 2 00:03:41.594 } 00:03:41.594 ], 00:03:41.594 "driver_specific": { 00:03:41.594 "passthru": { 00:03:41.594 "name": "Passthru0", 00:03:41.594 "base_bdev_name": "Malloc0" 00:03:41.594 } 00:03:41.594 } 00:03:41.594 } 00:03:41.594 ]' 00:03:41.594 12:31:40 -- rpc/rpc.sh@21 -- # jq length 00:03:41.594 12:31:40 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:41.594 12:31:40 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:41.594 12:31:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:41.594 12:31:40 -- common/autotest_common.sh@10 -- # set +x 00:03:41.594 12:31:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:41.594 12:31:40 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:03:41.594 12:31:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:41.594 12:31:40 -- common/autotest_common.sh@10 -- # set +x 00:03:41.594 12:31:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:41.594 12:31:40 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:41.594 12:31:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:41.594 12:31:40 -- common/autotest_common.sh@10 -- # set +x 00:03:41.594 12:31:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:41.594 12:31:40 -- 
rpc/rpc.sh@25 -- # bdevs='[]' 00:03:41.594 12:31:40 -- rpc/rpc.sh@26 -- # jq length 00:03:41.852 12:31:40 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:41.852 00:03:41.852 real 0m0.231s 00:03:41.852 user 0m0.153s 00:03:41.852 sys 0m0.019s 00:03:41.852 12:31:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:41.852 12:31:40 -- common/autotest_common.sh@10 -- # set +x 00:03:41.852 ************************************ 00:03:41.852 END TEST rpc_integrity 00:03:41.852 ************************************ 00:03:41.852 12:31:40 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:03:41.852 12:31:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:41.852 12:31:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:41.852 12:31:40 -- common/autotest_common.sh@10 -- # set +x 00:03:41.852 ************************************ 00:03:41.852 START TEST rpc_plugins 00:03:41.852 ************************************ 00:03:41.852 12:31:40 -- common/autotest_common.sh@1111 -- # rpc_plugins 00:03:41.852 12:31:40 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:03:41.852 12:31:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:41.852 12:31:40 -- common/autotest_common.sh@10 -- # set +x 00:03:41.852 12:31:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:41.852 12:31:40 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:03:41.852 12:31:40 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:03:41.852 12:31:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:41.852 12:31:40 -- common/autotest_common.sh@10 -- # set +x 00:03:41.852 12:31:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:41.852 12:31:40 -- rpc/rpc.sh@31 -- # bdevs='[ 00:03:41.852 { 00:03:41.852 "name": "Malloc1", 00:03:41.852 "aliases": [ 00:03:41.852 "24b79e90-35f7-4752-a44b-f2cc6d48206b" 00:03:41.852 ], 00:03:41.852 "product_name": "Malloc disk", 00:03:41.852 "block_size": 4096, 00:03:41.852 "num_blocks": 256, 00:03:41.852 "uuid": "24b79e90-35f7-4752-a44b-f2cc6d48206b", 00:03:41.852 "assigned_rate_limits": { 00:03:41.852 "rw_ios_per_sec": 0, 00:03:41.852 "rw_mbytes_per_sec": 0, 00:03:41.852 "r_mbytes_per_sec": 0, 00:03:41.852 "w_mbytes_per_sec": 0 00:03:41.852 }, 00:03:41.852 "claimed": false, 00:03:41.852 "zoned": false, 00:03:41.852 "supported_io_types": { 00:03:41.852 "read": true, 00:03:41.852 "write": true, 00:03:41.852 "unmap": true, 00:03:41.852 "write_zeroes": true, 00:03:41.852 "flush": true, 00:03:41.852 "reset": true, 00:03:41.852 "compare": false, 00:03:41.852 "compare_and_write": false, 00:03:41.852 "abort": true, 00:03:41.852 "nvme_admin": false, 00:03:41.852 "nvme_io": false 00:03:41.852 }, 00:03:41.852 "memory_domains": [ 00:03:41.852 { 00:03:41.852 "dma_device_id": "system", 00:03:41.852 "dma_device_type": 1 00:03:41.852 }, 00:03:41.852 { 00:03:41.852 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:41.852 "dma_device_type": 2 00:03:41.852 } 00:03:41.852 ], 00:03:41.852 "driver_specific": {} 00:03:41.852 } 00:03:41.852 ]' 00:03:41.852 12:31:40 -- rpc/rpc.sh@32 -- # jq length 00:03:41.852 12:31:40 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:03:41.852 12:31:40 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:03:41.852 12:31:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:41.852 12:31:40 -- common/autotest_common.sh@10 -- # set +x 00:03:41.852 12:31:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:41.852 12:31:40 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:03:41.852 12:31:40 -- common/autotest_common.sh@549 
-- # xtrace_disable 00:03:41.852 12:31:40 -- common/autotest_common.sh@10 -- # set +x 00:03:41.852 12:31:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:41.852 12:31:40 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:03:41.852 12:31:40 -- rpc/rpc.sh@36 -- # jq length 00:03:41.852 12:31:40 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:03:41.852 00:03:41.852 real 0m0.115s 00:03:41.852 user 0m0.074s 00:03:41.852 sys 0m0.011s 00:03:41.852 12:31:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:41.852 12:31:40 -- common/autotest_common.sh@10 -- # set +x 00:03:41.852 ************************************ 00:03:41.852 END TEST rpc_plugins 00:03:41.852 ************************************ 00:03:42.110 12:31:40 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:03:42.110 12:31:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:42.110 12:31:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:42.110 12:31:40 -- common/autotest_common.sh@10 -- # set +x 00:03:42.110 ************************************ 00:03:42.110 START TEST rpc_trace_cmd_test 00:03:42.110 ************************************ 00:03:42.110 12:31:41 -- common/autotest_common.sh@1111 -- # rpc_trace_cmd_test 00:03:42.110 12:31:41 -- rpc/rpc.sh@40 -- # local info 00:03:42.110 12:31:41 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:03:42.110 12:31:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:42.110 12:31:41 -- common/autotest_common.sh@10 -- # set +x 00:03:42.110 12:31:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:42.110 12:31:41 -- rpc/rpc.sh@42 -- # info='{ 00:03:42.110 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1050766", 00:03:42.110 "tpoint_group_mask": "0x8", 00:03:42.110 "iscsi_conn": { 00:03:42.110 "mask": "0x2", 00:03:42.110 "tpoint_mask": "0x0" 00:03:42.110 }, 00:03:42.110 "scsi": { 00:03:42.110 "mask": "0x4", 00:03:42.110 "tpoint_mask": "0x0" 00:03:42.110 }, 00:03:42.110 "bdev": { 00:03:42.110 "mask": "0x8", 00:03:42.110 "tpoint_mask": "0xffffffffffffffff" 00:03:42.110 }, 00:03:42.110 "nvmf_rdma": { 00:03:42.110 "mask": "0x10", 00:03:42.110 "tpoint_mask": "0x0" 00:03:42.110 }, 00:03:42.110 "nvmf_tcp": { 00:03:42.110 "mask": "0x20", 00:03:42.110 "tpoint_mask": "0x0" 00:03:42.110 }, 00:03:42.110 "ftl": { 00:03:42.110 "mask": "0x40", 00:03:42.110 "tpoint_mask": "0x0" 00:03:42.110 }, 00:03:42.110 "blobfs": { 00:03:42.110 "mask": "0x80", 00:03:42.110 "tpoint_mask": "0x0" 00:03:42.110 }, 00:03:42.110 "dsa": { 00:03:42.110 "mask": "0x200", 00:03:42.110 "tpoint_mask": "0x0" 00:03:42.110 }, 00:03:42.110 "thread": { 00:03:42.110 "mask": "0x400", 00:03:42.110 "tpoint_mask": "0x0" 00:03:42.110 }, 00:03:42.110 "nvme_pcie": { 00:03:42.110 "mask": "0x800", 00:03:42.110 "tpoint_mask": "0x0" 00:03:42.110 }, 00:03:42.110 "iaa": { 00:03:42.110 "mask": "0x1000", 00:03:42.110 "tpoint_mask": "0x0" 00:03:42.110 }, 00:03:42.110 "nvme_tcp": { 00:03:42.110 "mask": "0x2000", 00:03:42.110 "tpoint_mask": "0x0" 00:03:42.111 }, 00:03:42.111 "bdev_nvme": { 00:03:42.111 "mask": "0x4000", 00:03:42.111 "tpoint_mask": "0x0" 00:03:42.111 }, 00:03:42.111 "sock": { 00:03:42.111 "mask": "0x8000", 00:03:42.111 "tpoint_mask": "0x0" 00:03:42.111 } 00:03:42.111 }' 00:03:42.111 12:31:41 -- rpc/rpc.sh@43 -- # jq length 00:03:42.111 12:31:41 -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:03:42.111 12:31:41 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:03:42.111 12:31:41 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:03:42.111 12:31:41 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 
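Every rpc_cmd in the rpc_integrity and rpc_plugins runs above, and in the trace and daemon tests around this point, is a thin bash wrapper that ships a JSON-RPC request to spdk_tgt over /var/tmp/spdk.sock; on the target side each method is a C handler registered with SPDK_RPC_REGISTER. A minimal sketch of such a handler; "demo_ping" is a made-up method, not part of the RPC set exercised by rpc.sh:

/* Hedged sketch: registering a custom JSON-RPC method in the target. */
#include "spdk/rpc.h"
#include "spdk/jsonrpc.h"
#include "spdk/json.h"

static void
rpc_demo_ping(struct spdk_jsonrpc_request *request,
              const struct spdk_json_val *params)
{
    struct spdk_json_write_ctx *w;

    if (params != NULL) {
        spdk_jsonrpc_send_error_response(request,
                SPDK_JSONRPC_ERROR_INVALID_PARAMS,
                "demo_ping takes no parameters");
        return;
    }

    w = spdk_jsonrpc_begin_result(request);
    spdk_json_write_string(w, "pong");
    spdk_jsonrpc_end_result(request, w);
}
/* Callable once the RPC server is up, e.g. via scripts/rpc.py demo_ping. */
SPDK_RPC_REGISTER("demo_ping", rpc_demo_ping, SPDK_RPC_RUNTIME)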
00:03:42.111 12:31:41 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:03:42.111 12:31:41 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:03:42.369 12:31:41 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:03:42.369 12:31:41 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:03:42.369 12:31:41 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:03:42.369 00:03:42.369 real 0m0.202s 00:03:42.369 user 0m0.172s 00:03:42.369 sys 0m0.019s 00:03:42.369 12:31:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:42.369 12:31:41 -- common/autotest_common.sh@10 -- # set +x 00:03:42.369 ************************************ 00:03:42.369 END TEST rpc_trace_cmd_test 00:03:42.369 ************************************ 00:03:42.369 12:31:41 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:03:42.369 12:31:41 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:03:42.369 12:31:41 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:03:42.369 12:31:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:42.369 12:31:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:42.369 12:31:41 -- common/autotest_common.sh@10 -- # set +x 00:03:42.369 ************************************ 00:03:42.369 START TEST rpc_daemon_integrity 00:03:42.369 ************************************ 00:03:42.369 12:31:41 -- common/autotest_common.sh@1111 -- # rpc_integrity 00:03:42.369 12:31:41 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:42.369 12:31:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:42.369 12:31:41 -- common/autotest_common.sh@10 -- # set +x 00:03:42.369 12:31:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:42.369 12:31:41 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:42.369 12:31:41 -- rpc/rpc.sh@13 -- # jq length 00:03:42.369 12:31:41 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:42.369 12:31:41 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:42.369 12:31:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:42.369 12:31:41 -- common/autotest_common.sh@10 -- # set +x 00:03:42.369 12:31:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:42.369 12:31:41 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:03:42.369 12:31:41 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:42.369 12:31:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:42.369 12:31:41 -- common/autotest_common.sh@10 -- # set +x 00:03:42.369 12:31:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:42.369 12:31:41 -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:42.369 { 00:03:42.369 "name": "Malloc2", 00:03:42.369 "aliases": [ 00:03:42.369 "ac5fb50b-9ab0-4c98-b23d-c1f1398a1b5d" 00:03:42.369 ], 00:03:42.369 "product_name": "Malloc disk", 00:03:42.369 "block_size": 512, 00:03:42.369 "num_blocks": 16384, 00:03:42.369 "uuid": "ac5fb50b-9ab0-4c98-b23d-c1f1398a1b5d", 00:03:42.369 "assigned_rate_limits": { 00:03:42.369 "rw_ios_per_sec": 0, 00:03:42.369 "rw_mbytes_per_sec": 0, 00:03:42.369 "r_mbytes_per_sec": 0, 00:03:42.369 "w_mbytes_per_sec": 0 00:03:42.369 }, 00:03:42.369 "claimed": false, 00:03:42.369 "zoned": false, 00:03:42.369 "supported_io_types": { 00:03:42.369 "read": true, 00:03:42.369 "write": true, 00:03:42.369 "unmap": true, 00:03:42.369 "write_zeroes": true, 00:03:42.369 "flush": true, 00:03:42.369 "reset": true, 00:03:42.369 "compare": false, 00:03:42.369 "compare_and_write": false, 00:03:42.369 "abort": true, 00:03:42.369 "nvme_admin": false, 00:03:42.369 "nvme_io": false 00:03:42.369 }, 00:03:42.369 "memory_domains": [ 00:03:42.369 { 00:03:42.369 "dma_device_id": "system", 00:03:42.369 
"dma_device_type": 1 00:03:42.369 }, 00:03:42.369 { 00:03:42.369 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:42.369 "dma_device_type": 2 00:03:42.369 } 00:03:42.369 ], 00:03:42.369 "driver_specific": {} 00:03:42.369 } 00:03:42.369 ]' 00:03:42.369 12:31:41 -- rpc/rpc.sh@17 -- # jq length 00:03:42.628 12:31:41 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:42.628 12:31:41 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:03:42.628 12:31:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:42.628 12:31:41 -- common/autotest_common.sh@10 -- # set +x 00:03:42.628 [2024-04-16 12:31:41.469950] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:03:42.628 [2024-04-16 12:31:41.470002] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:42.628 [2024-04-16 12:31:41.470032] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xc84f60 00:03:42.628 [2024-04-16 12:31:41.470048] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:42.628 [2024-04-16 12:31:41.471424] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:42.628 [2024-04-16 12:31:41.471454] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:42.628 Passthru0 00:03:42.628 12:31:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:42.628 12:31:41 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:42.628 12:31:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:42.628 12:31:41 -- common/autotest_common.sh@10 -- # set +x 00:03:42.628 12:31:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:42.628 12:31:41 -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:42.628 { 00:03:42.628 "name": "Malloc2", 00:03:42.628 "aliases": [ 00:03:42.628 "ac5fb50b-9ab0-4c98-b23d-c1f1398a1b5d" 00:03:42.628 ], 00:03:42.628 "product_name": "Malloc disk", 00:03:42.628 "block_size": 512, 00:03:42.628 "num_blocks": 16384, 00:03:42.628 "uuid": "ac5fb50b-9ab0-4c98-b23d-c1f1398a1b5d", 00:03:42.628 "assigned_rate_limits": { 00:03:42.628 "rw_ios_per_sec": 0, 00:03:42.628 "rw_mbytes_per_sec": 0, 00:03:42.628 "r_mbytes_per_sec": 0, 00:03:42.628 "w_mbytes_per_sec": 0 00:03:42.628 }, 00:03:42.628 "claimed": true, 00:03:42.628 "claim_type": "exclusive_write", 00:03:42.628 "zoned": false, 00:03:42.628 "supported_io_types": { 00:03:42.628 "read": true, 00:03:42.628 "write": true, 00:03:42.628 "unmap": true, 00:03:42.628 "write_zeroes": true, 00:03:42.628 "flush": true, 00:03:42.628 "reset": true, 00:03:42.628 "compare": false, 00:03:42.628 "compare_and_write": false, 00:03:42.628 "abort": true, 00:03:42.628 "nvme_admin": false, 00:03:42.628 "nvme_io": false 00:03:42.628 }, 00:03:42.628 "memory_domains": [ 00:03:42.628 { 00:03:42.628 "dma_device_id": "system", 00:03:42.628 "dma_device_type": 1 00:03:42.628 }, 00:03:42.628 { 00:03:42.628 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:42.628 "dma_device_type": 2 00:03:42.628 } 00:03:42.628 ], 00:03:42.628 "driver_specific": {} 00:03:42.628 }, 00:03:42.628 { 00:03:42.628 "name": "Passthru0", 00:03:42.628 "aliases": [ 00:03:42.628 "ecd1d4e6-4bc7-545d-9527-6fc8f1085d8f" 00:03:42.628 ], 00:03:42.628 "product_name": "passthru", 00:03:42.628 "block_size": 512, 00:03:42.628 "num_blocks": 16384, 00:03:42.628 "uuid": "ecd1d4e6-4bc7-545d-9527-6fc8f1085d8f", 00:03:42.628 "assigned_rate_limits": { 00:03:42.628 "rw_ios_per_sec": 0, 00:03:42.628 "rw_mbytes_per_sec": 0, 00:03:42.628 "r_mbytes_per_sec": 0, 00:03:42.628 
"w_mbytes_per_sec": 0 00:03:42.628 }, 00:03:42.628 "claimed": false, 00:03:42.628 "zoned": false, 00:03:42.628 "supported_io_types": { 00:03:42.628 "read": true, 00:03:42.628 "write": true, 00:03:42.628 "unmap": true, 00:03:42.628 "write_zeroes": true, 00:03:42.628 "flush": true, 00:03:42.628 "reset": true, 00:03:42.628 "compare": false, 00:03:42.628 "compare_and_write": false, 00:03:42.628 "abort": true, 00:03:42.628 "nvme_admin": false, 00:03:42.628 "nvme_io": false 00:03:42.628 }, 00:03:42.628 "memory_domains": [ 00:03:42.628 { 00:03:42.628 "dma_device_id": "system", 00:03:42.628 "dma_device_type": 1 00:03:42.628 }, 00:03:42.628 { 00:03:42.628 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:42.628 "dma_device_type": 2 00:03:42.628 } 00:03:42.628 ], 00:03:42.628 "driver_specific": { 00:03:42.628 "passthru": { 00:03:42.628 "name": "Passthru0", 00:03:42.628 "base_bdev_name": "Malloc2" 00:03:42.628 } 00:03:42.628 } 00:03:42.628 } 00:03:42.628 ]' 00:03:42.628 12:31:41 -- rpc/rpc.sh@21 -- # jq length 00:03:42.628 12:31:41 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:42.628 12:31:41 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:42.628 12:31:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:42.628 12:31:41 -- common/autotest_common.sh@10 -- # set +x 00:03:42.628 12:31:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:42.628 12:31:41 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:03:42.628 12:31:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:42.628 12:31:41 -- common/autotest_common.sh@10 -- # set +x 00:03:42.628 12:31:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:42.628 12:31:41 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:42.628 12:31:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:42.628 12:31:41 -- common/autotest_common.sh@10 -- # set +x 00:03:42.628 12:31:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:42.628 12:31:41 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:42.628 12:31:41 -- rpc/rpc.sh@26 -- # jq length 00:03:42.628 12:31:41 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:42.628 00:03:42.628 real 0m0.224s 00:03:42.628 user 0m0.150s 00:03:42.628 sys 0m0.020s 00:03:42.628 12:31:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:42.628 12:31:41 -- common/autotest_common.sh@10 -- # set +x 00:03:42.628 ************************************ 00:03:42.628 END TEST rpc_daemon_integrity 00:03:42.628 ************************************ 00:03:42.628 12:31:41 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:03:42.628 12:31:41 -- rpc/rpc.sh@84 -- # killprocess 1050766 00:03:42.628 12:31:41 -- common/autotest_common.sh@936 -- # '[' -z 1050766 ']' 00:03:42.628 12:31:41 -- common/autotest_common.sh@940 -- # kill -0 1050766 00:03:42.628 12:31:41 -- common/autotest_common.sh@941 -- # uname 00:03:42.628 12:31:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:03:42.628 12:31:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1050766 00:03:42.628 12:31:41 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:03:42.629 12:31:41 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:03:42.629 12:31:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1050766' 00:03:42.629 killing process with pid 1050766 00:03:42.629 12:31:41 -- common/autotest_common.sh@955 -- # kill 1050766 00:03:42.629 12:31:41 -- common/autotest_common.sh@960 -- # wait 1050766 00:03:43.195 00:03:43.195 real 0m2.286s 00:03:43.195 user 0m2.865s 
00:03:43.195 sys 0m0.746s 00:03:43.195 12:31:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:43.195 12:31:42 -- common/autotest_common.sh@10 -- # set +x 00:03:43.195 ************************************ 00:03:43.195 END TEST rpc 00:03:43.195 ************************************ 00:03:43.195 12:31:42 -- spdk/autotest.sh@166 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:43.195 12:31:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:43.195 12:31:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:43.195 12:31:42 -- common/autotest_common.sh@10 -- # set +x 00:03:43.195 ************************************ 00:03:43.195 START TEST skip_rpc 00:03:43.195 ************************************ 00:03:43.195 12:31:42 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:43.195 * Looking for test storage... 00:03:43.453 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:43.453 12:31:42 -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:43.453 12:31:42 -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:43.453 12:31:42 -- rpc/skip_rpc.sh@60 -- # run_test skip_rpc test_skip_rpc 00:03:43.453 12:31:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:43.453 12:31:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:43.453 12:31:42 -- common/autotest_common.sh@10 -- # set +x 00:03:43.453 ************************************ 00:03:43.453 START TEST skip_rpc 00:03:43.453 ************************************ 00:03:43.453 12:31:42 -- common/autotest_common.sh@1111 -- # test_skip_rpc 00:03:43.453 12:31:42 -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1051256 00:03:43.453 12:31:42 -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:03:43.453 12:31:42 -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:43.453 12:31:42 -- rpc/skip_rpc.sh@19 -- # sleep 5 00:03:43.453 [2024-04-16 12:31:42.404304] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 
00:03:43.453 [2024-04-16 12:31:42.404382] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1051256 ] 00:03:43.453 EAL: No free 2048 kB hugepages reported on node 1 00:03:43.453 [2024-04-16 12:31:42.470321] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:43.712 [2024-04-16 12:31:42.586879] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:43.712 [2024-04-16 12:31:42.586999] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:03:48.974 12:31:47 -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:03:48.974 12:31:47 -- common/autotest_common.sh@638 -- # local es=0 00:03:48.974 12:31:47 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd spdk_get_version 00:03:48.974 12:31:47 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:03:48.974 12:31:47 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:03:48.974 12:31:47 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:03:48.974 12:31:47 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:03:48.974 12:31:47 -- common/autotest_common.sh@641 -- # rpc_cmd spdk_get_version 00:03:48.974 12:31:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:48.974 12:31:47 -- common/autotest_common.sh@10 -- # set +x 00:03:48.974 12:31:47 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:03:48.974 12:31:47 -- common/autotest_common.sh@641 -- # es=1 00:03:48.974 12:31:47 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:03:48.974 12:31:47 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:03:48.974 12:31:47 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:03:48.974 12:31:47 -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:03:48.974 12:31:47 -- rpc/skip_rpc.sh@23 -- # killprocess 1051256 00:03:48.974 12:31:47 -- common/autotest_common.sh@936 -- # '[' -z 1051256 ']' 00:03:48.974 12:31:47 -- common/autotest_common.sh@940 -- # kill -0 1051256 00:03:48.974 12:31:47 -- common/autotest_common.sh@941 -- # uname 00:03:48.974 12:31:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:03:48.974 12:31:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1051256 00:03:48.974 12:31:47 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:03:48.974 12:31:47 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:03:48.974 12:31:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1051256' 00:03:48.974 killing process with pid 1051256 00:03:48.974 12:31:47 -- common/autotest_common.sh@955 -- # kill 1051256 00:03:48.974 12:31:47 -- common/autotest_common.sh@960 -- # wait 1051256 00:03:48.974 00:03:48.974 real 0m5.490s 00:03:48.974 user 0m5.158s 00:03:48.974 sys 0m0.336s 00:03:48.974 12:31:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:48.974 12:31:47 -- common/autotest_common.sh@10 -- # set +x 00:03:48.974 ************************************ 00:03:48.974 END TEST skip_rpc 00:03:48.974 ************************************ 00:03:48.974 12:31:47 -- rpc/skip_rpc.sh@61 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:03:48.974 12:31:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:48.974 12:31:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:48.974 12:31:47 -- common/autotest_common.sh@10 -- # set +x 00:03:48.974 
************************************ 00:03:48.974 START TEST skip_rpc_with_json 00:03:48.974 ************************************ 00:03:48.974 12:31:47 -- common/autotest_common.sh@1111 -- # test_skip_rpc_with_json 00:03:48.974 12:31:47 -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:03:48.974 12:31:47 -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1051946 00:03:48.974 12:31:47 -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:48.974 12:31:47 -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:48.974 12:31:47 -- rpc/skip_rpc.sh@31 -- # waitforlisten 1051946 00:03:48.974 12:31:47 -- common/autotest_common.sh@817 -- # '[' -z 1051946 ']' 00:03:48.974 12:31:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:48.974 12:31:47 -- common/autotest_common.sh@822 -- # local max_retries=100 00:03:48.974 12:31:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:48.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:48.974 12:31:47 -- common/autotest_common.sh@826 -- # xtrace_disable 00:03:48.974 12:31:47 -- common/autotest_common.sh@10 -- # set +x 00:03:48.974 [2024-04-16 12:31:48.019164] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 00:03:48.974 [2024-04-16 12:31:48.019240] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1051946 ] 00:03:49.232 EAL: No free 2048 kB hugepages reported on node 1 00:03:49.232 [2024-04-16 12:31:48.088157] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:49.232 [2024-04-16 12:31:48.195440] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:49.491 12:31:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:03:49.491 12:31:48 -- common/autotest_common.sh@850 -- # return 0 00:03:49.491 12:31:48 -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:03:49.491 12:31:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:49.491 12:31:48 -- common/autotest_common.sh@10 -- # set +x 00:03:49.491 [2024-04-16 12:31:48.462057] nvmf_rpc.c:2500:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:03:49.491 request: 00:03:49.491 { 00:03:49.491 "trtype": "tcp", 00:03:49.491 "method": "nvmf_get_transports", 00:03:49.491 "req_id": 1 00:03:49.491 } 00:03:49.491 Got JSON-RPC error response 00:03:49.491 response: 00:03:49.491 { 00:03:49.491 "code": -19, 00:03:49.491 "message": "No such device" 00:03:49.491 } 00:03:49.491 12:31:48 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:03:49.491 12:31:48 -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:03:49.491 12:31:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:49.491 12:31:48 -- common/autotest_common.sh@10 -- # set +x 00:03:49.491 [2024-04-16 12:31:48.470275] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:49.491 12:31:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:49.491 12:31:48 -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:03:49.491 12:31:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:49.491 12:31:48 -- common/autotest_common.sh@10 -- # set +x 00:03:49.749 12:31:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
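The nvmf_get_transports error and the nvmf_create_transport that follow above are ordinary JSON-RPC 2.0 exchanges over the target's unix socket; the -19 in the error object is ENODEV. A hedged sketch of a raw client for the same call using only POSIX sockets; the socket path, request id, and envelope shape are illustrative:

/* Hedged sketch: hand-rolled JSON-RPC client against spdk_tgt. */
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

int
main(void)
{
    const char *req =
        "{\"jsonrpc\":\"2.0\",\"id\":1,"
        "\"method\":\"nvmf_get_transports\","
        "\"params\":{\"trtype\":\"tcp\"}}";
    struct sockaddr_un addr = { .sun_family = AF_UNIX };
    char resp[4096];
    ssize_t n;
    int fd = socket(AF_UNIX, SOCK_STREAM, 0);

    strncpy(addr.sun_path, "/var/tmp/spdk.sock", sizeof(addr.sun_path) - 1);
    if (fd < 0 || connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
        return 1;
    if (write(fd, req, strlen(req)) < 0)
        return 1;

    n = read(fd, resp, sizeof(resp) - 1);
    if (n > 0) {
        resp[n] = '\0';
        /* Before the transport exists, this echoes the -19
         * "No such device" error object seen in the log above. */
        printf("%s\n", resp);
    }
    close(fd);
    return 0;
}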
00:03:49.749 12:31:48 -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:49.749 { 00:03:49.749 "subsystems": [ 00:03:49.749 { 00:03:49.749 "subsystem": "vfio_user_target", 00:03:49.749 "config": null 00:03:49.749 }, 00:03:49.749 { 00:03:49.749 "subsystem": "keyring", 00:03:49.749 "config": [] 00:03:49.749 }, 00:03:49.749 { 00:03:49.749 "subsystem": "iobuf", 00:03:49.749 "config": [ 00:03:49.749 { 00:03:49.749 "method": "iobuf_set_options", 00:03:49.749 "params": { 00:03:49.749 "small_pool_count": 8192, 00:03:49.749 "large_pool_count": 1024, 00:03:49.749 "small_bufsize": 8192, 00:03:49.749 "large_bufsize": 135168 00:03:49.749 } 00:03:49.749 } 00:03:49.749 ] 00:03:49.749 }, 00:03:49.749 { 00:03:49.749 "subsystem": "sock", 00:03:49.749 "config": [ 00:03:49.749 { 00:03:49.749 "method": "sock_impl_set_options", 00:03:49.749 "params": { 00:03:49.749 "impl_name": "posix", 00:03:49.749 "recv_buf_size": 2097152, 00:03:49.749 "send_buf_size": 2097152, 00:03:49.749 "enable_recv_pipe": true, 00:03:49.749 "enable_quickack": false, 00:03:49.749 "enable_placement_id": 0, 00:03:49.749 "enable_zerocopy_send_server": true, 00:03:49.749 "enable_zerocopy_send_client": false, 00:03:49.749 "zerocopy_threshold": 0, 00:03:49.749 "tls_version": 0, 00:03:49.749 "enable_ktls": false 00:03:49.749 } 00:03:49.749 }, 00:03:49.749 { 00:03:49.749 "method": "sock_impl_set_options", 00:03:49.749 "params": { 00:03:49.749 "impl_name": "ssl", 00:03:49.749 "recv_buf_size": 4096, 00:03:49.749 "send_buf_size": 4096, 00:03:49.749 "enable_recv_pipe": true, 00:03:49.749 "enable_quickack": false, 00:03:49.749 "enable_placement_id": 0, 00:03:49.749 "enable_zerocopy_send_server": true, 00:03:49.749 "enable_zerocopy_send_client": false, 00:03:49.749 "zerocopy_threshold": 0, 00:03:49.749 "tls_version": 0, 00:03:49.749 "enable_ktls": false 00:03:49.749 } 00:03:49.749 } 00:03:49.749 ] 00:03:49.749 }, 00:03:49.749 { 00:03:49.749 "subsystem": "vmd", 00:03:49.749 "config": [] 00:03:49.749 }, 00:03:49.749 { 00:03:49.749 "subsystem": "accel", 00:03:49.749 "config": [ 00:03:49.749 { 00:03:49.749 "method": "accel_set_options", 00:03:49.749 "params": { 00:03:49.749 "small_cache_size": 128, 00:03:49.749 "large_cache_size": 16, 00:03:49.749 "task_count": 2048, 00:03:49.749 "sequence_count": 2048, 00:03:49.749 "buf_count": 2048 00:03:49.749 } 00:03:49.749 } 00:03:49.749 ] 00:03:49.749 }, 00:03:49.749 { 00:03:49.749 "subsystem": "bdev", 00:03:49.749 "config": [ 00:03:49.749 { 00:03:49.749 "method": "bdev_set_options", 00:03:49.749 "params": { 00:03:49.749 "bdev_io_pool_size": 65535, 00:03:49.749 "bdev_io_cache_size": 256, 00:03:49.749 "bdev_auto_examine": true, 00:03:49.749 "iobuf_small_cache_size": 128, 00:03:49.749 "iobuf_large_cache_size": 16 00:03:49.749 } 00:03:49.749 }, 00:03:49.749 { 00:03:49.749 "method": "bdev_raid_set_options", 00:03:49.749 "params": { 00:03:49.749 "process_window_size_kb": 1024 00:03:49.749 } 00:03:49.749 }, 00:03:49.749 { 00:03:49.749 "method": "bdev_iscsi_set_options", 00:03:49.749 "params": { 00:03:49.749 "timeout_sec": 30 00:03:49.749 } 00:03:49.749 }, 00:03:49.749 { 00:03:49.749 "method": "bdev_nvme_set_options", 00:03:49.749 "params": { 00:03:49.749 "action_on_timeout": "none", 00:03:49.749 "timeout_us": 0, 00:03:49.749 "timeout_admin_us": 0, 00:03:49.749 "keep_alive_timeout_ms": 10000, 00:03:49.749 "arbitration_burst": 0, 00:03:49.749 "low_priority_weight": 0, 00:03:49.749 "medium_priority_weight": 0, 00:03:49.749 "high_priority_weight": 0, 00:03:49.749 
"nvme_adminq_poll_period_us": 10000, 00:03:49.749 "nvme_ioq_poll_period_us": 0, 00:03:49.749 "io_queue_requests": 0, 00:03:49.749 "delay_cmd_submit": true, 00:03:49.749 "transport_retry_count": 4, 00:03:49.749 "bdev_retry_count": 3, 00:03:49.749 "transport_ack_timeout": 0, 00:03:49.749 "ctrlr_loss_timeout_sec": 0, 00:03:49.749 "reconnect_delay_sec": 0, 00:03:49.749 "fast_io_fail_timeout_sec": 0, 00:03:49.749 "disable_auto_failback": false, 00:03:49.749 "generate_uuids": false, 00:03:49.749 "transport_tos": 0, 00:03:49.749 "nvme_error_stat": false, 00:03:49.749 "rdma_srq_size": 0, 00:03:49.749 "io_path_stat": false, 00:03:49.749 "allow_accel_sequence": false, 00:03:49.749 "rdma_max_cq_size": 0, 00:03:49.749 "rdma_cm_event_timeout_ms": 0, 00:03:49.749 "dhchap_digests": [ 00:03:49.749 "sha256", 00:03:49.749 "sha384", 00:03:49.749 "sha512" 00:03:49.749 ], 00:03:49.749 "dhchap_dhgroups": [ 00:03:49.749 "null", 00:03:49.749 "ffdhe2048", 00:03:49.749 "ffdhe3072", 00:03:49.749 "ffdhe4096", 00:03:49.749 "ffdhe6144", 00:03:49.749 "ffdhe8192" 00:03:49.749 ] 00:03:49.749 } 00:03:49.749 }, 00:03:49.749 { 00:03:49.749 "method": "bdev_nvme_set_hotplug", 00:03:49.749 "params": { 00:03:49.749 "period_us": 100000, 00:03:49.749 "enable": false 00:03:49.749 } 00:03:49.749 }, 00:03:49.749 { 00:03:49.749 "method": "bdev_wait_for_examine" 00:03:49.749 } 00:03:49.749 ] 00:03:49.749 }, 00:03:49.749 { 00:03:49.749 "subsystem": "scsi", 00:03:49.749 "config": null 00:03:49.749 }, 00:03:49.749 { 00:03:49.749 "subsystem": "scheduler", 00:03:49.749 "config": [ 00:03:49.749 { 00:03:49.749 "method": "framework_set_scheduler", 00:03:49.749 "params": { 00:03:49.749 "name": "static" 00:03:49.749 } 00:03:49.749 } 00:03:49.749 ] 00:03:49.749 }, 00:03:49.749 { 00:03:49.749 "subsystem": "vhost_scsi", 00:03:49.749 "config": [] 00:03:49.749 }, 00:03:49.749 { 00:03:49.749 "subsystem": "vhost_blk", 00:03:49.749 "config": [] 00:03:49.749 }, 00:03:49.749 { 00:03:49.749 "subsystem": "ublk", 00:03:49.749 "config": [] 00:03:49.749 }, 00:03:49.749 { 00:03:49.749 "subsystem": "nbd", 00:03:49.749 "config": [] 00:03:49.749 }, 00:03:49.749 { 00:03:49.749 "subsystem": "nvmf", 00:03:49.749 "config": [ 00:03:49.749 { 00:03:49.749 "method": "nvmf_set_config", 00:03:49.749 "params": { 00:03:49.749 "discovery_filter": "match_any", 00:03:49.749 "admin_cmd_passthru": { 00:03:49.749 "identify_ctrlr": false 00:03:49.749 } 00:03:49.749 } 00:03:49.749 }, 00:03:49.749 { 00:03:49.749 "method": "nvmf_set_max_subsystems", 00:03:49.749 "params": { 00:03:49.749 "max_subsystems": 1024 00:03:49.749 } 00:03:49.749 }, 00:03:49.749 { 00:03:49.749 "method": "nvmf_set_crdt", 00:03:49.749 "params": { 00:03:49.749 "crdt1": 0, 00:03:49.749 "crdt2": 0, 00:03:49.749 "crdt3": 0 00:03:49.749 } 00:03:49.749 }, 00:03:49.749 { 00:03:49.749 "method": "nvmf_create_transport", 00:03:49.749 "params": { 00:03:49.749 "trtype": "TCP", 00:03:49.749 "max_queue_depth": 128, 00:03:49.749 "max_io_qpairs_per_ctrlr": 127, 00:03:49.749 "in_capsule_data_size": 4096, 00:03:49.749 "max_io_size": 131072, 00:03:49.749 "io_unit_size": 131072, 00:03:49.749 "max_aq_depth": 128, 00:03:49.749 "num_shared_buffers": 511, 00:03:49.749 "buf_cache_size": 4294967295, 00:03:49.749 "dif_insert_or_strip": false, 00:03:49.749 "zcopy": false, 00:03:49.749 "c2h_success": true, 00:03:49.749 "sock_priority": 0, 00:03:49.749 "abort_timeout_sec": 1, 00:03:49.749 "ack_timeout": 0 00:03:49.749 } 00:03:49.749 } 00:03:49.749 ] 00:03:49.749 }, 00:03:49.749 { 00:03:49.749 "subsystem": "iscsi", 00:03:49.749 "config": [ 
00:03:49.749 { 00:03:49.749 "method": "iscsi_set_options", 00:03:49.749 "params": { 00:03:49.749 "node_base": "iqn.2016-06.io.spdk", 00:03:49.749 "max_sessions": 128, 00:03:49.749 "max_connections_per_session": 2, 00:03:49.749 "max_queue_depth": 64, 00:03:49.749 "default_time2wait": 2, 00:03:49.749 "default_time2retain": 20, 00:03:49.749 "first_burst_length": 8192, 00:03:49.749 "immediate_data": true, 00:03:49.749 "allow_duplicated_isid": false, 00:03:49.749 "error_recovery_level": 0, 00:03:49.749 "nop_timeout": 60, 00:03:49.749 "nop_in_interval": 30, 00:03:49.749 "disable_chap": false, 00:03:49.749 "require_chap": false, 00:03:49.749 "mutual_chap": false, 00:03:49.749 "chap_group": 0, 00:03:49.749 "max_large_datain_per_connection": 64, 00:03:49.749 "max_r2t_per_connection": 4, 00:03:49.749 "pdu_pool_size": 36864, 00:03:49.750 "immediate_data_pool_size": 16384, 00:03:49.750 "data_out_pool_size": 2048 00:03:49.750 } 00:03:49.750 } 00:03:49.750 ] 00:03:49.750 } 00:03:49.750 ] 00:03:49.750 } 00:03:49.750 12:31:48 -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:03:49.750 12:31:48 -- rpc/skip_rpc.sh@40 -- # killprocess 1051946 00:03:49.750 12:31:48 -- common/autotest_common.sh@936 -- # '[' -z 1051946 ']' 00:03:49.750 12:31:48 -- common/autotest_common.sh@940 -- # kill -0 1051946 00:03:49.750 12:31:48 -- common/autotest_common.sh@941 -- # uname 00:03:49.750 12:31:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:03:49.750 12:31:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1051946 00:03:49.750 12:31:48 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:03:49.750 12:31:48 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:03:49.750 12:31:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1051946' 00:03:49.750 killing process with pid 1051946 00:03:49.750 12:31:48 -- common/autotest_common.sh@955 -- # kill 1051946 00:03:49.750 12:31:48 -- common/autotest_common.sh@960 -- # wait 1051946 00:03:50.348 12:31:49 -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1052092 00:03:50.349 12:31:49 -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:50.349 12:31:49 -- rpc/skip_rpc.sh@48 -- # sleep 5 00:03:55.610 12:31:54 -- rpc/skip_rpc.sh@50 -- # killprocess 1052092 00:03:55.610 12:31:54 -- common/autotest_common.sh@936 -- # '[' -z 1052092 ']' 00:03:55.610 12:31:54 -- common/autotest_common.sh@940 -- # kill -0 1052092 00:03:55.610 12:31:54 -- common/autotest_common.sh@941 -- # uname 00:03:55.610 12:31:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:03:55.610 12:31:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1052092 00:03:55.610 12:31:54 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:03:55.610 12:31:54 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:03:55.610 12:31:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1052092' 00:03:55.610 killing process with pid 1052092 00:03:55.610 12:31:54 -- common/autotest_common.sh@955 -- # kill 1052092 00:03:55.610 12:31:54 -- common/autotest_common.sh@960 -- # wait 1052092 00:03:55.610 12:31:54 -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:55.610 12:31:54 -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:55.610 
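The save_config dump above followed by relaunching spdk_tgt with --json is the round-trip under test: the saved subsystem configuration is replayed at startup instead of arriving as live RPCs. A rough sketch of the programmatic equivalent; the app name and path are illustrative, and it assumes the json_config_file field in spdk_app_opts that backs the --json flag:

/* Hedged sketch: start an SPDK app from a saved JSON config. */
#include "spdk/event.h"

static void
app_started(void *ctx)
{
    (void)ctx; /* the JSON config has been applied by this point */
}

int
main(int argc, char **argv)
{
    struct spdk_app_opts opts = {};
    int rc;

    (void)argc;
    (void)argv;
    spdk_app_opts_init(&opts, sizeof(opts));
    opts.name = "json_config_demo";
    opts.json_config_file = "/tmp/config.json"; /* illustrative path */

    rc = spdk_app_start(&opts, app_started, NULL);
    spdk_app_fini();
    return rc;
}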
00:03:55.610 real 0m6.650s 00:03:55.610 user 0m6.228s 00:03:55.610 sys 0m0.705s 00:03:55.610 12:31:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:55.610 12:31:54 -- common/autotest_common.sh@10 -- # set +x 00:03:55.610 ************************************ 00:03:55.610 END TEST skip_rpc_with_json 00:03:55.610 ************************************ 00:03:55.610 12:31:54 -- rpc/skip_rpc.sh@62 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:03:55.610 12:31:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:55.610 12:31:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:55.610 12:31:54 -- common/autotest_common.sh@10 -- # set +x 00:03:55.868 ************************************ 00:03:55.868 START TEST skip_rpc_with_delay 00:03:55.868 ************************************ 00:03:55.868 12:31:54 -- common/autotest_common.sh@1111 -- # test_skip_rpc_with_delay 00:03:55.868 12:31:54 -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:55.868 12:31:54 -- common/autotest_common.sh@638 -- # local es=0 00:03:55.868 12:31:54 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:55.868 12:31:54 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:55.868 12:31:54 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:03:55.868 12:31:54 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:55.868 12:31:54 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:03:55.868 12:31:54 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:55.868 12:31:54 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:03:55.868 12:31:54 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:55.868 12:31:54 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:03:55.868 12:31:54 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:55.868 [2024-04-16 12:31:54.801459] app.c: 751:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:03:55.868 [2024-04-16 12:31:54.801585] app.c: 630:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:03:55.868 12:31:54 -- common/autotest_common.sh@641 -- # es=1 00:03:55.868 12:31:54 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:03:55.868 12:31:54 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:03:55.868 12:31:54 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:03:55.868 00:03:55.868 real 0m0.068s 00:03:55.868 user 0m0.045s 00:03:55.868 sys 0m0.023s 00:03:55.868 12:31:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:55.868 12:31:54 -- common/autotest_common.sh@10 -- # set +x 00:03:55.868 ************************************ 00:03:55.868 END TEST skip_rpc_with_delay 00:03:55.868 ************************************ 00:03:55.868 12:31:54 -- rpc/skip_rpc.sh@64 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:55.868 00:03:55.868 real 0m12.627s 00:03:55.868 user 0m11.575s 00:03:55.868 sys 0m1.315s 00:03:55.868 12:31:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:55.868 12:31:54 -- common/autotest_common.sh@10 -- # set +x 00:03:55.868 ************************************ 00:03:55.868 END TEST skip_rpc 00:03:55.868 ************************************ 00:03:55.868 12:31:54 -- spdk/autotest.sh@167 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:03:55.868 12:31:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:55.868 12:31:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:55.868 12:31:54 -- common/autotest_common.sh@10 -- # set +x 00:03:56.127 ************************************ 00:03:56.127 START TEST rpc_client 00:03:56.127 ************************************ 00:03:56.127 12:31:54 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:03:56.127 * Looking for test storage... 
00:03:56.127 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:03:56.127 12:31:55 -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:03:56.127 OK 00:03:56.127 12:31:55 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:03:56.127 00:03:56.127 real 0m0.067s 00:03:56.127 user 0m0.027s 00:03:56.127 sys 0m0.046s 00:03:56.127 12:31:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:56.127 12:31:55 -- common/autotest_common.sh@10 -- # set +x 00:03:56.127 ************************************ 00:03:56.127 END TEST rpc_client 00:03:56.127 ************************************ 00:03:56.127 12:31:55 -- spdk/autotest.sh@168 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:03:56.127 12:31:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:56.127 12:31:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:56.127 12:31:55 -- common/autotest_common.sh@10 -- # set +x 00:03:56.127 ************************************ 00:03:56.127 START TEST json_config 00:03:56.127 ************************************ 00:03:56.127 12:31:55 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:03:56.127 12:31:55 -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:56.127 12:31:55 -- nvmf/common.sh@7 -- # uname -s 00:03:56.127 12:31:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:56.127 12:31:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:56.127 12:31:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:56.127 12:31:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:56.127 12:31:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:56.127 12:31:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:56.127 12:31:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:56.127 12:31:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:56.127 12:31:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:56.127 12:31:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:56.127 12:31:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:03:56.127 12:31:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:03:56.127 12:31:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:56.127 12:31:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:56.127 12:31:55 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:56.127 12:31:55 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:56.127 12:31:55 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:56.127 12:31:55 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:56.127 12:31:55 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:56.127 12:31:55 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:56.127 12:31:55 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:56.127 12:31:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:56.128 12:31:55 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:56.128 12:31:55 -- paths/export.sh@5 -- # export PATH 00:03:56.128 12:31:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:56.128 12:31:55 -- nvmf/common.sh@47 -- # : 0 00:03:56.128 12:31:55 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:56.128 12:31:55 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:56.128 12:31:55 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:56.128 12:31:55 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:56.128 12:31:55 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:56.128 12:31:55 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:56.128 12:31:55 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:56.128 12:31:55 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:56.128 12:31:55 -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:03:56.128 12:31:55 -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:03:56.128 12:31:55 -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:03:56.128 12:31:55 -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:03:56.128 12:31:55 -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:03:56.128 12:31:55 -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:03:56.128 12:31:55 -- json_config/json_config.sh@31 -- # declare -A app_pid 00:03:56.128 12:31:55 -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:03:56.128 12:31:55 -- json_config/json_config.sh@32 -- # declare -A app_socket 00:03:56.128 12:31:55 -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:03:56.128 12:31:55 -- 
json_config/json_config.sh@33 -- # declare -A app_params 00:03:56.128 12:31:55 -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:03:56.128 12:31:55 -- json_config/json_config.sh@34 -- # declare -A configs_path 00:03:56.128 12:31:55 -- json_config/json_config.sh@40 -- # last_event_id=0 00:03:56.128 12:31:55 -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:03:56.128 12:31:55 -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:03:56.128 INFO: JSON configuration test init 00:03:56.128 12:31:55 -- json_config/json_config.sh@357 -- # json_config_test_init 00:03:56.128 12:31:55 -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:03:56.128 12:31:55 -- common/autotest_common.sh@710 -- # xtrace_disable 00:03:56.128 12:31:55 -- common/autotest_common.sh@10 -- # set +x 00:03:56.128 12:31:55 -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:03:56.128 12:31:55 -- common/autotest_common.sh@710 -- # xtrace_disable 00:03:56.128 12:31:55 -- common/autotest_common.sh@10 -- # set +x 00:03:56.128 12:31:55 -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:03:56.128 12:31:55 -- json_config/common.sh@9 -- # local app=target 00:03:56.128 12:31:55 -- json_config/common.sh@10 -- # shift 00:03:56.128 12:31:55 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:03:56.128 12:31:55 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:03:56.128 12:31:55 -- json_config/common.sh@15 -- # local app_extra_params= 00:03:56.128 12:31:55 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:56.128 12:31:55 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:56.128 12:31:55 -- json_config/common.sh@22 -- # app_pid["$app"]=1052925 00:03:56.128 12:31:55 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:03:56.128 12:31:55 -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:03:56.128 Waiting for target to run... 00:03:56.128 12:31:55 -- json_config/common.sh@25 -- # waitforlisten 1052925 /var/tmp/spdk_tgt.sock 00:03:56.128 12:31:55 -- common/autotest_common.sh@817 -- # '[' -z 1052925 ']' 00:03:56.128 12:31:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:56.128 12:31:55 -- common/autotest_common.sh@822 -- # local max_retries=100 00:03:56.128 12:31:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:56.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:03:56.128 12:31:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:03:56.128 12:31:55 -- common/autotest_common.sh@10 -- # set +x 00:03:56.386 [2024-04-16 12:31:55.230744] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 
00:03:56.386 [2024-04-16 12:31:55.230829] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1052925 ] 00:03:56.386 EAL: No free 2048 kB hugepages reported on node 1 00:03:56.644 [2024-04-16 12:31:55.574703] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:56.644 [2024-04-16 12:31:55.662020] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:57.220 12:31:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:03:57.220 12:31:56 -- common/autotest_common.sh@850 -- # return 0 00:03:57.220 12:31:56 -- json_config/common.sh@26 -- # echo '' 00:03:57.220 00:03:57.220 12:31:56 -- json_config/json_config.sh@269 -- # create_accel_config 00:03:57.220 12:31:56 -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:03:57.220 12:31:56 -- common/autotest_common.sh@710 -- # xtrace_disable 00:03:57.220 12:31:56 -- common/autotest_common.sh@10 -- # set +x 00:03:57.220 12:31:56 -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:03:57.220 12:31:56 -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:03:57.220 12:31:56 -- common/autotest_common.sh@716 -- # xtrace_disable 00:03:57.220 12:31:56 -- common/autotest_common.sh@10 -- # set +x 00:03:57.220 12:31:56 -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:03:57.220 12:31:56 -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:03:57.220 12:31:56 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:00.502 12:31:59 -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:04:00.502 12:31:59 -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:00.502 12:31:59 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:00.502 12:31:59 -- common/autotest_common.sh@10 -- # set +x 00:04:00.502 12:31:59 -- json_config/json_config.sh@45 -- # local ret=0 00:04:00.502 12:31:59 -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:00.502 12:31:59 -- json_config/json_config.sh@46 -- # local enabled_types 00:04:00.502 12:31:59 -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:04:00.502 12:31:59 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:00.502 12:31:59 -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:04:00.760 12:31:59 -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:04:00.760 12:31:59 -- json_config/json_config.sh@48 -- # local get_types 00:04:00.760 12:31:59 -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:04:00.760 12:31:59 -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:04:00.760 12:31:59 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:00.760 12:31:59 -- common/autotest_common.sh@10 -- # set +x 00:04:00.760 12:31:59 -- json_config/json_config.sh@55 -- # return 0 00:04:00.760 12:31:59 -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:04:00.760 12:31:59 -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:04:00.760 12:31:59 -- 
json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:04:00.760 12:31:59 -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:04:00.760 12:31:59 -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:04:00.760 12:31:59 -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:04:00.760 12:31:59 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:00.760 12:31:59 -- common/autotest_common.sh@10 -- # set +x 00:04:00.760 12:31:59 -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:00.760 12:31:59 -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:04:00.760 12:31:59 -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:04:00.760 12:31:59 -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:00.760 12:31:59 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:00.760 MallocForNvmf0 00:04:01.017 12:31:59 -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:01.017 12:31:59 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:01.017 MallocForNvmf1 00:04:01.017 12:32:00 -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:01.017 12:32:00 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:01.275 [2024-04-16 12:32:00.309678] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:01.275 12:32:00 -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:01.275 12:32:00 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:01.532 12:32:00 -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:01.532 12:32:00 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:01.790 12:32:00 -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:01.790 12:32:00 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:02.048 12:32:01 -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:02.048 12:32:01 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:02.306 [2024-04-16 12:32:01.260960] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:02.306 12:32:01 -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:04:02.306 12:32:01 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:02.306 
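The create_nvmf_subsystem_config step traced above is the stock RPC recipe for a TCP NVMe-oF target: create backing malloc bdevs, create the TCP transport, create a subsystem, attach the bdevs as namespaces, then add a listener. Condensed into a sketch (RPC socket path as in this run):

  RPC='scripts/rpc.py -s /var/tmp/spdk_tgt.sock'
  $RPC bdev_malloc_create 8 512 --name MallocForNvmf0   # 8 MiB bdev, 512-byte blocks
  $RPC nvmf_create_transport -t tcp -u 8192 -c 0        # -u io_unit_size, -c in-capsule data size
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a allows any host
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420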
12:32:01 -- common/autotest_common.sh@10 -- # set +x 00:04:02.306 12:32:01 -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:04:02.306 12:32:01 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:02.306 12:32:01 -- common/autotest_common.sh@10 -- # set +x 00:04:02.306 12:32:01 -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:04:02.306 12:32:01 -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:02.306 12:32:01 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:02.571 MallocBdevForConfigChangeCheck 00:04:02.571 12:32:01 -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:04:02.571 12:32:01 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:02.571 12:32:01 -- common/autotest_common.sh@10 -- # set +x 00:04:02.571 12:32:01 -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:04:02.571 12:32:01 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:02.888 12:32:01 -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:04:02.888 INFO: shutting down applications... 00:04:02.888 12:32:01 -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:04:02.888 12:32:01 -- json_config/json_config.sh@368 -- # json_config_clear target 00:04:02.888 12:32:01 -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:04:02.888 12:32:01 -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:05.415 Calling clear_iscsi_subsystem 00:04:05.415 Calling clear_nvmf_subsystem 00:04:05.415 Calling clear_nbd_subsystem 00:04:05.415 Calling clear_ublk_subsystem 00:04:05.415 Calling clear_vhost_blk_subsystem 00:04:05.415 Calling clear_vhost_scsi_subsystem 00:04:05.415 Calling clear_bdev_subsystem 00:04:05.415 12:32:04 -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:05.415 12:32:04 -- json_config/json_config.sh@343 -- # count=100 00:04:05.415 12:32:04 -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:04:05.415 12:32:04 -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:05.415 12:32:04 -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:05.415 12:32:04 -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:05.981 12:32:04 -- json_config/json_config.sh@345 -- # break 00:04:05.981 12:32:04 -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:04:05.981 12:32:04 -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:04:05.981 12:32:04 -- json_config/common.sh@31 -- # local app=target 00:04:05.981 12:32:04 -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:05.981 12:32:04 -- json_config/common.sh@35 -- # [[ -n 1052925 ]] 00:04:05.981 12:32:04 -- json_config/common.sh@38 -- # kill -SIGINT 1052925 00:04:05.981 12:32:04 -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:05.981 12:32:04 -- 
json_config/common.sh@40 -- # (( i < 30 )) 00:04:05.981 12:32:04 -- json_config/common.sh@41 -- # kill -0 1052925 00:04:05.981 12:32:04 -- json_config/common.sh@45 -- # sleep 0.5 00:04:06.550 12:32:05 -- json_config/common.sh@40 -- # (( i++ )) 00:04:06.550 12:32:05 -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:06.550 12:32:05 -- json_config/common.sh@41 -- # kill -0 1052925 00:04:06.550 12:32:05 -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:06.550 12:32:05 -- json_config/common.sh@43 -- # break 00:04:06.550 12:32:05 -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:06.550 12:32:05 -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:06.550 SPDK target shutdown done 00:04:06.550 12:32:05 -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:04:06.550 INFO: relaunching applications... 00:04:06.550 12:32:05 -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:06.550 12:32:05 -- json_config/common.sh@9 -- # local app=target 00:04:06.550 12:32:05 -- json_config/common.sh@10 -- # shift 00:04:06.550 12:32:05 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:06.550 12:32:05 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:06.550 12:32:05 -- json_config/common.sh@15 -- # local app_extra_params= 00:04:06.550 12:32:05 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:06.550 12:32:05 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:06.550 12:32:05 -- json_config/common.sh@22 -- # app_pid["$app"]=1054245 00:04:06.550 12:32:05 -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:06.550 12:32:05 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:06.550 Waiting for target to run... 00:04:06.550 12:32:05 -- json_config/common.sh@25 -- # waitforlisten 1054245 /var/tmp/spdk_tgt.sock 00:04:06.550 12:32:05 -- common/autotest_common.sh@817 -- # '[' -z 1054245 ']' 00:04:06.550 12:32:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:06.550 12:32:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:06.550 12:32:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:06.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:06.550 12:32:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:06.550 12:32:05 -- common/autotest_common.sh@10 -- # set +x 00:04:06.550 [2024-04-16 12:32:05.423512] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 
00:04:06.550 [2024-04-16 12:32:05.423638] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1054245 ] 00:04:06.550 EAL: No free 2048 kB hugepages reported on node 1 00:04:07.118 [2024-04-16 12:32:05.951628] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:07.118 [2024-04-16 12:32:06.056650] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:10.401 [2024-04-16 12:32:09.094451] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:10.401 [2024-04-16 12:32:09.127066] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:10.967 12:32:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:10.967 12:32:09 -- common/autotest_common.sh@850 -- # return 0 00:04:10.967 12:32:09 -- json_config/common.sh@26 -- # echo '' 00:04:10.967 00:04:10.967 12:32:09 -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:04:10.967 12:32:09 -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:10.967 INFO: Checking if target configuration is the same... 00:04:10.967 12:32:09 -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:10.967 12:32:09 -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:04:10.967 12:32:09 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:10.967 + '[' 2 -ne 2 ']' 00:04:10.967 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:10.967 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:10.967 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:10.967 +++ basename /dev/fd/62 00:04:10.967 ++ mktemp /tmp/62.XXX 00:04:10.967 + tmp_file_1=/tmp/62.sNZ 00:04:10.967 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:10.967 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:10.967 + tmp_file_2=/tmp/spdk_tgt_config.json.YjM 00:04:10.967 + ret=0 00:04:10.967 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:11.226 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:11.226 + diff -u /tmp/62.sNZ /tmp/spdk_tgt_config.json.YjM 00:04:11.226 + echo 'INFO: JSON config files are the same' 00:04:11.226 INFO: JSON config files are the same 00:04:11.226 + rm /tmp/62.sNZ /tmp/spdk_tgt_config.json.YjM 00:04:11.226 + exit 0 00:04:11.226 12:32:10 -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:04:11.226 12:32:10 -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:11.226 INFO: changing configuration and checking if this can be detected... 
00:04:11.226 12:32:10 -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:11.226 12:32:10 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:11.484 12:32:10 -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:11.484 12:32:10 -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:04:11.484 12:32:10 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:11.484 + '[' 2 -ne 2 ']' 00:04:11.484 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:11.484 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:11.484 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:11.484 +++ basename /dev/fd/62 00:04:11.484 ++ mktemp /tmp/62.XXX 00:04:11.484 + tmp_file_1=/tmp/62.POx 00:04:11.742 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:11.742 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:11.742 + tmp_file_2=/tmp/spdk_tgt_config.json.eqg 00:04:11.742 + ret=0 00:04:11.742 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:12.000 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:12.000 + diff -u /tmp/62.POx /tmp/spdk_tgt_config.json.eqg 00:04:12.000 + ret=1 00:04:12.000 + echo '=== Start of file: /tmp/62.POx ===' 00:04:12.000 + cat /tmp/62.POx 00:04:12.000 + echo '=== End of file: /tmp/62.POx ===' 00:04:12.000 + echo '' 00:04:12.000 + echo '=== Start of file: /tmp/spdk_tgt_config.json.eqg ===' 00:04:12.000 + cat /tmp/spdk_tgt_config.json.eqg 00:04:12.000 + echo '=== End of file: /tmp/spdk_tgt_config.json.eqg ===' 00:04:12.000 + echo '' 00:04:12.000 + rm /tmp/62.POx /tmp/spdk_tgt_config.json.eqg 00:04:12.000 + exit 1 00:04:12.000 12:32:10 -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:04:12.000 INFO: configuration change detected. 
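Both checks above lean on one trick: each JSON dump is canonicalized with config_filter.py -method sort before diff -u compares them, so key ordering never produces false mismatches. Exit 0 means the configs are identical; after deleting MallocBdevForConfigChangeCheck, the expected exit 1 proves a real change is detected. A condensed sketch of the comparison (temp-file names illustrative; the filter reads the config on stdin, as in the harness trace):

  scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
      | test/json_config/config_filter.py -method sort > /tmp/live.json
  test/json_config/config_filter.py -method sort < spdk_tgt_config.json > /tmp/saved.json
  if diff -u /tmp/saved.json /tmp/live.json; then echo 'INFO: JSON config files are the same'; fi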
00:04:12.000 12:32:10 -- json_config/json_config.sh@394 -- # json_config_test_fini 00:04:12.000 12:32:10 -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:04:12.000 12:32:10 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:12.000 12:32:10 -- common/autotest_common.sh@10 -- # set +x 00:04:12.000 12:32:10 -- json_config/json_config.sh@307 -- # local ret=0 00:04:12.000 12:32:10 -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:04:12.000 12:32:10 -- json_config/json_config.sh@317 -- # [[ -n 1054245 ]] 00:04:12.000 12:32:10 -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:04:12.000 12:32:10 -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:04:12.000 12:32:10 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:12.000 12:32:10 -- common/autotest_common.sh@10 -- # set +x 00:04:12.000 12:32:10 -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:04:12.000 12:32:10 -- json_config/json_config.sh@193 -- # uname -s 00:04:12.000 12:32:10 -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:04:12.000 12:32:10 -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:04:12.000 12:32:10 -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:04:12.000 12:32:10 -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:04:12.000 12:32:10 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:12.000 12:32:10 -- common/autotest_common.sh@10 -- # set +x 00:04:12.000 12:32:10 -- json_config/json_config.sh@323 -- # killprocess 1054245 00:04:12.000 12:32:10 -- common/autotest_common.sh@936 -- # '[' -z 1054245 ']' 00:04:12.000 12:32:10 -- common/autotest_common.sh@940 -- # kill -0 1054245 00:04:12.000 12:32:10 -- common/autotest_common.sh@941 -- # uname 00:04:12.000 12:32:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:12.000 12:32:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1054245 00:04:12.000 12:32:11 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:12.000 12:32:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:12.000 12:32:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1054245' 00:04:12.001 killing process with pid 1054245 00:04:12.001 12:32:11 -- common/autotest_common.sh@955 -- # kill 1054245 00:04:12.001 12:32:11 -- common/autotest_common.sh@960 -- # wait 1054245 00:04:14.564 12:32:13 -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:14.564 12:32:13 -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:04:14.564 12:32:13 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:14.564 12:32:13 -- common/autotest_common.sh@10 -- # set +x 00:04:14.823 12:32:13 -- json_config/json_config.sh@328 -- # return 0 00:04:14.823 12:32:13 -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:04:14.823 INFO: Success 00:04:14.823 00:04:14.823 real 0m18.514s 00:04:14.823 user 0m20.479s 00:04:14.823 sys 0m2.005s 00:04:14.823 12:32:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:14.823 12:32:13 -- common/autotest_common.sh@10 -- # set +x 00:04:14.823 ************************************ 00:04:14.823 END TEST json_config 00:04:14.823 ************************************ 00:04:14.823 12:32:13 -- spdk/autotest.sh@169 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:14.823 12:32:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:14.823 12:32:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:14.823 12:32:13 -- common/autotest_common.sh@10 -- # set +x 00:04:14.823 ************************************ 00:04:14.823 START TEST json_config_extra_key 00:04:14.823 ************************************ 00:04:14.823 12:32:13 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:14.823 12:32:13 -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:14.823 12:32:13 -- nvmf/common.sh@7 -- # uname -s 00:04:14.823 12:32:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:14.823 12:32:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:14.823 12:32:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:14.823 12:32:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:14.823 12:32:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:14.823 12:32:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:14.823 12:32:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:14.823 12:32:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:14.823 12:32:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:14.823 12:32:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:14.823 12:32:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:04:14.823 12:32:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:04:14.823 12:32:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:14.823 12:32:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:14.823 12:32:13 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:14.823 12:32:13 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:14.823 12:32:13 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:14.823 12:32:13 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:14.823 12:32:13 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:14.823 12:32:13 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:14.823 12:32:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:14.823 12:32:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:14.823 12:32:13 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:14.823 12:32:13 -- paths/export.sh@5 -- # export PATH 00:04:14.823 12:32:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:14.823 12:32:13 -- nvmf/common.sh@47 -- # : 0 00:04:14.823 12:32:13 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:14.823 12:32:13 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:14.823 12:32:13 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:14.823 12:32:13 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:14.823 12:32:13 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:14.823 12:32:13 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:14.823 12:32:13 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:14.823 12:32:13 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:14.823 12:32:13 -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:14.823 12:32:13 -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:14.823 12:32:13 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:14.823 12:32:13 -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:14.823 12:32:13 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:14.823 12:32:13 -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:14.823 12:32:13 -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:14.823 12:32:13 -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:14.823 12:32:13 -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:14.823 12:32:13 -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:14.823 12:32:13 -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:14.823 INFO: launching applications... 
00:04:14.823 12:32:13 -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:14.823 12:32:13 -- json_config/common.sh@9 -- # local app=target 00:04:14.823 12:32:13 -- json_config/common.sh@10 -- # shift 00:04:14.823 12:32:13 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:14.823 12:32:13 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:14.823 12:32:13 -- json_config/common.sh@15 -- # local app_extra_params= 00:04:14.823 12:32:13 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:14.823 12:32:13 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:14.823 12:32:13 -- json_config/common.sh@22 -- # app_pid["$app"]=1055416 00:04:14.823 12:32:13 -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:14.823 12:32:13 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:14.823 Waiting for target to run... 00:04:14.823 12:32:13 -- json_config/common.sh@25 -- # waitforlisten 1055416 /var/tmp/spdk_tgt.sock 00:04:14.823 12:32:13 -- common/autotest_common.sh@817 -- # '[' -z 1055416 ']' 00:04:14.823 12:32:13 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:14.823 12:32:13 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:14.823 12:32:13 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:14.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:14.823 12:32:13 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:14.823 12:32:13 -- common/autotest_common.sh@10 -- # set +x 00:04:14.823 [2024-04-16 12:32:13.870221] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 00:04:14.823 [2024-04-16 12:32:13.870301] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1055416 ] 00:04:15.082 EAL: No free 2048 kB hugepages reported on node 1 00:04:15.340 [2024-04-16 12:32:14.241703] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:15.340 [2024-04-16 12:32:14.327481] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:15.905 12:32:14 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:15.905 12:32:14 -- common/autotest_common.sh@850 -- # return 0 00:04:15.905 12:32:14 -- json_config/common.sh@26 -- # echo '' 00:04:15.905 00:04:15.905 12:32:14 -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:15.905 INFO: shutting down applications... 
00:04:15.905 12:32:14 -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:15.905 12:32:14 -- json_config/common.sh@31 -- # local app=target 00:04:15.905 12:32:14 -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:15.905 12:32:14 -- json_config/common.sh@35 -- # [[ -n 1055416 ]] 00:04:15.905 12:32:14 -- json_config/common.sh@38 -- # kill -SIGINT 1055416 00:04:15.905 12:32:14 -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:15.905 12:32:14 -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:15.905 12:32:14 -- json_config/common.sh@41 -- # kill -0 1055416 00:04:15.905 12:32:14 -- json_config/common.sh@45 -- # sleep 0.5 00:04:16.471 12:32:15 -- json_config/common.sh@40 -- # (( i++ )) 00:04:16.471 12:32:15 -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:16.471 12:32:15 -- json_config/common.sh@41 -- # kill -0 1055416 00:04:16.471 12:32:15 -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:16.471 12:32:15 -- json_config/common.sh@43 -- # break 00:04:16.471 12:32:15 -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:16.471 12:32:15 -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:16.471 SPDK target shutdown done 00:04:16.471 12:32:15 -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:16.471 Success 00:04:16.471 00:04:16.471 real 0m1.583s 00:04:16.471 user 0m1.606s 00:04:16.471 sys 0m0.458s 00:04:16.471 12:32:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:16.471 12:32:15 -- common/autotest_common.sh@10 -- # set +x 00:04:16.471 ************************************ 00:04:16.471 END TEST json_config_extra_key 00:04:16.471 ************************************ 00:04:16.471 12:32:15 -- spdk/autotest.sh@170 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:16.471 12:32:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:16.471 12:32:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:16.471 12:32:15 -- common/autotest_common.sh@10 -- # set +x 00:04:16.471 ************************************ 00:04:16.471 START TEST alias_rpc 00:04:16.471 ************************************ 00:04:16.471 12:32:15 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:16.471 * Looking for test storage... 00:04:16.471 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:16.471 12:32:15 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:16.471 12:32:15 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1055620 00:04:16.471 12:32:15 -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:16.471 12:32:15 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1055620 00:04:16.471 12:32:15 -- common/autotest_common.sh@817 -- # '[' -z 1055620 ']' 00:04:16.471 12:32:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:16.471 12:32:15 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:16.471 12:32:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:16.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
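The shutdown dance traced in the json_config tests above follows one pattern: send SIGINT, then poll with the null signal until the pid disappears, giving up after 30 half-second tries. Roughly, with $pid standing in for the recorded app_pid:

  kill -SIGINT "$pid"                       # ask the target to exit cleanly
  for ((i = 0; i < 30; i++)); do
      kill -0 "$pid" 2> /dev/null || break  # signal 0 only probes whether the pid still exists
      sleep 0.5
  done
  echo 'SPDK target shutdown done'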
00:04:16.471 12:32:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:16.471 12:32:15 -- common/autotest_common.sh@10 -- # set +x 00:04:16.729 [2024-04-16 12:32:15.578508] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 00:04:16.729 [2024-04-16 12:32:15.578622] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1055620 ] 00:04:16.729 EAL: No free 2048 kB hugepages reported on node 1 00:04:16.729 [2024-04-16 12:32:15.644609] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:16.729 [2024-04-16 12:32:15.747596] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:16.987 12:32:16 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:16.987 12:32:16 -- common/autotest_common.sh@850 -- # return 0 00:04:16.987 12:32:16 -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:17.245 12:32:16 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1055620 00:04:17.245 12:32:16 -- common/autotest_common.sh@936 -- # '[' -z 1055620 ']' 00:04:17.245 12:32:16 -- common/autotest_common.sh@940 -- # kill -0 1055620 00:04:17.245 12:32:16 -- common/autotest_common.sh@941 -- # uname 00:04:17.245 12:32:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:17.245 12:32:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1055620 00:04:17.245 12:32:16 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:17.245 12:32:16 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:17.245 12:32:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1055620' 00:04:17.245 killing process with pid 1055620 00:04:17.245 12:32:16 -- common/autotest_common.sh@955 -- # kill 1055620 00:04:17.245 12:32:16 -- common/autotest_common.sh@960 -- # wait 1055620 00:04:17.812 00:04:17.812 real 0m1.287s 00:04:17.812 user 0m1.336s 00:04:17.812 sys 0m0.433s 00:04:17.812 12:32:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:17.812 12:32:16 -- common/autotest_common.sh@10 -- # set +x 00:04:17.812 ************************************ 00:04:17.812 END TEST alias_rpc 00:04:17.812 ************************************ 00:04:17.812 12:32:16 -- spdk/autotest.sh@172 -- # [[ 0 -eq 0 ]] 00:04:17.812 12:32:16 -- spdk/autotest.sh@173 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:17.812 12:32:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:17.812 12:32:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:17.812 12:32:16 -- common/autotest_common.sh@10 -- # set +x 00:04:17.812 ************************************ 00:04:17.812 START TEST spdkcli_tcp 00:04:17.812 ************************************ 00:04:17.812 12:32:16 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:18.071 * Looking for test storage... 
00:04:18.071 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:18.071 12:32:16 -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:18.071 12:32:16 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:18.071 12:32:16 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:18.071 12:32:16 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:18.071 12:32:16 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:18.071 12:32:16 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:18.071 12:32:16 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:18.071 12:32:16 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:18.071 12:32:16 -- common/autotest_common.sh@10 -- # set +x 00:04:18.071 12:32:16 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1055815 00:04:18.071 12:32:16 -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:18.071 12:32:16 -- spdkcli/tcp.sh@27 -- # waitforlisten 1055815 00:04:18.071 12:32:16 -- common/autotest_common.sh@817 -- # '[' -z 1055815 ']' 00:04:18.071 12:32:16 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:18.071 12:32:16 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:18.071 12:32:16 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:18.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:18.071 12:32:16 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:18.071 12:32:16 -- common/autotest_common.sh@10 -- # set +x 00:04:18.071 [2024-04-16 12:32:16.987789] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 
00:04:18.071 [2024-04-16 12:32:16.987886] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1055815 ] 00:04:18.071 EAL: No free 2048 kB hugepages reported on node 1 00:04:18.071 [2024-04-16 12:32:17.058140] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:18.329 [2024-04-16 12:32:17.164349] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:18.329 [2024-04-16 12:32:17.164353] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:18.588 12:32:17 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:18.588 12:32:17 -- common/autotest_common.sh@850 -- # return 0 00:04:18.588 12:32:17 -- spdkcli/tcp.sh@31 -- # socat_pid=1055944 00:04:18.588 12:32:17 -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:18.588 12:32:17 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:18.588 [ 00:04:18.588 "bdev_malloc_delete", 00:04:18.588 "bdev_malloc_create", 00:04:18.588 "bdev_null_resize", 00:04:18.588 "bdev_null_delete", 00:04:18.588 "bdev_null_create", 00:04:18.588 "bdev_nvme_cuse_unregister", 00:04:18.588 "bdev_nvme_cuse_register", 00:04:18.588 "bdev_opal_new_user", 00:04:18.588 "bdev_opal_set_lock_state", 00:04:18.588 "bdev_opal_delete", 00:04:18.588 "bdev_opal_get_info", 00:04:18.588 "bdev_opal_create", 00:04:18.588 "bdev_nvme_opal_revert", 00:04:18.588 "bdev_nvme_opal_init", 00:04:18.588 "bdev_nvme_send_cmd", 00:04:18.588 "bdev_nvme_get_path_iostat", 00:04:18.588 "bdev_nvme_get_mdns_discovery_info", 00:04:18.588 "bdev_nvme_stop_mdns_discovery", 00:04:18.588 "bdev_nvme_start_mdns_discovery", 00:04:18.588 "bdev_nvme_set_multipath_policy", 00:04:18.588 "bdev_nvme_set_preferred_path", 00:04:18.588 "bdev_nvme_get_io_paths", 00:04:18.588 "bdev_nvme_remove_error_injection", 00:04:18.588 "bdev_nvme_add_error_injection", 00:04:18.588 "bdev_nvme_get_discovery_info", 00:04:18.588 "bdev_nvme_stop_discovery", 00:04:18.588 "bdev_nvme_start_discovery", 00:04:18.588 "bdev_nvme_get_controller_health_info", 00:04:18.588 "bdev_nvme_disable_controller", 00:04:18.588 "bdev_nvme_enable_controller", 00:04:18.588 "bdev_nvme_reset_controller", 00:04:18.588 "bdev_nvme_get_transport_statistics", 00:04:18.588 "bdev_nvme_apply_firmware", 00:04:18.588 "bdev_nvme_detach_controller", 00:04:18.588 "bdev_nvme_get_controllers", 00:04:18.588 "bdev_nvme_attach_controller", 00:04:18.588 "bdev_nvme_set_hotplug", 00:04:18.588 "bdev_nvme_set_options", 00:04:18.588 "bdev_passthru_delete", 00:04:18.588 "bdev_passthru_create", 00:04:18.588 "bdev_lvol_grow_lvstore", 00:04:18.588 "bdev_lvol_get_lvols", 00:04:18.588 "bdev_lvol_get_lvstores", 00:04:18.588 "bdev_lvol_delete", 00:04:18.588 "bdev_lvol_set_read_only", 00:04:18.588 "bdev_lvol_resize", 00:04:18.588 "bdev_lvol_decouple_parent", 00:04:18.588 "bdev_lvol_inflate", 00:04:18.588 "bdev_lvol_rename", 00:04:18.588 "bdev_lvol_clone_bdev", 00:04:18.588 "bdev_lvol_clone", 00:04:18.588 "bdev_lvol_snapshot", 00:04:18.588 "bdev_lvol_create", 00:04:18.588 "bdev_lvol_delete_lvstore", 00:04:18.588 "bdev_lvol_rename_lvstore", 00:04:18.588 "bdev_lvol_create_lvstore", 00:04:18.588 "bdev_raid_set_options", 00:04:18.588 "bdev_raid_remove_base_bdev", 00:04:18.588 "bdev_raid_add_base_bdev", 00:04:18.588 "bdev_raid_delete", 00:04:18.588 "bdev_raid_create", 
00:04:18.588 "bdev_raid_get_bdevs", 00:04:18.588 "bdev_error_inject_error", 00:04:18.588 "bdev_error_delete", 00:04:18.588 "bdev_error_create", 00:04:18.588 "bdev_split_delete", 00:04:18.588 "bdev_split_create", 00:04:18.588 "bdev_delay_delete", 00:04:18.588 "bdev_delay_create", 00:04:18.588 "bdev_delay_update_latency", 00:04:18.588 "bdev_zone_block_delete", 00:04:18.588 "bdev_zone_block_create", 00:04:18.588 "blobfs_create", 00:04:18.588 "blobfs_detect", 00:04:18.588 "blobfs_set_cache_size", 00:04:18.588 "bdev_aio_delete", 00:04:18.588 "bdev_aio_rescan", 00:04:18.588 "bdev_aio_create", 00:04:18.588 "bdev_ftl_set_property", 00:04:18.588 "bdev_ftl_get_properties", 00:04:18.588 "bdev_ftl_get_stats", 00:04:18.588 "bdev_ftl_unmap", 00:04:18.588 "bdev_ftl_unload", 00:04:18.588 "bdev_ftl_delete", 00:04:18.588 "bdev_ftl_load", 00:04:18.588 "bdev_ftl_create", 00:04:18.588 "bdev_virtio_attach_controller", 00:04:18.588 "bdev_virtio_scsi_get_devices", 00:04:18.588 "bdev_virtio_detach_controller", 00:04:18.588 "bdev_virtio_blk_set_hotplug", 00:04:18.588 "bdev_iscsi_delete", 00:04:18.588 "bdev_iscsi_create", 00:04:18.588 "bdev_iscsi_set_options", 00:04:18.588 "accel_error_inject_error", 00:04:18.588 "ioat_scan_accel_module", 00:04:18.588 "dsa_scan_accel_module", 00:04:18.588 "iaa_scan_accel_module", 00:04:18.588 "vfu_virtio_create_scsi_endpoint", 00:04:18.588 "vfu_virtio_scsi_remove_target", 00:04:18.588 "vfu_virtio_scsi_add_target", 00:04:18.588 "vfu_virtio_create_blk_endpoint", 00:04:18.588 "vfu_virtio_delete_endpoint", 00:04:18.588 "keyring_file_remove_key", 00:04:18.588 "keyring_file_add_key", 00:04:18.588 "iscsi_set_options", 00:04:18.588 "iscsi_get_auth_groups", 00:04:18.588 "iscsi_auth_group_remove_secret", 00:04:18.588 "iscsi_auth_group_add_secret", 00:04:18.588 "iscsi_delete_auth_group", 00:04:18.588 "iscsi_create_auth_group", 00:04:18.588 "iscsi_set_discovery_auth", 00:04:18.588 "iscsi_get_options", 00:04:18.588 "iscsi_target_node_request_logout", 00:04:18.588 "iscsi_target_node_set_redirect", 00:04:18.588 "iscsi_target_node_set_auth", 00:04:18.588 "iscsi_target_node_add_lun", 00:04:18.588 "iscsi_get_stats", 00:04:18.588 "iscsi_get_connections", 00:04:18.588 "iscsi_portal_group_set_auth", 00:04:18.588 "iscsi_start_portal_group", 00:04:18.588 "iscsi_delete_portal_group", 00:04:18.588 "iscsi_create_portal_group", 00:04:18.588 "iscsi_get_portal_groups", 00:04:18.588 "iscsi_delete_target_node", 00:04:18.588 "iscsi_target_node_remove_pg_ig_maps", 00:04:18.588 "iscsi_target_node_add_pg_ig_maps", 00:04:18.588 "iscsi_create_target_node", 00:04:18.588 "iscsi_get_target_nodes", 00:04:18.588 "iscsi_delete_initiator_group", 00:04:18.588 "iscsi_initiator_group_remove_initiators", 00:04:18.588 "iscsi_initiator_group_add_initiators", 00:04:18.588 "iscsi_create_initiator_group", 00:04:18.588 "iscsi_get_initiator_groups", 00:04:18.588 "nvmf_set_crdt", 00:04:18.588 "nvmf_set_config", 00:04:18.588 "nvmf_set_max_subsystems", 00:04:18.588 "nvmf_subsystem_get_listeners", 00:04:18.588 "nvmf_subsystem_get_qpairs", 00:04:18.588 "nvmf_subsystem_get_controllers", 00:04:18.588 "nvmf_get_stats", 00:04:18.588 "nvmf_get_transports", 00:04:18.588 "nvmf_create_transport", 00:04:18.588 "nvmf_get_targets", 00:04:18.588 "nvmf_delete_target", 00:04:18.588 "nvmf_create_target", 00:04:18.588 "nvmf_subsystem_allow_any_host", 00:04:18.588 "nvmf_subsystem_remove_host", 00:04:18.588 "nvmf_subsystem_add_host", 00:04:18.588 "nvmf_ns_remove_host", 00:04:18.588 "nvmf_ns_add_host", 00:04:18.588 "nvmf_subsystem_remove_ns", 00:04:18.588 
"nvmf_subsystem_add_ns", 00:04:18.588 "nvmf_subsystem_listener_set_ana_state", 00:04:18.588 "nvmf_discovery_get_referrals", 00:04:18.588 "nvmf_discovery_remove_referral", 00:04:18.588 "nvmf_discovery_add_referral", 00:04:18.588 "nvmf_subsystem_remove_listener", 00:04:18.588 "nvmf_subsystem_add_listener", 00:04:18.588 "nvmf_delete_subsystem", 00:04:18.588 "nvmf_create_subsystem", 00:04:18.588 "nvmf_get_subsystems", 00:04:18.588 "env_dpdk_get_mem_stats", 00:04:18.588 "nbd_get_disks", 00:04:18.588 "nbd_stop_disk", 00:04:18.588 "nbd_start_disk", 00:04:18.588 "ublk_recover_disk", 00:04:18.588 "ublk_get_disks", 00:04:18.588 "ublk_stop_disk", 00:04:18.588 "ublk_start_disk", 00:04:18.588 "ublk_destroy_target", 00:04:18.588 "ublk_create_target", 00:04:18.588 "virtio_blk_create_transport", 00:04:18.588 "virtio_blk_get_transports", 00:04:18.588 "vhost_controller_set_coalescing", 00:04:18.588 "vhost_get_controllers", 00:04:18.589 "vhost_delete_controller", 00:04:18.589 "vhost_create_blk_controller", 00:04:18.589 "vhost_scsi_controller_remove_target", 00:04:18.589 "vhost_scsi_controller_add_target", 00:04:18.589 "vhost_start_scsi_controller", 00:04:18.589 "vhost_create_scsi_controller", 00:04:18.589 "thread_set_cpumask", 00:04:18.589 "framework_get_scheduler", 00:04:18.589 "framework_set_scheduler", 00:04:18.589 "framework_get_reactors", 00:04:18.589 "thread_get_io_channels", 00:04:18.589 "thread_get_pollers", 00:04:18.589 "thread_get_stats", 00:04:18.589 "framework_monitor_context_switch", 00:04:18.589 "spdk_kill_instance", 00:04:18.589 "log_enable_timestamps", 00:04:18.589 "log_get_flags", 00:04:18.589 "log_clear_flag", 00:04:18.589 "log_set_flag", 00:04:18.589 "log_get_level", 00:04:18.589 "log_set_level", 00:04:18.589 "log_get_print_level", 00:04:18.589 "log_set_print_level", 00:04:18.589 "framework_enable_cpumask_locks", 00:04:18.589 "framework_disable_cpumask_locks", 00:04:18.589 "framework_wait_init", 00:04:18.589 "framework_start_init", 00:04:18.589 "scsi_get_devices", 00:04:18.589 "bdev_get_histogram", 00:04:18.589 "bdev_enable_histogram", 00:04:18.589 "bdev_set_qos_limit", 00:04:18.589 "bdev_set_qd_sampling_period", 00:04:18.589 "bdev_get_bdevs", 00:04:18.589 "bdev_reset_iostat", 00:04:18.589 "bdev_get_iostat", 00:04:18.589 "bdev_examine", 00:04:18.589 "bdev_wait_for_examine", 00:04:18.589 "bdev_set_options", 00:04:18.589 "notify_get_notifications", 00:04:18.589 "notify_get_types", 00:04:18.589 "accel_get_stats", 00:04:18.589 "accel_set_options", 00:04:18.589 "accel_set_driver", 00:04:18.589 "accel_crypto_key_destroy", 00:04:18.589 "accel_crypto_keys_get", 00:04:18.589 "accel_crypto_key_create", 00:04:18.589 "accel_assign_opc", 00:04:18.589 "accel_get_module_info", 00:04:18.589 "accel_get_opc_assignments", 00:04:18.589 "vmd_rescan", 00:04:18.589 "vmd_remove_device", 00:04:18.589 "vmd_enable", 00:04:18.589 "sock_set_default_impl", 00:04:18.589 "sock_impl_set_options", 00:04:18.589 "sock_impl_get_options", 00:04:18.589 "iobuf_get_stats", 00:04:18.589 "iobuf_set_options", 00:04:18.589 "keyring_get_keys", 00:04:18.589 "framework_get_pci_devices", 00:04:18.589 "framework_get_config", 00:04:18.589 "framework_get_subsystems", 00:04:18.589 "vfu_tgt_set_base_path", 00:04:18.589 "trace_get_info", 00:04:18.589 "trace_get_tpoint_group_mask", 00:04:18.589 "trace_disable_tpoint_group", 00:04:18.589 "trace_enable_tpoint_group", 00:04:18.589 "trace_clear_tpoint_mask", 00:04:18.589 "trace_set_tpoint_mask", 00:04:18.589 "spdk_get_version", 00:04:18.589 "rpc_get_methods" 00:04:18.589 ] 00:04:18.847 12:32:17 -- 
spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:18.847 12:32:17 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:18.847 12:32:17 -- common/autotest_common.sh@10 -- # set +x 00:04:18.847 12:32:17 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:18.847 12:32:17 -- spdkcli/tcp.sh@38 -- # killprocess 1055815 00:04:18.847 12:32:17 -- common/autotest_common.sh@936 -- # '[' -z 1055815 ']' 00:04:18.847 12:32:17 -- common/autotest_common.sh@940 -- # kill -0 1055815 00:04:18.847 12:32:17 -- common/autotest_common.sh@941 -- # uname 00:04:18.847 12:32:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:18.847 12:32:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1055815 00:04:18.847 12:32:17 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:18.847 12:32:17 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:18.847 12:32:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1055815' 00:04:18.847 killing process with pid 1055815 00:04:18.847 12:32:17 -- common/autotest_common.sh@955 -- # kill 1055815 00:04:18.847 12:32:17 -- common/autotest_common.sh@960 -- # wait 1055815 00:04:19.105 00:04:19.105 real 0m1.286s 00:04:19.105 user 0m2.198s 00:04:19.105 sys 0m0.473s 00:04:19.105 12:32:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:19.105 12:32:18 -- common/autotest_common.sh@10 -- # set +x 00:04:19.105 ************************************ 00:04:19.105 END TEST spdkcli_tcp 00:04:19.105 ************************************ 00:04:19.363 12:32:18 -- spdk/autotest.sh@176 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:19.364 12:32:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:19.364 12:32:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:19.364 12:32:18 -- common/autotest_common.sh@10 -- # set +x 00:04:19.364 ************************************ 00:04:19.364 START TEST dpdk_mem_utility 00:04:19.364 ************************************ 00:04:19.364 12:32:18 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:19.364 * Looking for test storage... 00:04:19.364 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:19.364 12:32:18 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:19.364 12:32:18 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1056143 00:04:19.364 12:32:18 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:19.364 12:32:18 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1056143 00:04:19.364 12:32:18 -- common/autotest_common.sh@817 -- # '[' -z 1056143 ']' 00:04:19.364 12:32:18 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:19.364 12:32:18 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:19.364 12:32:18 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:19.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
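waitforlisten, used for this target and the previous one, simply polls until the freshly started process answers on its RPC socket. A rough standalone equivalent, assuming the default socket path; this is a sketch, not the actual autotest_common.sh implementation:

    ./build/bin/spdk_tgt &
    # poll with a cheap RPC until the socket accepts connections
    until ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
        sleep 0.2
    done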
00:04:19.364 12:32:18 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:19.364 12:32:18 -- common/autotest_common.sh@10 -- # set +x 00:04:19.364 [2024-04-16 12:32:18.395728] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 00:04:19.364 [2024-04-16 12:32:18.395820] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1056143 ] 00:04:19.364 EAL: No free 2048 kB hugepages reported on node 1 00:04:19.622 [2024-04-16 12:32:18.464039] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:19.622 [2024-04-16 12:32:18.573759] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:19.880 12:32:18 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:19.880 12:32:18 -- common/autotest_common.sh@850 -- # return 0 00:04:19.880 12:32:18 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:19.880 12:32:18 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:19.880 12:32:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:19.880 12:32:18 -- common/autotest_common.sh@10 -- # set +x 00:04:19.880 { 00:04:19.880 "filename": "/tmp/spdk_mem_dump.txt" 00:04:19.880 } 00:04:19.880 12:32:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:19.880 12:32:18 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:19.881 DPDK memory size 814.000000 MiB in 1 heap(s) 00:04:19.881 1 heaps totaling size 814.000000 MiB 00:04:19.881 size: 814.000000 MiB heap id: 0 00:04:19.881 end heaps---------- 00:04:19.881 8 mempools totaling size 598.116089 MiB 00:04:19.881 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:19.881 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:19.881 size: 84.521057 MiB name: bdev_io_1056143 00:04:19.881 size: 51.011292 MiB name: evtpool_1056143 00:04:19.881 size: 50.003479 MiB name: msgpool_1056143 00:04:19.881 size: 21.763794 MiB name: PDU_Pool 00:04:19.881 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:19.881 size: 0.026123 MiB name: Session_Pool 00:04:19.881 end mempools------- 00:04:19.881 6 memzones totaling size 4.142822 MiB 00:04:19.881 size: 1.000366 MiB name: RG_ring_0_1056143 00:04:19.881 size: 1.000366 MiB name: RG_ring_1_1056143 00:04:19.881 size: 1.000366 MiB name: RG_ring_4_1056143 00:04:19.881 size: 1.000366 MiB name: RG_ring_5_1056143 00:04:19.881 size: 0.125366 MiB name: RG_ring_2_1056143 00:04:19.881 size: 0.015991 MiB name: RG_ring_3_1056143 00:04:19.881 end memzones------- 00:04:19.881 12:32:18 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:19.881 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:04:19.881 list of free elements. 
size: 12.519348 MiB 00:04:19.881 element at address: 0x200000400000 with size: 1.999512 MiB 00:04:19.881 element at address: 0x200018e00000 with size: 0.999878 MiB 00:04:19.881 element at address: 0x200019000000 with size: 0.999878 MiB 00:04:19.881 element at address: 0x200003e00000 with size: 0.996277 MiB 00:04:19.881 element at address: 0x200031c00000 with size: 0.994446 MiB 00:04:19.881 element at address: 0x200013800000 with size: 0.978699 MiB 00:04:19.881 element at address: 0x200007000000 with size: 0.959839 MiB 00:04:19.881 element at address: 0x200019200000 with size: 0.936584 MiB 00:04:19.881 element at address: 0x200000200000 with size: 0.841614 MiB 00:04:19.881 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:04:19.881 element at address: 0x20000b200000 with size: 0.490723 MiB 00:04:19.881 element at address: 0x200000800000 with size: 0.487793 MiB 00:04:19.881 element at address: 0x200019400000 with size: 0.485657 MiB 00:04:19.881 element at address: 0x200027e00000 with size: 0.410034 MiB 00:04:19.881 element at address: 0x200003a00000 with size: 0.355530 MiB 00:04:19.881 list of standard malloc elements. size: 199.218079 MiB 00:04:19.881 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:04:19.881 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:04:19.881 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:19.881 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:04:19.881 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:19.881 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:19.881 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:04:19.881 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:19.881 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:04:19.881 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:04:19.881 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:04:19.881 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:04:19.881 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:04:19.881 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:04:19.881 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:19.881 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:19.881 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:04:19.881 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:04:19.881 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:04:19.881 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:04:19.881 element at address: 0x200003adb300 with size: 0.000183 MiB 00:04:19.881 element at address: 0x200003adb500 with size: 0.000183 MiB 00:04:19.881 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:04:19.881 element at address: 0x200003affa80 with size: 0.000183 MiB 00:04:19.881 element at address: 0x200003affb40 with size: 0.000183 MiB 00:04:19.881 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:04:19.881 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:04:19.881 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:04:19.881 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:04:19.881 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:04:19.881 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:04:19.881 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:04:19.881 element at address: 0x2000192efd00 with size: 0.000183 MiB 
00:04:19.881 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:04:19.881 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:04:19.881 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:04:19.881 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:04:19.881 element at address: 0x200027e69040 with size: 0.000183 MiB 00:04:19.881 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:04:19.881 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:04:19.881 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:04:19.881 list of memzone associated elements. size: 602.262573 MiB 00:04:19.881 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:04:19.881 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:19.881 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:04:19.881 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:19.881 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:04:19.881 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_1056143_0 00:04:19.881 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:04:19.881 associated memzone info: size: 48.002930 MiB name: MP_evtpool_1056143_0 00:04:19.881 element at address: 0x200003fff380 with size: 48.003052 MiB 00:04:19.881 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1056143_0 00:04:19.881 element at address: 0x2000195be940 with size: 20.255554 MiB 00:04:19.881 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:19.881 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:04:19.881 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:19.881 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:04:19.881 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_1056143 00:04:19.881 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:04:19.881 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1056143 00:04:19.881 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:19.881 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1056143 00:04:19.881 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:04:19.881 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:19.881 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:04:19.881 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:19.881 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:04:19.881 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:19.881 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:04:19.881 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:19.881 element at address: 0x200003eff180 with size: 1.000488 MiB 00:04:19.881 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1056143 00:04:19.881 element at address: 0x200003affc00 with size: 1.000488 MiB 00:04:19.881 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1056143 00:04:19.881 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:04:19.881 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1056143 00:04:19.881 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:04:19.881 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1056143 00:04:19.881 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:04:19.881 associated 
memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1056143 00:04:19.881 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:04:19.881 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:19.881 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:04:19.881 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:19.881 element at address: 0x20001947c540 with size: 0.250488 MiB 00:04:19.881 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:19.881 element at address: 0x200003adf880 with size: 0.125488 MiB 00:04:19.881 associated memzone info: size: 0.125366 MiB name: RG_ring_2_1056143 00:04:19.881 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:04:19.881 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:19.881 element at address: 0x200027e69100 with size: 0.023743 MiB 00:04:19.881 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:19.881 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:04:19.881 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1056143 00:04:19.881 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:04:19.881 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:19.881 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:04:19.881 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1056143 00:04:19.881 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:04:19.881 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1056143 00:04:19.881 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:04:19.881 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:19.881 12:32:18 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:19.881 12:32:18 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1056143 00:04:19.881 12:32:18 -- common/autotest_common.sh@936 -- # '[' -z 1056143 ']' 00:04:19.881 12:32:18 -- common/autotest_common.sh@940 -- # kill -0 1056143 00:04:19.881 12:32:18 -- common/autotest_common.sh@941 -- # uname 00:04:19.881 12:32:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:19.881 12:32:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1056143 00:04:20.140 12:32:18 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:20.140 12:32:18 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:20.140 12:32:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1056143' 00:04:20.140 killing process with pid 1056143 00:04:20.140 12:32:18 -- common/autotest_common.sh@955 -- # kill 1056143 00:04:20.140 12:32:18 -- common/autotest_common.sh@960 -- # wait 1056143 00:04:20.398 00:04:20.398 real 0m1.134s 00:04:20.398 user 0m1.091s 00:04:20.398 sys 0m0.406s 00:04:20.398 12:32:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:20.398 12:32:19 -- common/autotest_common.sh@10 -- # set +x 00:04:20.398 ************************************ 00:04:20.398 END TEST dpdk_mem_utility 00:04:20.398 ************************************ 00:04:20.398 12:32:19 -- spdk/autotest.sh@177 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:20.398 12:32:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:20.398 12:32:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:20.398 12:32:19 -- common/autotest_common.sh@10 -- # set +x 
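To recap the sequence above: the env_dpdk_get_mem_stats RPC makes the target write /tmp/spdk_mem_dump.txt, and dpdk_mem_info.py then summarizes that dump, first as heap/mempool/memzone totals and then, with -m 0, as the per-element breakdown of heap id 0. The same three steps against any running target:

    ./scripts/rpc.py env_dpdk_get_mem_stats   # target writes /tmp/spdk_mem_dump.txt
    ./scripts/dpdk_mem_info.py                # totals per heap, mempool and memzone
    ./scripts/dpdk_mem_info.py -m 0           # free/busy element detail for heap 0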
00:04:20.656 ************************************ 00:04:20.656 START TEST event 00:04:20.656 ************************************ 00:04:20.656 12:32:19 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:20.656 * Looking for test storage... 00:04:20.656 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:20.656 12:32:19 -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:20.656 12:32:19 -- bdev/nbd_common.sh@6 -- # set -e 00:04:20.656 12:32:19 -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:20.656 12:32:19 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:04:20.656 12:32:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:20.656 12:32:19 -- common/autotest_common.sh@10 -- # set +x 00:04:20.656 ************************************ 00:04:20.656 START TEST event_perf 00:04:20.656 ************************************ 00:04:20.656 12:32:19 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:20.656 Running I/O for 1 seconds...[2024-04-16 12:32:19.702192] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 00:04:20.656 [2024-04-16 12:32:19.702243] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1056354 ] 00:04:20.914 EAL: No free 2048 kB hugepages reported on node 1 00:04:20.914 [2024-04-16 12:32:19.775445] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:20.914 [2024-04-16 12:32:19.893261] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:20.914 [2024-04-16 12:32:19.893312] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:20.914 [2024-04-16 12:32:19.893428] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:04:20.914 [2024-04-16 12:32:19.893431] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:20.914 [2024-04-16 12:32:19.893640] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:04:22.287 Running I/O for 1 seconds... 00:04:22.287 lcore 0: 233185 00:04:22.287 lcore 1: 233185 00:04:22.287 lcore 2: 233185 00:04:22.287 lcore 3: 233185 00:04:22.287 done. 
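Each 'lcore N' line above appears to be the number of events that reactor executed during the 1-second run, so the per-core counters can be summed for an aggregate rate. A hypothetical post-processing one-liner, assuming the output was captured to a file named event_perf.log:

    awk '/^lcore [0-9]+:/ {sum += $3} END {print sum, "events per second total"}' event_perf.log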
00:04:22.287 00:04:22.287 real 0m1.327s 00:04:22.287 user 0m4.229s 00:04:22.287 sys 0m0.092s 00:04:22.287 12:32:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:22.287 12:32:21 -- common/autotest_common.sh@10 -- # set +x 00:04:22.287 ************************************ 00:04:22.287 END TEST event_perf 00:04:22.287 ************************************ 00:04:22.287 12:32:21 -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:22.287 12:32:21 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:04:22.287 12:32:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:22.287 12:32:21 -- common/autotest_common.sh@10 -- # set +x 00:04:22.287 ************************************ 00:04:22.287 START TEST event_reactor 00:04:22.287 ************************************ 00:04:22.287 12:32:21 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:22.287 [2024-04-16 12:32:21.151909] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 00:04:22.287 [2024-04-16 12:32:21.151972] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1056518 ] 00:04:22.287 EAL: No free 2048 kB hugepages reported on node 1 00:04:22.287 [2024-04-16 12:32:21.225000] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:22.287 [2024-04-16 12:32:21.338589] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:22.287 [2024-04-16 12:32:21.338703] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:04:23.666 test_start 00:04:23.666 oneshot 00:04:23.666 tick 100 00:04:23.666 tick 100 00:04:23.666 tick 250 00:04:23.666 tick 100 00:04:23.666 tick 100 00:04:23.666 tick 100 00:04:23.666 tick 250 00:04:23.666 tick 500 00:04:23.666 tick 100 00:04:23.666 tick 100 00:04:23.666 tick 250 00:04:23.666 tick 100 00:04:23.666 tick 100 00:04:23.666 test_end 00:04:23.666 00:04:23.666 real 0m1.317s 00:04:23.666 user 0m1.223s 00:04:23.666 sys 0m0.090s 00:04:23.666 12:32:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:23.666 12:32:22 -- common/autotest_common.sh@10 -- # set +x 00:04:23.666 ************************************ 00:04:23.666 END TEST event_reactor 00:04:23.666 ************************************ 00:04:23.666 12:32:22 -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:23.666 12:32:22 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:04:23.666 12:32:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:23.666 12:32:22 -- common/autotest_common.sh@10 -- # set +x 00:04:23.666 ************************************ 00:04:23.666 START TEST event_reactor_perf 00:04:23.666 ************************************ 00:04:23.666 12:32:22 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:23.666 [2024-04-16 12:32:22.582331] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 
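Every sub-test in this log goes through the run_test helper, which prints the START/END banner pairs and the real/user/sys timing seen around each block. A rough sketch of the equivalent shell; the real helper in autotest_common.sh does more bookkeeping:

    run_test() {
        local name=$1; shift
        echo "************ START TEST $name ************"
        time "$@"
        echo "************ END TEST $name ************"
    }
    run_test event_reactor_perf ./test/event/reactor_perf/reactor_perf -t 1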
00:04:23.666 [2024-04-16 12:32:22.582399] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1056688 ] 00:04:23.666 EAL: No free 2048 kB hugepages reported on node 1 00:04:23.666 [2024-04-16 12:32:22.655761] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:23.924 [2024-04-16 12:32:22.770898] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:23.924 [2024-04-16 12:32:22.771005] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:04:24.856 test_start 00:04:24.856 test_end 00:04:24.856 Performance: 356974 events per second 00:04:24.856 00:04:24.856 real 0m1.321s 00:04:24.856 user 0m1.226s 00:04:24.856 sys 0m0.090s 00:04:24.856 12:32:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:24.856 12:32:23 -- common/autotest_common.sh@10 -- # set +x 00:04:24.856 ************************************ 00:04:24.856 END TEST event_reactor_perf 00:04:24.856 ************************************ 00:04:24.856 12:32:23 -- event/event.sh@49 -- # uname -s 00:04:24.856 12:32:23 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:24.856 12:32:23 -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:24.856 12:32:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:24.856 12:32:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:24.856 12:32:23 -- common/autotest_common.sh@10 -- # set +x 00:04:25.115 ************************************ 00:04:25.115 START TEST event_scheduler 00:04:25.115 ************************************ 00:04:25.115 12:32:24 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:25.115 * Looking for test storage... 00:04:25.115 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:04:25.115 12:32:24 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:25.115 12:32:24 -- scheduler/scheduler.sh@35 -- # scheduler_pid=1056991 00:04:25.115 12:32:24 -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:25.115 12:32:24 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:25.115 12:32:24 -- scheduler/scheduler.sh@37 -- # waitforlisten 1056991 00:04:25.115 12:32:24 -- common/autotest_common.sh@817 -- # '[' -z 1056991 ']' 00:04:25.115 12:32:24 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:25.115 12:32:24 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:25.115 12:32:24 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:25.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:25.115 12:32:24 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:25.115 12:32:24 -- common/autotest_common.sh@10 -- # set +x 00:04:25.115 [2024-04-16 12:32:24.114435] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 
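The scheduler app above is launched with --wait-for-rpc, so subsystem initialization is held back until the test drives it over RPC: it selects the dynamic scheduler first and only then calls framework_start_init, as the records below show. The same handshake by hand, against any target started with --wait-for-rpc:

    ./scripts/rpc.py framework_set_scheduler dynamic
    ./scripts/rpc.py framework_start_init
    ./scripts/rpc.py framework_wait_init   # optional: block until init has finished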
00:04:25.115 [2024-04-16 12:32:24.114509] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1056991 ] 00:04:25.115 EAL: No free 2048 kB hugepages reported on node 1 00:04:25.372 [2024-04-16 12:32:24.185775] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:25.372 [2024-04-16 12:32:24.294240] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:25.372 [2024-04-16 12:32:24.294296] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:25.372 [2024-04-16 12:32:24.294362] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:04:25.372 [2024-04-16 12:32:24.294365] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:25.372 12:32:24 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:25.373 12:32:24 -- common/autotest_common.sh@850 -- # return 0 00:04:25.373 12:32:24 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:25.373 12:32:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:25.373 12:32:24 -- common/autotest_common.sh@10 -- # set +x 00:04:25.373 POWER: Env isn't set yet! 00:04:25.373 POWER: Attempting to initialise ACPI cpufreq power management... 00:04:25.373 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_available_frequencies 00:04:25.373 POWER: Cannot get available frequencies of lcore 0 00:04:25.373 POWER: Attempting to initialise PSTAT power management... 00:04:25.373 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:04:25.373 POWER: Initialized successfully for lcore 0 power management 00:04:25.373 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:04:25.373 POWER: Initialized successfully for lcore 1 power management 00:04:25.373 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:04:25.373 POWER: Initialized successfully for lcore 2 power management 00:04:25.373 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:04:25.373 POWER: Initialized successfully for lcore 3 power management 00:04:25.373 12:32:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:25.373 12:32:24 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:25.373 12:32:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:25.373 12:32:24 -- common/autotest_common.sh@10 -- # set +x 00:04:25.632 [2024-04-16 12:32:24.458475] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
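The scheduler_create_thread test that follows drives everything through an rpc.py plugin: scheduler_thread_create takes a thread name, an optional cpumask and a target active percentage, and returns a thread id that scheduler_thread_set_active and scheduler_thread_delete then operate on. One call from the sequence below, assuming the scheduler_plugin module shipped with test/event/scheduler is importable:

    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100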
00:04:25.632 12:32:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:25.632 12:32:24 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:25.632 12:32:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:25.632 12:32:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:25.632 12:32:24 -- common/autotest_common.sh@10 -- # set +x 00:04:25.632 ************************************ 00:04:25.632 START TEST scheduler_create_thread 00:04:25.632 ************************************ 00:04:25.632 12:32:24 -- common/autotest_common.sh@1111 -- # scheduler_create_thread 00:04:25.632 12:32:24 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:25.632 12:32:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:25.632 12:32:24 -- common/autotest_common.sh@10 -- # set +x 00:04:25.632 2 00:04:25.632 12:32:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:25.632 12:32:24 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:25.632 12:32:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:25.632 12:32:24 -- common/autotest_common.sh@10 -- # set +x 00:04:25.632 3 00:04:25.632 12:32:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:25.632 12:32:24 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:25.632 12:32:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:25.632 12:32:24 -- common/autotest_common.sh@10 -- # set +x 00:04:25.632 4 00:04:25.632 12:32:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:25.632 12:32:24 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:25.632 12:32:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:25.632 12:32:24 -- common/autotest_common.sh@10 -- # set +x 00:04:25.632 5 00:04:25.632 12:32:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:25.632 12:32:24 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:25.632 12:32:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:25.632 12:32:24 -- common/autotest_common.sh@10 -- # set +x 00:04:25.632 6 00:04:25.632 12:32:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:25.632 12:32:24 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:25.632 12:32:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:25.632 12:32:24 -- common/autotest_common.sh@10 -- # set +x 00:04:25.632 7 00:04:25.632 12:32:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:25.632 12:32:24 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:25.632 12:32:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:25.632 12:32:24 -- common/autotest_common.sh@10 -- # set +x 00:04:25.632 8 00:04:25.632 12:32:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:25.632 12:32:24 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:25.632 12:32:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:25.632 12:32:24 -- common/autotest_common.sh@10 -- # set +x 00:04:25.632 9 00:04:25.632 
12:32:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:25.632 12:32:24 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:25.632 12:32:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:25.632 12:32:24 -- common/autotest_common.sh@10 -- # set +x 00:04:25.632 10 00:04:25.632 12:32:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:25.632 12:32:24 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:25.632 12:32:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:25.632 12:32:24 -- common/autotest_common.sh@10 -- # set +x 00:04:25.632 12:32:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:25.632 12:32:24 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:25.632 12:32:24 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:25.632 12:32:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:25.632 12:32:24 -- common/autotest_common.sh@10 -- # set +x 00:04:26.565 12:32:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:26.565 12:32:25 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:26.565 12:32:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:26.565 12:32:25 -- common/autotest_common.sh@10 -- # set +x 00:04:27.935 12:32:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:27.935 12:32:26 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:27.935 12:32:26 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:27.935 12:32:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:27.935 12:32:26 -- common/autotest_common.sh@10 -- # set +x 00:04:29.305 12:32:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:29.305 00:04:29.305 real 0m3.380s 00:04:29.305 user 0m0.013s 00:04:29.305 sys 0m0.002s 00:04:29.305 12:32:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:29.305 12:32:27 -- common/autotest_common.sh@10 -- # set +x 00:04:29.305 ************************************ 00:04:29.305 END TEST scheduler_create_thread 00:04:29.305 ************************************ 00:04:29.305 12:32:27 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:29.305 12:32:27 -- scheduler/scheduler.sh@46 -- # killprocess 1056991 00:04:29.305 12:32:27 -- common/autotest_common.sh@936 -- # '[' -z 1056991 ']' 00:04:29.305 12:32:27 -- common/autotest_common.sh@940 -- # kill -0 1056991 00:04:29.305 12:32:27 -- common/autotest_common.sh@941 -- # uname 00:04:29.305 12:32:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:29.305 12:32:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1056991 00:04:29.305 12:32:27 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:04:29.305 12:32:27 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:04:29.305 12:32:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1056991' 00:04:29.305 killing process with pid 1056991 00:04:29.305 12:32:27 -- common/autotest_common.sh@955 -- # kill 1056991 00:04:29.305 12:32:27 -- common/autotest_common.sh@960 -- # wait 1056991 00:04:29.305 [2024-04-16 12:32:28.319588] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
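Bracketing the test, the dynamic scheduler's power management switched every lcore's cpufreq governor to 'performance' at startup and restores the original governors on shutdown, as the POWER lines just below show. A hypothetical spot-check through the same sysfs tree the EAL probes, assuming cores 0-3:

    for g in /sys/devices/system/cpu/cpu{0..3}/cpufreq/scaling_governor; do
        printf '%s: %s\n' "$g" "$(cat "$g")"
    done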
00:04:29.564 POWER: Power management governor of lcore 0 has been set to 'userspace' successfully 00:04:29.564 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:04:29.564 POWER: Power management governor of lcore 1 has been set to 'schedutil' successfully 00:04:29.564 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:04:29.564 POWER: Power management governor of lcore 2 has been set to 'schedutil' successfully 00:04:29.564 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:04:29.564 POWER: Power management governor of lcore 3 has been set to 'schedutil' successfully 00:04:29.564 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:04:29.564 00:04:29.564 real 0m4.612s 00:04:29.564 user 0m8.124s 00:04:29.564 sys 0m0.386s 00:04:29.564 12:32:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:29.564 12:32:28 -- common/autotest_common.sh@10 -- # set +x 00:04:29.564 ************************************ 00:04:29.564 END TEST event_scheduler 00:04:29.564 ************************************ 00:04:29.823 12:32:28 -- event/event.sh@51 -- # modprobe -n nbd 00:04:29.823 12:32:28 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:29.823 12:32:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:29.823 12:32:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:29.823 12:32:28 -- common/autotest_common.sh@10 -- # set +x 00:04:29.823 ************************************ 00:04:29.823 START TEST app_repeat 00:04:29.823 ************************************ 00:04:29.823 12:32:28 -- common/autotest_common.sh@1111 -- # app_repeat_test 00:04:29.823 12:32:28 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:29.823 12:32:28 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:29.823 12:32:28 -- event/event.sh@13 -- # local nbd_list 00:04:29.823 12:32:28 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:29.823 12:32:28 -- event/event.sh@14 -- # local bdev_list 00:04:29.823 12:32:28 -- event/event.sh@15 -- # local repeat_times=4 00:04:29.823 12:32:28 -- event/event.sh@17 -- # modprobe nbd 00:04:29.823 12:32:28 -- event/event.sh@19 -- # repeat_pid=1057593 00:04:29.823 12:32:28 -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:29.823 12:32:28 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:29.823 12:32:28 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1057593' 00:04:29.823 Process app_repeat pid: 1057593 00:04:29.823 12:32:28 -- event/event.sh@23 -- # for i in {0..2} 00:04:29.823 12:32:28 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:29.823 spdk_app_start Round 0 00:04:29.823 12:32:28 -- event/event.sh@25 -- # waitforlisten 1057593 /var/tmp/spdk-nbd.sock 00:04:29.823 12:32:28 -- common/autotest_common.sh@817 -- # '[' -z 1057593 ']' 00:04:29.823 12:32:28 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:29.823 12:32:28 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:29.823 12:32:28 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:04:29.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:29.823 12:32:28 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:29.823 12:32:28 -- common/autotest_common.sh@10 -- # set +x 00:04:29.823 [2024-04-16 12:32:28.756900] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 00:04:29.823 [2024-04-16 12:32:28.756975] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1057593 ] 00:04:29.823 EAL: No free 2048 kB hugepages reported on node 1 00:04:29.823 [2024-04-16 12:32:28.824095] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:30.081 [2024-04-16 12:32:28.933113] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:30.081 [2024-04-16 12:32:28.933117] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:30.081 12:32:29 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:30.081 12:32:29 -- common/autotest_common.sh@850 -- # return 0 00:04:30.081 12:32:29 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:30.339 Malloc0 00:04:30.339 12:32:29 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:30.597 Malloc1 00:04:30.597 12:32:29 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:30.597 12:32:29 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:30.597 12:32:29 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:30.597 12:32:29 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:30.597 12:32:29 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:30.597 12:32:29 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:30.597 12:32:29 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:30.597 12:32:29 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:30.597 12:32:29 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:30.597 12:32:29 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:30.597 12:32:29 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:30.597 12:32:29 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:30.597 12:32:29 -- bdev/nbd_common.sh@12 -- # local i 00:04:30.597 12:32:29 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:30.597 12:32:29 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:30.597 12:32:29 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:30.855 /dev/nbd0 00:04:30.855 12:32:29 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:30.855 12:32:29 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:30.855 12:32:29 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:04:30.855 12:32:29 -- common/autotest_common.sh@855 -- # local i 00:04:30.855 12:32:29 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:04:30.855 12:32:29 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:04:30.855 12:32:29 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:04:30.855 12:32:29 -- 
common/autotest_common.sh@859 -- # break 00:04:30.855 12:32:29 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:04:30.855 12:32:29 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:04:30.855 12:32:29 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:30.855 1+0 records in 00:04:30.855 1+0 records out 00:04:30.855 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000155348 s, 26.4 MB/s 00:04:30.855 12:32:29 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:30.855 12:32:29 -- common/autotest_common.sh@872 -- # size=4096 00:04:30.855 12:32:29 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:30.855 12:32:29 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:04:30.855 12:32:29 -- common/autotest_common.sh@875 -- # return 0 00:04:30.855 12:32:29 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:30.855 12:32:29 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:30.855 12:32:29 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:31.112 /dev/nbd1 00:04:31.112 12:32:30 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:31.112 12:32:30 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:31.112 12:32:30 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:04:31.112 12:32:30 -- common/autotest_common.sh@855 -- # local i 00:04:31.112 12:32:30 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:04:31.112 12:32:30 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:04:31.112 12:32:30 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:04:31.112 12:32:30 -- common/autotest_common.sh@859 -- # break 00:04:31.113 12:32:30 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:04:31.113 12:32:30 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:04:31.113 12:32:30 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:31.113 1+0 records in 00:04:31.113 1+0 records out 00:04:31.113 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000212987 s, 19.2 MB/s 00:04:31.113 12:32:30 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:31.113 12:32:30 -- common/autotest_common.sh@872 -- # size=4096 00:04:31.113 12:32:30 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:31.113 12:32:30 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:04:31.113 12:32:30 -- common/autotest_common.sh@875 -- # return 0 00:04:31.113 12:32:30 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:31.113 12:32:30 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:31.113 12:32:30 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:31.113 12:32:30 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:31.113 12:32:30 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:31.371 12:32:30 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:31.371 { 00:04:31.371 "nbd_device": "/dev/nbd0", 00:04:31.371 "bdev_name": "Malloc0" 00:04:31.371 }, 00:04:31.371 { 00:04:31.371 "nbd_device": "/dev/nbd1", 
00:04:31.371 "bdev_name": "Malloc1" 00:04:31.371 } 00:04:31.371 ]' 00:04:31.371 12:32:30 -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:31.371 { 00:04:31.371 "nbd_device": "/dev/nbd0", 00:04:31.371 "bdev_name": "Malloc0" 00:04:31.371 }, 00:04:31.371 { 00:04:31.371 "nbd_device": "/dev/nbd1", 00:04:31.371 "bdev_name": "Malloc1" 00:04:31.371 } 00:04:31.371 ]' 00:04:31.371 12:32:30 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:31.371 12:32:30 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:31.371 /dev/nbd1' 00:04:31.371 12:32:30 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:31.371 /dev/nbd1' 00:04:31.371 12:32:30 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:31.371 12:32:30 -- bdev/nbd_common.sh@65 -- # count=2 00:04:31.371 12:32:30 -- bdev/nbd_common.sh@66 -- # echo 2 00:04:31.371 12:32:30 -- bdev/nbd_common.sh@95 -- # count=2 00:04:31.371 12:32:30 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:31.371 12:32:30 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:31.371 12:32:30 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:31.371 12:32:30 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:31.371 12:32:30 -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:31.371 12:32:30 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:31.371 12:32:30 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:31.371 12:32:30 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:31.371 256+0 records in 00:04:31.371 256+0 records out 00:04:31.371 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00439538 s, 239 MB/s 00:04:31.371 12:32:30 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:31.371 12:32:30 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:31.371 256+0 records in 00:04:31.371 256+0 records out 00:04:31.371 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0240931 s, 43.5 MB/s 00:04:31.371 12:32:30 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:31.371 12:32:30 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:31.629 256+0 records in 00:04:31.629 256+0 records out 00:04:31.629 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0257016 s, 40.8 MB/s 00:04:31.629 12:32:30 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:31.629 12:32:30 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:31.629 12:32:30 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:31.629 12:32:30 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:31.629 12:32:30 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:31.629 12:32:30 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:31.629 12:32:30 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:31.629 12:32:30 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:31.629 12:32:30 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:31.629 12:32:30 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:31.629 12:32:30 -- bdev/nbd_common.sh@83 -- # cmp -b -n 
1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:31.629 12:32:30 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:31.629 12:32:30 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:31.629 12:32:30 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:31.629 12:32:30 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:31.629 12:32:30 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:31.629 12:32:30 -- bdev/nbd_common.sh@51 -- # local i 00:04:31.629 12:32:30 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:31.629 12:32:30 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:31.888 12:32:30 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:31.888 12:32:30 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:31.888 12:32:30 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:31.888 12:32:30 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:31.888 12:32:30 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:31.888 12:32:30 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:31.888 12:32:30 -- bdev/nbd_common.sh@41 -- # break 00:04:31.888 12:32:30 -- bdev/nbd_common.sh@45 -- # return 0 00:04:31.888 12:32:30 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:31.888 12:32:30 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:32.146 12:32:30 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:32.146 12:32:30 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:32.146 12:32:31 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:32.146 12:32:31 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:32.146 12:32:31 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:32.146 12:32:31 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:32.146 12:32:31 -- bdev/nbd_common.sh@41 -- # break 00:04:32.146 12:32:31 -- bdev/nbd_common.sh@45 -- # return 0 00:04:32.146 12:32:31 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:32.146 12:32:31 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:32.146 12:32:31 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:32.403 12:32:31 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:32.403 12:32:31 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:32.403 12:32:31 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:32.403 12:32:31 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:32.403 12:32:31 -- bdev/nbd_common.sh@65 -- # echo '' 00:04:32.403 12:32:31 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:32.403 12:32:31 -- bdev/nbd_common.sh@65 -- # true 00:04:32.403 12:32:31 -- bdev/nbd_common.sh@65 -- # count=0 00:04:32.403 12:32:31 -- bdev/nbd_common.sh@66 -- # echo 0 00:04:32.403 12:32:31 -- bdev/nbd_common.sh@104 -- # count=0 00:04:32.403 12:32:31 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:32.403 12:32:31 -- bdev/nbd_common.sh@109 -- # return 0 00:04:32.403 12:32:31 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:32.666 12:32:31 -- event/event.sh@35 -- # 
sleep 3 00:04:32.924 [2024-04-16 12:32:31.818187] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:32.924 [2024-04-16 12:32:31.930883] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:32.924 [2024-04-16 12:32:31.930889] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:32.924 [2024-04-16 12:32:31.992802] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:32.924 [2024-04-16 12:32:31.992905] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:36.205 12:32:34 -- event/event.sh@23 -- # for i in {0..2} 00:04:36.205 12:32:34 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:36.205 spdk_app_start Round 1 00:04:36.205 12:32:34 -- event/event.sh@25 -- # waitforlisten 1057593 /var/tmp/spdk-nbd.sock 00:04:36.205 12:32:34 -- common/autotest_common.sh@817 -- # '[' -z 1057593 ']' 00:04:36.205 12:32:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:36.205 12:32:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:36.205 12:32:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:36.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:36.205 12:32:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:36.205 12:32:34 -- common/autotest_common.sh@10 -- # set +x 00:04:36.205 12:32:34 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:36.205 12:32:34 -- common/autotest_common.sh@850 -- # return 0 00:04:36.205 12:32:34 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:36.205 Malloc0 00:04:36.205 12:32:35 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:36.463 Malloc1 00:04:36.463 12:32:35 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:36.463 12:32:35 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:36.464 12:32:35 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:36.464 12:32:35 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:36.464 12:32:35 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:36.464 12:32:35 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:36.464 12:32:35 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:36.464 12:32:35 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:36.464 12:32:35 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:36.464 12:32:35 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:36.464 12:32:35 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:36.464 12:32:35 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:36.464 12:32:35 -- bdev/nbd_common.sh@12 -- # local i 00:04:36.464 12:32:35 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:36.464 12:32:35 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:36.464 12:32:35 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:36.721 /dev/nbd0 00:04:36.721 12:32:35 -- 
bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:36.721 12:32:35 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:36.721 12:32:35 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:04:36.721 12:32:35 -- common/autotest_common.sh@855 -- # local i 00:04:36.721 12:32:35 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:04:36.721 12:32:35 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:04:36.721 12:32:35 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:04:36.721 12:32:35 -- common/autotest_common.sh@859 -- # break 00:04:36.721 12:32:35 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:04:36.721 12:32:35 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:04:36.722 12:32:35 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:36.722 1+0 records in 00:04:36.722 1+0 records out 00:04:36.722 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000217148 s, 18.9 MB/s 00:04:36.722 12:32:35 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:36.722 12:32:35 -- common/autotest_common.sh@872 -- # size=4096 00:04:36.722 12:32:35 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:36.722 12:32:35 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:04:36.722 12:32:35 -- common/autotest_common.sh@875 -- # return 0 00:04:36.722 12:32:35 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:36.722 12:32:35 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:36.722 12:32:35 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:36.979 /dev/nbd1 00:04:36.979 12:32:35 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:36.979 12:32:35 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:36.979 12:32:35 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:04:36.979 12:32:35 -- common/autotest_common.sh@855 -- # local i 00:04:36.979 12:32:35 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:04:36.979 12:32:35 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:04:36.979 12:32:35 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:04:36.979 12:32:35 -- common/autotest_common.sh@859 -- # break 00:04:36.979 12:32:35 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:04:36.979 12:32:35 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:04:36.979 12:32:35 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:36.980 1+0 records in 00:04:36.980 1+0 records out 00:04:36.980 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000201071 s, 20.4 MB/s 00:04:36.980 12:32:35 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:36.980 12:32:35 -- common/autotest_common.sh@872 -- # size=4096 00:04:36.980 12:32:35 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:36.980 12:32:35 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:04:36.980 12:32:35 -- common/autotest_common.sh@875 -- # return 0 00:04:36.980 12:32:35 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:36.980 12:32:35 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:36.980 12:32:35 -- bdev/nbd_common.sh@95 
-- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:36.980 12:32:35 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:36.980 12:32:35 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:37.236 12:32:36 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:37.236 { 00:04:37.236 "nbd_device": "/dev/nbd0", 00:04:37.236 "bdev_name": "Malloc0" 00:04:37.236 }, 00:04:37.236 { 00:04:37.236 "nbd_device": "/dev/nbd1", 00:04:37.236 "bdev_name": "Malloc1" 00:04:37.236 } 00:04:37.236 ]' 00:04:37.236 12:32:36 -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:37.236 { 00:04:37.236 "nbd_device": "/dev/nbd0", 00:04:37.236 "bdev_name": "Malloc0" 00:04:37.236 }, 00:04:37.236 { 00:04:37.236 "nbd_device": "/dev/nbd1", 00:04:37.236 "bdev_name": "Malloc1" 00:04:37.236 } 00:04:37.236 ]' 00:04:37.236 12:32:36 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:37.236 12:32:36 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:37.236 /dev/nbd1' 00:04:37.236 12:32:36 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:37.236 /dev/nbd1' 00:04:37.236 12:32:36 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:37.236 12:32:36 -- bdev/nbd_common.sh@65 -- # count=2 00:04:37.236 12:32:36 -- bdev/nbd_common.sh@66 -- # echo 2 00:04:37.236 12:32:36 -- bdev/nbd_common.sh@95 -- # count=2 00:04:37.236 12:32:36 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:37.236 12:32:36 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:37.236 12:32:36 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:37.236 12:32:36 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:37.236 12:32:36 -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:37.236 12:32:36 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:37.236 12:32:36 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:37.236 12:32:36 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:37.236 256+0 records in 00:04:37.236 256+0 records out 00:04:37.236 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00507539 s, 207 MB/s 00:04:37.236 12:32:36 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:37.236 12:32:36 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:37.236 256+0 records in 00:04:37.236 256+0 records out 00:04:37.236 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0237404 s, 44.2 MB/s 00:04:37.236 12:32:36 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:37.236 12:32:36 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:37.236 256+0 records in 00:04:37.236 256+0 records out 00:04:37.236 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0255451 s, 41.0 MB/s 00:04:37.236 12:32:36 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:37.236 12:32:36 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:37.236 12:32:36 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:37.236 12:32:36 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:37.236 12:32:36 -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:37.236 12:32:36 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:37.236 12:32:36 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:37.236 12:32:36 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:37.236 12:32:36 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:37.236 12:32:36 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:37.236 12:32:36 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:37.236 12:32:36 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:37.236 12:32:36 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:37.236 12:32:36 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:37.236 12:32:36 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:37.236 12:32:36 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:37.236 12:32:36 -- bdev/nbd_common.sh@51 -- # local i 00:04:37.236 12:32:36 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:37.236 12:32:36 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:37.494 12:32:36 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:37.494 12:32:36 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:37.494 12:32:36 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:37.494 12:32:36 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:37.494 12:32:36 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:37.494 12:32:36 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:37.494 12:32:36 -- bdev/nbd_common.sh@41 -- # break 00:04:37.494 12:32:36 -- bdev/nbd_common.sh@45 -- # return 0 00:04:37.494 12:32:36 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:37.494 12:32:36 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:37.752 12:32:36 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:37.752 12:32:36 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:37.752 12:32:36 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:37.752 12:32:36 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:37.752 12:32:36 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:37.752 12:32:36 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:37.752 12:32:36 -- bdev/nbd_common.sh@41 -- # break 00:04:37.752 12:32:36 -- bdev/nbd_common.sh@45 -- # return 0 00:04:37.752 12:32:36 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:37.752 12:32:36 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:37.752 12:32:36 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:38.010 12:32:36 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:38.010 12:32:36 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:38.010 12:32:36 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:38.010 12:32:37 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:38.010 12:32:37 -- bdev/nbd_common.sh@65 -- # echo '' 00:04:38.010 12:32:37 -- bdev/nbd_common.sh@65 -- # 
grep -c /dev/nbd 00:04:38.010 12:32:37 -- bdev/nbd_common.sh@65 -- # true 00:04:38.010 12:32:37 -- bdev/nbd_common.sh@65 -- # count=0 00:04:38.010 12:32:37 -- bdev/nbd_common.sh@66 -- # echo 0 00:04:38.010 12:32:37 -- bdev/nbd_common.sh@104 -- # count=0 00:04:38.010 12:32:37 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:38.010 12:32:37 -- bdev/nbd_common.sh@109 -- # return 0 00:04:38.010 12:32:37 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:38.268 12:32:37 -- event/event.sh@35 -- # sleep 3 00:04:38.527 [2024-04-16 12:32:37.556509] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:38.784 [2024-04-16 12:32:37.671101] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:38.784 [2024-04-16 12:32:37.671107] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:38.784 [2024-04-16 12:32:37.727416] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:38.784 [2024-04-16 12:32:37.727489] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:41.312 12:32:40 -- event/event.sh@23 -- # for i in {0..2} 00:04:41.312 12:32:40 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:41.312 spdk_app_start Round 2 00:04:41.312 12:32:40 -- event/event.sh@25 -- # waitforlisten 1057593 /var/tmp/spdk-nbd.sock 00:04:41.312 12:32:40 -- common/autotest_common.sh@817 -- # '[' -z 1057593 ']' 00:04:41.312 12:32:40 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:41.312 12:32:40 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:41.312 12:32:40 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:41.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
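Each app_repeat round traced above follows the same NBD round-trip: create two Malloc bdevs over RPC, export them as /dev/nbd0 and /dev/nbd1, push 1 MiB of random data through each device with O_DIRECT, then byte-compare it back. A condensed sketch of one round, assuming the repeat app is already serving RPC on /var/tmp/spdk-nbd.sock and rpc.py is invoked from the SPDK checkout (temp-file path simplified; the trace writes both devices first and verifies both afterwards, merged per-device here):

rpc="scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
$rpc bdev_malloc_create 64 4096                       # 64 MiB bdev, 4 KiB blocks -> Malloc0
$rpc bdev_malloc_create 64 4096                       # -> Malloc1
$rpc nbd_start_disk Malloc0 /dev/nbd0                 # export the bdevs as NBD devices
$rpc nbd_start_disk Malloc1 /dev/nbd1
dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256            # 1 MiB test pattern
for nbd in /dev/nbd0 /dev/nbd1; do
    dd if=/tmp/nbdrandtest of=$nbd bs=4096 count=256 oflag=direct   # write past the page cache
    cmp -b -n 1M /tmp/nbdrandtest $nbd                              # read back and compare
done
rm /tmp/nbdrandtest
$rpc nbd_stop_disk /dev/nbd0                          # tear down before the next round
$rpc nbd_stop_disk /dev/nbd1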
00:04:41.312 12:32:40 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:41.312 12:32:40 -- common/autotest_common.sh@10 -- # set +x 00:04:41.570 12:32:40 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:41.570 12:32:40 -- common/autotest_common.sh@850 -- # return 0 00:04:41.570 12:32:40 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:41.828 Malloc0 00:04:41.828 12:32:40 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:42.087 Malloc1 00:04:42.087 12:32:41 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:42.087 12:32:41 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:42.087 12:32:41 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:42.087 12:32:41 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:42.087 12:32:41 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:42.087 12:32:41 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:42.087 12:32:41 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:42.087 12:32:41 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:42.087 12:32:41 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:42.087 12:32:41 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:42.087 12:32:41 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:42.087 12:32:41 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:42.087 12:32:41 -- bdev/nbd_common.sh@12 -- # local i 00:04:42.087 12:32:41 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:42.087 12:32:41 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:42.087 12:32:41 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:42.350 /dev/nbd0 00:04:42.350 12:32:41 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:42.350 12:32:41 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:42.350 12:32:41 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:04:42.350 12:32:41 -- common/autotest_common.sh@855 -- # local i 00:04:42.350 12:32:41 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:04:42.350 12:32:41 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:04:42.350 12:32:41 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:04:42.350 12:32:41 -- common/autotest_common.sh@859 -- # break 00:04:42.350 12:32:41 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:04:42.350 12:32:41 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:04:42.350 12:32:41 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:42.350 1+0 records in 00:04:42.350 1+0 records out 00:04:42.350 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000168026 s, 24.4 MB/s 00:04:42.350 12:32:41 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:42.350 12:32:41 -- common/autotest_common.sh@872 -- # size=4096 00:04:42.350 12:32:41 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:42.350 12:32:41 -- common/autotest_common.sh@874 -- # 
'[' 4096 '!=' 0 ']' 00:04:42.350 12:32:41 -- common/autotest_common.sh@875 -- # return 0 00:04:42.350 12:32:41 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:42.350 12:32:41 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:42.350 12:32:41 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:42.608 /dev/nbd1 00:04:42.608 12:32:41 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:42.608 12:32:41 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:42.608 12:32:41 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:04:42.608 12:32:41 -- common/autotest_common.sh@855 -- # local i 00:04:42.608 12:32:41 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:04:42.608 12:32:41 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:04:42.608 12:32:41 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:04:42.608 12:32:41 -- common/autotest_common.sh@859 -- # break 00:04:42.608 12:32:41 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:04:42.608 12:32:41 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:04:42.608 12:32:41 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:42.608 1+0 records in 00:04:42.608 1+0 records out 00:04:42.608 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000216023 s, 19.0 MB/s 00:04:42.608 12:32:41 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:42.608 12:32:41 -- common/autotest_common.sh@872 -- # size=4096 00:04:42.608 12:32:41 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:42.608 12:32:41 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:04:42.608 12:32:41 -- common/autotest_common.sh@875 -- # return 0 00:04:42.609 12:32:41 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:42.609 12:32:41 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:42.609 12:32:41 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:42.609 12:32:41 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:42.609 12:32:41 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:42.866 12:32:41 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:42.866 { 00:04:42.866 "nbd_device": "/dev/nbd0", 00:04:42.866 "bdev_name": "Malloc0" 00:04:42.866 }, 00:04:42.866 { 00:04:42.866 "nbd_device": "/dev/nbd1", 00:04:42.866 "bdev_name": "Malloc1" 00:04:42.867 } 00:04:42.867 ]' 00:04:42.867 12:32:41 -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:42.867 { 00:04:42.867 "nbd_device": "/dev/nbd0", 00:04:42.867 "bdev_name": "Malloc0" 00:04:42.867 }, 00:04:42.867 { 00:04:42.867 "nbd_device": "/dev/nbd1", 00:04:42.867 "bdev_name": "Malloc1" 00:04:42.867 } 00:04:42.867 ]' 00:04:42.867 12:32:41 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:42.867 12:32:41 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:42.867 /dev/nbd1' 00:04:42.867 12:32:41 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:42.867 /dev/nbd1' 00:04:42.867 12:32:41 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:42.867 12:32:41 -- bdev/nbd_common.sh@65 -- # count=2 00:04:42.867 12:32:41 -- bdev/nbd_common.sh@66 -- # echo 2 00:04:42.867 12:32:41 -- bdev/nbd_common.sh@95 -- # count=2 00:04:42.867 12:32:41 -- 
bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:42.867 12:32:41 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:42.867 12:32:41 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:42.867 12:32:41 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:42.867 12:32:41 -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:42.867 12:32:41 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:42.867 12:32:41 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:42.867 12:32:41 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:42.867 256+0 records in 00:04:42.867 256+0 records out 00:04:42.867 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00429385 s, 244 MB/s 00:04:42.867 12:32:41 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:42.867 12:32:41 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:43.125 256+0 records in 00:04:43.125 256+0 records out 00:04:43.125 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0216077 s, 48.5 MB/s 00:04:43.125 12:32:41 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:43.125 12:32:41 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:43.125 256+0 records in 00:04:43.125 256+0 records out 00:04:43.125 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0250595 s, 41.8 MB/s 00:04:43.125 12:32:41 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:43.125 12:32:41 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:43.125 12:32:41 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:43.125 12:32:41 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:43.125 12:32:41 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:43.125 12:32:41 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:43.125 12:32:41 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:43.125 12:32:41 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:43.125 12:32:41 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:43.125 12:32:41 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:43.125 12:32:41 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:43.125 12:32:41 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:43.125 12:32:41 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:43.125 12:32:41 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:43.125 12:32:41 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:43.125 12:32:41 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:43.125 12:32:41 -- bdev/nbd_common.sh@51 -- # local i 00:04:43.125 12:32:41 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:43.125 12:32:41 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:43.383 12:32:42 
-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:43.383 12:32:42 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:43.383 12:32:42 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:43.383 12:32:42 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:43.383 12:32:42 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:43.383 12:32:42 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:43.383 12:32:42 -- bdev/nbd_common.sh@41 -- # break 00:04:43.383 12:32:42 -- bdev/nbd_common.sh@45 -- # return 0 00:04:43.383 12:32:42 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:43.383 12:32:42 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:43.641 12:32:42 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:43.641 12:32:42 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:43.641 12:32:42 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:43.641 12:32:42 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:43.641 12:32:42 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:43.641 12:32:42 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:43.641 12:32:42 -- bdev/nbd_common.sh@41 -- # break 00:04:43.641 12:32:42 -- bdev/nbd_common.sh@45 -- # return 0 00:04:43.641 12:32:42 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:43.641 12:32:42 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:43.641 12:32:42 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:43.899 12:32:42 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:43.899 12:32:42 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:43.899 12:32:42 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:43.899 12:32:42 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:43.899 12:32:42 -- bdev/nbd_common.sh@65 -- # echo '' 00:04:43.899 12:32:42 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:43.899 12:32:42 -- bdev/nbd_common.sh@65 -- # true 00:04:43.899 12:32:42 -- bdev/nbd_common.sh@65 -- # count=0 00:04:43.899 12:32:42 -- bdev/nbd_common.sh@66 -- # echo 0 00:04:43.899 12:32:42 -- bdev/nbd_common.sh@104 -- # count=0 00:04:43.899 12:32:42 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:43.899 12:32:42 -- bdev/nbd_common.sh@109 -- # return 0 00:04:43.899 12:32:42 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:44.157 12:32:43 -- event/event.sh@35 -- # sleep 3 00:04:44.415 [2024-04-16 12:32:43.347535] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:44.415 [2024-04-16 12:32:43.459352] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:44.415 [2024-04-16 12:32:43.459357] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:44.673 [2024-04-16 12:32:43.521320] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:44.673 [2024-04-16 12:32:43.521404] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
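The waitfornbd_exit calls above are bounded polls of /proc/partitions; in this run the device is already gone on the first probe, so the loop breaks immediately. A minimal sketch of the helper (the 20-iteration bound matches the visible loop counter; the sleep interval is an assumption, since it is not echoed in the xtrace):

waitfornbd_exit() {
    local nbd_name=$1 i
    for (( i = 1; i <= 20; i++ )); do
        grep -q -w "$nbd_name" /proc/partitions || return 0   # device gone: success
        sleep 0.1                                             # assumed back-off
    done
    return 1                                                  # still present after 20 tries
}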
00:04:47.199 12:32:46 -- event/event.sh@38 -- # waitforlisten 1057593 /var/tmp/spdk-nbd.sock 00:04:47.199 12:32:46 -- common/autotest_common.sh@817 -- # '[' -z 1057593 ']' 00:04:47.199 12:32:46 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:47.199 12:32:46 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:47.199 12:32:46 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:47.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:47.199 12:32:46 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:47.199 12:32:46 -- common/autotest_common.sh@10 -- # set +x 00:04:47.457 12:32:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:47.457 12:32:46 -- common/autotest_common.sh@850 -- # return 0 00:04:47.457 12:32:46 -- event/event.sh@39 -- # killprocess 1057593 00:04:47.457 12:32:46 -- common/autotest_common.sh@936 -- # '[' -z 1057593 ']' 00:04:47.457 12:32:46 -- common/autotest_common.sh@940 -- # kill -0 1057593 00:04:47.457 12:32:46 -- common/autotest_common.sh@941 -- # uname 00:04:47.457 12:32:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:47.457 12:32:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1057593 00:04:47.457 12:32:46 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:47.457 12:32:46 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:47.457 12:32:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1057593' 00:04:47.457 killing process with pid 1057593 00:04:47.457 12:32:46 -- common/autotest_common.sh@955 -- # kill 1057593 00:04:47.457 12:32:46 -- common/autotest_common.sh@960 -- # wait 1057593 00:04:47.715 spdk_app_start is called in Round 0. 00:04:47.715 Shutdown signal received, stop current app iteration 00:04:47.715 Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 reinitialization... 00:04:47.715 spdk_app_start is called in Round 1. 00:04:47.715 Shutdown signal received, stop current app iteration 00:04:47.715 Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 reinitialization... 00:04:47.715 spdk_app_start is called in Round 2. 00:04:47.715 Shutdown signal received, stop current app iteration 00:04:47.715 Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 reinitialization... 00:04:47.715 spdk_app_start is called in Round 3. 
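killprocess, as traced here, refuses to signal blindly: it confirms the pid is alive, checks the command name (reactor_0 for an SPDK app) so it never kills sudo by mistake, then signals and reaps. A sketch of that pattern (the SIGTERM default is an assumption):

killprocess() {
    local pid=$1
    kill -0 "$pid" || return 1                        # is the process alive?
    local name=$(ps --no-headers -o comm= "$pid")     # reactor_0 for SPDK targets
    [ "$name" = sudo ] && return 1                    # never blindly kill sudo
    echo "killing process with pid $pid"
    kill "$pid" && wait "$pid"                        # signal, then reap the child
}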
00:04:47.715 Shutdown signal received, stop current app iteration 00:04:47.716 12:32:46 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:04:47.716 12:32:46 -- event/event.sh@42 -- # return 0 00:04:47.716 00:04:47.716 real 0m17.860s 00:04:47.716 user 0m38.492s 00:04:47.716 sys 0m3.231s 00:04:47.716 12:32:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:47.716 12:32:46 -- common/autotest_common.sh@10 -- # set +x 00:04:47.716 ************************************ 00:04:47.716 END TEST app_repeat 00:04:47.716 ************************************ 00:04:47.716 12:32:46 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:04:47.716 12:32:46 -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:47.716 12:32:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:47.716 12:32:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:47.716 12:32:46 -- common/autotest_common.sh@10 -- # set +x 00:04:47.716 ************************************ 00:04:47.716 START TEST cpu_locks 00:04:47.716 ************************************ 00:04:47.716 12:32:46 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:47.716 * Looking for test storage... 00:04:47.716 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:47.716 12:32:46 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:04:47.716 12:32:46 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:04:47.716 12:32:46 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:04:47.716 12:32:46 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:04:47.716 12:32:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:47.716 12:32:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:47.716 12:32:46 -- common/autotest_common.sh@10 -- # set +x 00:04:47.974 ************************************ 00:04:47.974 START TEST default_locks 00:04:47.974 ************************************ 00:04:47.974 12:32:46 -- common/autotest_common.sh@1111 -- # default_locks 00:04:47.974 12:32:46 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1059960 00:04:47.974 12:32:46 -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:47.974 12:32:46 -- event/cpu_locks.sh@47 -- # waitforlisten 1059960 00:04:47.974 12:32:46 -- common/autotest_common.sh@817 -- # '[' -z 1059960 ']' 00:04:47.974 12:32:46 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:47.974 12:32:46 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:47.974 12:32:46 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:47.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:47.974 12:32:46 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:47.974 12:32:46 -- common/autotest_common.sh@10 -- # set +x 00:04:47.974 [2024-04-16 12:32:46.894649] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 
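cpu_locks exercises SPDK's per-core CPU lock files. Each sub-test launches spdk_tgt pinned to core 0 (-m 0x1) and blocks in waitforlisten until the RPC socket answers. A sketch of that launch/wait step (the polling body is an assumption; the trace only echoes the waitforlisten banner):

build/bin/spdk_tgt -m 0x1 &                           # pin the target to core 0
pid=$!
until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
    kill -0 "$pid" || exit 1                          # give up if the target died
    sleep 0.5
done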
00:04:47.974 [2024-04-16 12:32:46.894730] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1059960 ] 00:04:47.974 EAL: No free 2048 kB hugepages reported on node 1 00:04:47.974 [2024-04-16 12:32:46.961274] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:48.232 [2024-04-16 12:32:47.068403] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.499 12:32:47 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:48.499 12:32:47 -- common/autotest_common.sh@850 -- # return 0 00:04:48.499 12:32:47 -- event/cpu_locks.sh@49 -- # locks_exist 1059960 00:04:48.499 12:32:47 -- event/cpu_locks.sh@22 -- # lslocks -p 1059960 00:04:48.499 12:32:47 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:48.759 lslocks: write error 00:04:48.759 12:32:47 -- event/cpu_locks.sh@50 -- # killprocess 1059960 00:04:48.759 12:32:47 -- common/autotest_common.sh@936 -- # '[' -z 1059960 ']' 00:04:48.759 12:32:47 -- common/autotest_common.sh@940 -- # kill -0 1059960 00:04:48.759 12:32:47 -- common/autotest_common.sh@941 -- # uname 00:04:48.759 12:32:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:48.759 12:32:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1059960 00:04:48.759 12:32:47 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:48.759 12:32:47 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:48.759 12:32:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1059960' 00:04:48.759 killing process with pid 1059960 00:04:48.759 12:32:47 -- common/autotest_common.sh@955 -- # kill 1059960 00:04:48.759 12:32:47 -- common/autotest_common.sh@960 -- # wait 1059960 00:04:49.325 12:32:48 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1059960 00:04:49.325 12:32:48 -- common/autotest_common.sh@638 -- # local es=0 00:04:49.325 12:32:48 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 1059960 00:04:49.325 12:32:48 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:04:49.325 12:32:48 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:49.325 12:32:48 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:04:49.325 12:32:48 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:49.325 12:32:48 -- common/autotest_common.sh@641 -- # waitforlisten 1059960 00:04:49.325 12:32:48 -- common/autotest_common.sh@817 -- # '[' -z 1059960 ']' 00:04:49.325 12:32:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:49.325 12:32:48 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:49.325 12:32:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:49.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
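locks_exist then only has to show that the freshly started target holds its core lock, matching the lslocks/grep pair in the trace. (The "lslocks: write error" lines in the log are lslocks hitting EPIPE after grep -q exits on its first match, not a test failure.)

locks_exist() {
    lslocks -p "$1" | grep -q spdk_cpu_lock           # target must hold a spdk_cpu_lock file
}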
00:04:49.325 12:32:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:49.325 12:32:48 -- common/autotest_common.sh@10 -- # set +x 00:04:49.325 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 832: kill: (1059960) - No such process 00:04:49.325 ERROR: process (pid: 1059960) is no longer running 00:04:49.325 12:32:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:49.325 12:32:48 -- common/autotest_common.sh@850 -- # return 1 00:04:49.325 12:32:48 -- common/autotest_common.sh@641 -- # es=1 00:04:49.325 12:32:48 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:04:49.325 12:32:48 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:04:49.325 12:32:48 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:04:49.325 12:32:48 -- event/cpu_locks.sh@54 -- # no_locks 00:04:49.325 12:32:48 -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:49.325 12:32:48 -- event/cpu_locks.sh@26 -- # local lock_files 00:04:49.325 12:32:48 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:49.325 00:04:49.325 real 0m1.318s 00:04:49.325 user 0m1.240s 00:04:49.325 sys 0m0.544s 00:04:49.325 12:32:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:49.325 12:32:48 -- common/autotest_common.sh@10 -- # set +x 00:04:49.325 ************************************ 00:04:49.325 END TEST default_locks 00:04:49.325 ************************************ 00:04:49.325 12:32:48 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:04:49.325 12:32:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:49.325 12:32:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:49.325 12:32:48 -- common/autotest_common.sh@10 -- # set +x 00:04:49.325 ************************************ 00:04:49.325 START TEST default_locks_via_rpc 00:04:49.326 ************************************ 00:04:49.326 12:32:48 -- common/autotest_common.sh@1111 -- # default_locks_via_rpc 00:04:49.326 12:32:48 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1060131 00:04:49.326 12:32:48 -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:49.326 12:32:48 -- event/cpu_locks.sh@63 -- # waitforlisten 1060131 00:04:49.326 12:32:48 -- common/autotest_common.sh@817 -- # '[' -z 1060131 ']' 00:04:49.326 12:32:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:49.326 12:32:48 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:49.326 12:32:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:49.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:49.326 12:32:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:49.326 12:32:48 -- common/autotest_common.sh@10 -- # set +x 00:04:49.326 [2024-04-16 12:32:48.333882] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 
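default_locks ends on a negative assertion: waitforlisten against the killed pid must fail, and the NOT wrapper converts that failure into a pass. A simplified sketch of NOT based on the es handling visible in the trace (the real helper carries extra bookkeeping):

NOT() {
    local es=0
    "$@" || es=$?
    (( es > 128 )) && return "$es"    # signal-killed: propagate rather than mask
    (( es != 0 ))                     # succeed only when the wrapped command failed
}
NOT waitforlisten 1059960             # a dead target must not be listening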
00:04:49.326 [2024-04-16 12:32:48.333968] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1060131 ] 00:04:49.326 EAL: No free 2048 kB hugepages reported on node 1 00:04:49.584 [2024-04-16 12:32:48.400597] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:49.584 [2024-04-16 12:32:48.507464] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.842 12:32:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:49.842 12:32:48 -- common/autotest_common.sh@850 -- # return 0 00:04:49.842 12:32:48 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:04:49.842 12:32:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:49.842 12:32:48 -- common/autotest_common.sh@10 -- # set +x 00:04:49.842 12:32:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:49.842 12:32:48 -- event/cpu_locks.sh@67 -- # no_locks 00:04:49.842 12:32:48 -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:49.842 12:32:48 -- event/cpu_locks.sh@26 -- # local lock_files 00:04:49.842 12:32:48 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:49.842 12:32:48 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:04:49.842 12:32:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:49.842 12:32:48 -- common/autotest_common.sh@10 -- # set +x 00:04:49.842 12:32:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:49.842 12:32:48 -- event/cpu_locks.sh@71 -- # locks_exist 1060131 00:04:49.842 12:32:48 -- event/cpu_locks.sh@22 -- # lslocks -p 1060131 00:04:49.842 12:32:48 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:50.099 12:32:49 -- event/cpu_locks.sh@73 -- # killprocess 1060131 00:04:50.099 12:32:49 -- common/autotest_common.sh@936 -- # '[' -z 1060131 ']' 00:04:50.099 12:32:49 -- common/autotest_common.sh@940 -- # kill -0 1060131 00:04:50.099 12:32:49 -- common/autotest_common.sh@941 -- # uname 00:04:50.099 12:32:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:50.099 12:32:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1060131 00:04:50.099 12:32:49 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:50.099 12:32:49 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:50.099 12:32:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1060131' 00:04:50.099 killing process with pid 1060131 00:04:50.099 12:32:49 -- common/autotest_common.sh@955 -- # kill 1060131 00:04:50.099 12:32:49 -- common/autotest_common.sh@960 -- # wait 1060131 00:04:50.665 00:04:50.665 real 0m1.276s 00:04:50.665 user 0m1.191s 00:04:50.665 sys 0m0.539s 00:04:50.665 12:32:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:50.665 12:32:49 -- common/autotest_common.sh@10 -- # set +x 00:04:50.665 ************************************ 00:04:50.665 END TEST default_locks_via_rpc 00:04:50.665 ************************************ 00:04:50.665 12:32:49 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:04:50.665 12:32:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:50.665 12:32:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:50.665 12:32:49 -- common/autotest_common.sh@10 -- # set +x 00:04:50.665 ************************************ 00:04:50.665 START TEST non_locking_app_on_locked_coremask 
00:04:50.665 ************************************ 00:04:50.665 12:32:49 -- common/autotest_common.sh@1111 -- # non_locking_app_on_locked_coremask 00:04:50.665 12:32:49 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1060299 00:04:50.665 12:32:49 -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:50.665 12:32:49 -- event/cpu_locks.sh@81 -- # waitforlisten 1060299 /var/tmp/spdk.sock 00:04:50.665 12:32:49 -- common/autotest_common.sh@817 -- # '[' -z 1060299 ']' 00:04:50.665 12:32:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:50.665 12:32:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:50.665 12:32:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:50.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:50.665 12:32:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:50.665 12:32:49 -- common/autotest_common.sh@10 -- # set +x 00:04:50.924 [2024-04-16 12:32:49.740059] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 00:04:50.924 [2024-04-16 12:32:49.740135] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1060299 ] 00:04:50.924 EAL: No free 2048 kB hugepages reported on node 1 00:04:50.924 [2024-04-16 12:32:49.811479] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:50.924 [2024-04-16 12:32:49.918240] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.189 12:32:50 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:51.189 12:32:50 -- common/autotest_common.sh@850 -- # return 0 00:04:51.189 12:32:50 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1060432 00:04:51.189 12:32:50 -- event/cpu_locks.sh@85 -- # waitforlisten 1060432 /var/tmp/spdk2.sock 00:04:51.189 12:32:50 -- common/autotest_common.sh@817 -- # '[' -z 1060432 ']' 00:04:51.189 12:32:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:51.189 12:32:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:51.189 12:32:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:51.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:51.189 12:32:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:51.189 12:32:50 -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:04:51.189 12:32:50 -- common/autotest_common.sh@10 -- # set +x 00:04:51.189 [2024-04-16 12:32:50.222334] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 00:04:51.189 [2024-04-16 12:32:50.222417] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1060432 ] 00:04:51.477 EAL: No free 2048 kB hugepages reported on node 1 00:04:51.477 [2024-04-16 12:32:50.331653] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:51.477 [2024-04-16 12:32:50.331706] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:51.736 [2024-04-16 12:32:50.564310] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.301 12:32:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:52.301 12:32:51 -- common/autotest_common.sh@850 -- # return 0 00:04:52.301 12:32:51 -- event/cpu_locks.sh@87 -- # locks_exist 1060299 00:04:52.301 12:32:51 -- event/cpu_locks.sh@22 -- # lslocks -p 1060299 00:04:52.301 12:32:51 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:52.559 lslocks: write error 00:04:52.559 12:32:51 -- event/cpu_locks.sh@89 -- # killprocess 1060299 00:04:52.559 12:32:51 -- common/autotest_common.sh@936 -- # '[' -z 1060299 ']' 00:04:52.559 12:32:51 -- common/autotest_common.sh@940 -- # kill -0 1060299 00:04:52.559 12:32:51 -- common/autotest_common.sh@941 -- # uname 00:04:52.559 12:32:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:52.559 12:32:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1060299 00:04:52.559 12:32:51 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:52.559 12:32:51 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:52.559 12:32:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1060299' 00:04:52.559 killing process with pid 1060299 00:04:52.559 12:32:51 -- common/autotest_common.sh@955 -- # kill 1060299 00:04:52.559 12:32:51 -- common/autotest_common.sh@960 -- # wait 1060299 00:04:53.493 12:32:52 -- event/cpu_locks.sh@90 -- # killprocess 1060432 00:04:53.493 12:32:52 -- common/autotest_common.sh@936 -- # '[' -z 1060432 ']' 00:04:53.493 12:32:52 -- common/autotest_common.sh@940 -- # kill -0 1060432 00:04:53.493 12:32:52 -- common/autotest_common.sh@941 -- # uname 00:04:53.493 12:32:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:53.493 12:32:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1060432 00:04:53.493 12:32:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:53.493 12:32:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:53.493 12:32:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1060432' 00:04:53.493 killing process with pid 1060432 00:04:53.493 12:32:52 -- common/autotest_common.sh@955 -- # kill 1060432 00:04:53.493 12:32:52 -- common/autotest_common.sh@960 -- # wait 1060432 00:04:54.058 00:04:54.058 real 0m3.278s 00:04:54.058 user 0m3.357s 00:04:54.058 sys 0m1.071s 00:04:54.058 12:32:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:54.058 12:32:52 -- common/autotest_common.sh@10 -- # set +x 00:04:54.058 ************************************ 00:04:54.058 END TEST non_locking_app_on_locked_coremask 00:04:54.058 ************************************ 00:04:54.058 12:32:52 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:04:54.058 12:32:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:54.058 12:32:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:54.058 12:32:52 -- common/autotest_common.sh@10 -- # set +x 00:04:54.058 ************************************ 00:04:54.058 START TEST locking_app_on_unlocked_coremask 00:04:54.058 ************************************ 00:04:54.058 12:32:53 -- common/autotest_common.sh@1111 -- # locking_app_on_unlocked_coremask 00:04:54.058 12:32:53 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1060746 00:04:54.058 12:32:53 -- 
event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:04:54.058 12:32:53 -- event/cpu_locks.sh@99 -- # waitforlisten 1060746 /var/tmp/spdk.sock 00:04:54.058 12:32:53 -- common/autotest_common.sh@817 -- # '[' -z 1060746 ']' 00:04:54.058 12:32:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:54.058 12:32:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:54.058 12:32:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:54.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:54.058 12:32:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:54.058 12:32:53 -- common/autotest_common.sh@10 -- # set +x 00:04:54.316 [2024-04-16 12:32:53.139102] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 00:04:54.316 [2024-04-16 12:32:53.139176] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1060746 ] 00:04:54.316 EAL: No free 2048 kB hugepages reported on node 1 00:04:54.316 [2024-04-16 12:32:53.206308] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:04:54.316 [2024-04-16 12:32:53.206349] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:54.316 [2024-04-16 12:32:53.312314] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.574 12:32:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:54.574 12:32:53 -- common/autotest_common.sh@850 -- # return 0 00:04:54.574 12:32:53 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1060873 00:04:54.574 12:32:53 -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:54.574 12:32:53 -- event/cpu_locks.sh@103 -- # waitforlisten 1060873 /var/tmp/spdk2.sock 00:04:54.574 12:32:53 -- common/autotest_common.sh@817 -- # '[' -z 1060873 ']' 00:04:54.574 12:32:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:54.574 12:32:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:54.574 12:32:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:54.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:54.574 12:32:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:54.574 12:32:53 -- common/autotest_common.sh@10 -- # set +x 00:04:54.574 [2024-04-16 12:32:53.623960] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 
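locking_app_on_unlocked_coremask inverts the roles: the first target opts out of locking, so a second, default-locking target can claim the same core, and the "CPU core locks deactivated" notice above confirms the opt-out took effect. The scenario, condensed (binary path shortened; sockets as in the trace):

build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &        # first target: core 0, takes no lock
build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &         # second target: same core, takes the lock
# Both run reactors on core 0; only the second holds a spdk_cpu_lock file.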
00:04:54.574 [2024-04-16 12:32:53.624055] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1060873 ] 00:04:54.832 EAL: No free 2048 kB hugepages reported on node 1 00:04:54.832 [2024-04-16 12:32:53.736880] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:55.090 [2024-04-16 12:32:53.969284] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:55.654 12:32:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:55.654 12:32:54 -- common/autotest_common.sh@850 -- # return 0 00:04:55.654 12:32:54 -- event/cpu_locks.sh@105 -- # locks_exist 1060873 00:04:55.654 12:32:54 -- event/cpu_locks.sh@22 -- # lslocks -p 1060873 00:04:55.654 12:32:54 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:55.912 lslocks: write error 00:04:55.912 12:32:54 -- event/cpu_locks.sh@107 -- # killprocess 1060746 00:04:55.912 12:32:54 -- common/autotest_common.sh@936 -- # '[' -z 1060746 ']' 00:04:55.912 12:32:54 -- common/autotest_common.sh@940 -- # kill -0 1060746 00:04:55.912 12:32:54 -- common/autotest_common.sh@941 -- # uname 00:04:56.170 12:32:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:56.170 12:32:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1060746 00:04:56.170 12:32:55 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:56.170 12:32:55 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:56.170 12:32:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1060746' 00:04:56.170 killing process with pid 1060746 00:04:56.170 12:32:55 -- common/autotest_common.sh@955 -- # kill 1060746 00:04:56.170 12:32:55 -- common/autotest_common.sh@960 -- # wait 1060746 00:04:57.103 12:32:55 -- event/cpu_locks.sh@108 -- # killprocess 1060873 00:04:57.103 12:32:55 -- common/autotest_common.sh@936 -- # '[' -z 1060873 ']' 00:04:57.103 12:32:55 -- common/autotest_common.sh@940 -- # kill -0 1060873 00:04:57.103 12:32:55 -- common/autotest_common.sh@941 -- # uname 00:04:57.103 12:32:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:57.103 12:32:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1060873 00:04:57.103 12:32:55 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:57.103 12:32:55 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:57.103 12:32:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1060873' 00:04:57.103 killing process with pid 1060873 00:04:57.103 12:32:55 -- common/autotest_common.sh@955 -- # kill 1060873 00:04:57.103 12:32:55 -- common/autotest_common.sh@960 -- # wait 1060873 00:04:57.361 00:04:57.361 real 0m3.340s 00:04:57.361 user 0m3.414s 00:04:57.361 sys 0m1.092s 00:04:57.361 12:32:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:57.362 12:32:56 -- common/autotest_common.sh@10 -- # set +x 00:04:57.362 ************************************ 00:04:57.362 END TEST locking_app_on_unlocked_coremask 00:04:57.362 ************************************ 00:04:57.619 12:32:56 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:04:57.619 12:32:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:57.619 12:32:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:57.619 12:32:56 -- common/autotest_common.sh@10 -- # set +x 00:04:57.619 
************************************ 00:04:57.619 START TEST locking_app_on_locked_coremask 00:04:57.619 ************************************ 00:04:57.619 12:32:56 -- common/autotest_common.sh@1111 -- # locking_app_on_locked_coremask 00:04:57.619 12:32:56 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1061216 00:04:57.619 12:32:56 -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:57.619 12:32:56 -- event/cpu_locks.sh@116 -- # waitforlisten 1061216 /var/tmp/spdk.sock 00:04:57.619 12:32:56 -- common/autotest_common.sh@817 -- # '[' -z 1061216 ']' 00:04:57.620 12:32:56 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:57.620 12:32:56 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:57.620 12:32:56 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:57.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:57.620 12:32:56 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:57.620 12:32:56 -- common/autotest_common.sh@10 -- # set +x 00:04:57.620 [2024-04-16 12:32:56.598395] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 00:04:57.620 [2024-04-16 12:32:56.598485] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1061216 ] 00:04:57.620 EAL: No free 2048 kB hugepages reported on node 1 00:04:57.620 [2024-04-16 12:32:56.671179] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:57.877 [2024-04-16 12:32:56.784330] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.443 12:32:57 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:58.443 12:32:57 -- common/autotest_common.sh@850 -- # return 0 00:04:58.443 12:32:57 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1061325 00:04:58.443 12:32:57 -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:58.443 12:32:57 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1061325 /var/tmp/spdk2.sock 00:04:58.443 12:32:57 -- common/autotest_common.sh@638 -- # local es=0 00:04:58.443 12:32:57 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 1061325 /var/tmp/spdk2.sock 00:04:58.443 12:32:57 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:04:58.444 12:32:57 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:58.444 12:32:57 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:04:58.444 12:32:57 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:58.444 12:32:57 -- common/autotest_common.sh@641 -- # waitforlisten 1061325 /var/tmp/spdk2.sock 00:04:58.444 12:32:57 -- common/autotest_common.sh@817 -- # '[' -z 1061325 ']' 00:04:58.701 12:32:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:58.701 12:32:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:58.701 12:32:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:58.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
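The lslocks probe that recurs through these tests (cpu_locks.sh@22) reduces to one pipeline; the stray "lslocks: write error" lines are most likely lslocks hitting a closed pipe after grep -q exits on its first match:

    locks_exist() {
        local pid=$1
        lslocks -p "$pid" | grep -q spdk_cpu_lock   # true if $pid holds an spdk_cpu_lock_* flock
    }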
00:04:58.701 12:32:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:58.701 12:32:57 -- common/autotest_common.sh@10 -- # set +x 00:04:58.701 [2024-04-16 12:32:57.560397] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 00:04:58.701 [2024-04-16 12:32:57.560489] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1061325 ] 00:04:58.701 EAL: No free 2048 kB hugepages reported on node 1 00:04:58.701 [2024-04-16 12:32:57.670686] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1061216 has claimed it. 00:04:58.701 [2024-04-16 12:32:57.670749] app.c: 821:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:59.266 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 832: kill: (1061325) - No such process 00:04:59.266 ERROR: process (pid: 1061325) is no longer running 00:04:59.266 12:32:58 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:59.266 12:32:58 -- common/autotest_common.sh@850 -- # return 1 00:04:59.266 12:32:58 -- common/autotest_common.sh@641 -- # es=1 00:04:59.266 12:32:58 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:04:59.266 12:32:58 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:04:59.266 12:32:58 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:04:59.266 12:32:58 -- event/cpu_locks.sh@122 -- # locks_exist 1061216 00:04:59.266 12:32:58 -- event/cpu_locks.sh@22 -- # lslocks -p 1061216 00:04:59.266 12:32:58 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:59.832 lslocks: write error 00:04:59.832 12:32:58 -- event/cpu_locks.sh@124 -- # killprocess 1061216 00:04:59.832 12:32:58 -- common/autotest_common.sh@936 -- # '[' -z 1061216 ']' 00:04:59.832 12:32:58 -- common/autotest_common.sh@940 -- # kill -0 1061216 00:04:59.832 12:32:58 -- common/autotest_common.sh@941 -- # uname 00:04:59.832 12:32:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:59.832 12:32:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1061216 00:04:59.832 12:32:58 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:59.832 12:32:58 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:59.832 12:32:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1061216' 00:04:59.832 killing process with pid 1061216 00:04:59.832 12:32:58 -- common/autotest_common.sh@955 -- # kill 1061216 00:04:59.832 12:32:58 -- common/autotest_common.sh@960 -- # wait 1061216 00:05:00.404 00:05:00.404 real 0m2.729s 00:05:00.404 user 0m3.018s 00:05:00.404 sys 0m0.747s 00:05:00.404 12:32:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:00.404 12:32:59 -- common/autotest_common.sh@10 -- # set +x 00:05:00.404 ************************************ 00:05:00.404 END TEST locking_app_on_locked_coremask 00:05:00.404 ************************************ 00:05:00.404 12:32:59 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:00.404 12:32:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:00.404 12:32:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:00.404 12:32:59 -- common/autotest_common.sh@10 -- # set +x 00:05:00.404 ************************************ 00:05:00.404 START TEST locking_overlapped_coremask 00:05:00.405 
************************************ 00:05:00.405 12:32:59 -- common/autotest_common.sh@1111 -- # locking_overlapped_coremask 00:05:00.405 12:32:59 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1061625 00:05:00.405 12:32:59 -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:00.405 12:32:59 -- event/cpu_locks.sh@133 -- # waitforlisten 1061625 /var/tmp/spdk.sock 00:05:00.405 12:32:59 -- common/autotest_common.sh@817 -- # '[' -z 1061625 ']' 00:05:00.405 12:32:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:00.405 12:32:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:00.405 12:32:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:00.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:00.405 12:32:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:00.405 12:32:59 -- common/autotest_common.sh@10 -- # set +x 00:05:00.405 [2024-04-16 12:32:59.444686] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 00:05:00.405 [2024-04-16 12:32:59.444777] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1061625 ] 00:05:00.663 EAL: No free 2048 kB hugepages reported on node 1 00:05:00.663 [2024-04-16 12:32:59.516789] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:00.663 [2024-04-16 12:32:59.631419] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:00.663 [2024-04-16 12:32:59.631489] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:00.663 [2024-04-16 12:32:59.631492] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.596 12:33:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:01.596 12:33:00 -- common/autotest_common.sh@850 -- # return 0 00:05:01.596 12:33:00 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1061788 00:05:01.596 12:33:00 -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:01.596 12:33:00 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1061788 /var/tmp/spdk2.sock 00:05:01.596 12:33:00 -- common/autotest_common.sh@638 -- # local es=0 00:05:01.596 12:33:00 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 1061788 /var/tmp/spdk2.sock 00:05:01.596 12:33:00 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:05:01.596 12:33:00 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:01.596 12:33:00 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:05:01.596 12:33:00 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:01.596 12:33:00 -- common/autotest_common.sh@641 -- # waitforlisten 1061788 /var/tmp/spdk2.sock 00:05:01.596 12:33:00 -- common/autotest_common.sh@817 -- # '[' -z 1061788 ']' 00:05:01.596 12:33:00 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:01.596 12:33:00 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:01.596 12:33:00 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
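Why the second target below must fail to start: the first one holds cores 0-2 (-m 0x7) while the second asks for cores 2-4 (-m 0x1c), so they contend for core 2. The overlap is quick to check with shell arithmetic:

    printf 'shared core mask: 0x%x\n' $(( 0x7 & 0x1c ))   # prints 0x4, i.e. core 2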
00:05:01.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:01.596 12:33:00 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:01.596 12:33:00 -- common/autotest_common.sh@10 -- # set +x 00:05:01.596 [2024-04-16 12:33:00.430762] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 00:05:01.596 [2024-04-16 12:33:00.430851] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1061788 ] 00:05:01.596 EAL: No free 2048 kB hugepages reported on node 1 00:05:01.596 [2024-04-16 12:33:00.541700] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1061625 has claimed it. 00:05:01.596 [2024-04-16 12:33:00.541755] app.c: 821:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:02.161 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 832: kill: (1061788) - No such process 00:05:02.161 ERROR: process (pid: 1061788) is no longer running 00:05:02.161 12:33:01 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:02.161 12:33:01 -- common/autotest_common.sh@850 -- # return 1 00:05:02.161 12:33:01 -- common/autotest_common.sh@641 -- # es=1 00:05:02.161 12:33:01 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:02.161 12:33:01 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:02.161 12:33:01 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:02.161 12:33:01 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:02.161 12:33:01 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:02.161 12:33:01 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:02.161 12:33:01 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:02.161 12:33:01 -- event/cpu_locks.sh@141 -- # killprocess 1061625 00:05:02.161 12:33:01 -- common/autotest_common.sh@936 -- # '[' -z 1061625 ']' 00:05:02.161 12:33:01 -- common/autotest_common.sh@940 -- # kill -0 1061625 00:05:02.161 12:33:01 -- common/autotest_common.sh@941 -- # uname 00:05:02.161 12:33:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:02.161 12:33:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1061625 00:05:02.161 12:33:01 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:02.161 12:33:01 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:02.161 12:33:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1061625' 00:05:02.161 killing process with pid 1061625 00:05:02.161 12:33:01 -- common/autotest_common.sh@955 -- # kill 1061625 00:05:02.161 12:33:01 -- common/autotest_common.sh@960 -- # wait 1061625 00:05:02.726 00:05:02.726 real 0m2.198s 00:05:02.726 user 0m6.127s 00:05:02.726 sys 0m0.530s 00:05:02.726 12:33:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:02.726 12:33:01 -- common/autotest_common.sh@10 -- # set +x 00:05:02.726 ************************************ 00:05:02.726 END TEST locking_overlapped_coremask 00:05:02.726 ************************************ 00:05:02.726 12:33:01 -- event/cpu_locks.sh@172 -- # 
run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:02.726 12:33:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:02.726 12:33:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:02.726 12:33:01 -- common/autotest_common.sh@10 -- # set +x 00:05:02.726 ************************************ 00:05:02.726 START TEST locking_overlapped_coremask_via_rpc 00:05:02.726 ************************************ 00:05:02.726 12:33:01 -- common/autotest_common.sh@1111 -- # locking_overlapped_coremask_via_rpc 00:05:02.726 12:33:01 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1062032 00:05:02.726 12:33:01 -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:02.726 12:33:01 -- event/cpu_locks.sh@149 -- # waitforlisten 1062032 /var/tmp/spdk.sock 00:05:02.726 12:33:01 -- common/autotest_common.sh@817 -- # '[' -z 1062032 ']' 00:05:02.726 12:33:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:02.726 12:33:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:02.726 12:33:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:02.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:02.726 12:33:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:02.726 12:33:01 -- common/autotest_common.sh@10 -- # set +x 00:05:02.726 [2024-04-16 12:33:01.748737] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 00:05:02.726 [2024-04-16 12:33:01.748821] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1062032 ] 00:05:02.726 EAL: No free 2048 kB hugepages reported on node 1 00:05:02.984 [2024-04-16 12:33:01.820383] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:02.984 [2024-04-16 12:33:01.820429] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:02.984 [2024-04-16 12:33:01.931856] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:02.984 [2024-04-16 12:33:01.935598] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:02.984 [2024-04-16 12:33:01.935603] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.243 12:33:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:03.243 12:33:02 -- common/autotest_common.sh@850 -- # return 0 00:05:03.243 12:33:02 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1062113 00:05:03.243 12:33:02 -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:03.243 12:33:02 -- event/cpu_locks.sh@153 -- # waitforlisten 1062113 /var/tmp/spdk2.sock 00:05:03.243 12:33:02 -- common/autotest_common.sh@817 -- # '[' -z 1062113 ']' 00:05:03.243 12:33:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:03.243 12:33:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:03.243 12:33:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:03.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
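The via_rpc variant differs only in sequencing: both targets come up with --disable-cpumask-locks, and the locks are claimed afterwards over JSON-RPC. Done by hand it would look roughly like this (rpc.py invocation assumed from the checked-out tree; socket paths and method name as in the trace):

    ./spdk/scripts/rpc.py -s /var/tmp/spdk.sock  framework_enable_cpumask_locks   # first target claims cores 0-2
    ./spdk/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks   # second target then fails on core 2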
00:05:03.243 12:33:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:03.243 12:33:02 -- common/autotest_common.sh@10 -- # set +x 00:05:03.243 [2024-04-16 12:33:02.241469] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 00:05:03.243 [2024-04-16 12:33:02.241588] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1062113 ] 00:05:03.243 EAL: No free 2048 kB hugepages reported on node 1 00:05:03.501 [2024-04-16 12:33:02.344324] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:03.501 [2024-04-16 12:33:02.344365] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:03.501 [2024-04-16 12:33:02.561035] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:03.501 [2024-04-16 12:33:02.564592] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:05:03.501 [2024-04-16 12:33:02.564595] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:04.435 12:33:03 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:04.435 12:33:03 -- common/autotest_common.sh@850 -- # return 0 00:05:04.435 12:33:03 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:04.435 12:33:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:04.435 12:33:03 -- common/autotest_common.sh@10 -- # set +x 00:05:04.435 12:33:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:04.435 12:33:03 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:04.435 12:33:03 -- common/autotest_common.sh@638 -- # local es=0 00:05:04.435 12:33:03 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:04.435 12:33:03 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:05:04.435 12:33:03 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:04.435 12:33:03 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:05:04.435 12:33:03 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:04.435 12:33:03 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:04.435 12:33:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:04.435 12:33:03 -- common/autotest_common.sh@10 -- # set +x 00:05:04.435 [2024-04-16 12:33:03.188666] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1062032 has claimed it. 
00:05:04.435 request: 00:05:04.435 { 00:05:04.435 "method": "framework_enable_cpumask_locks", 00:05:04.435 "req_id": 1 00:05:04.435 } 00:05:04.435 Got JSON-RPC error response 00:05:04.435 response: 00:05:04.435 { 00:05:04.435 "code": -32603, 00:05:04.435 "message": "Failed to claim CPU core: 2" 00:05:04.435 } 00:05:04.435 12:33:03 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:05:04.435 12:33:03 -- common/autotest_common.sh@641 -- # es=1 00:05:04.435 12:33:03 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:04.435 12:33:03 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:04.435 12:33:03 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:04.435 12:33:03 -- event/cpu_locks.sh@158 -- # waitforlisten 1062032 /var/tmp/spdk.sock 00:05:04.435 12:33:03 -- common/autotest_common.sh@817 -- # '[' -z 1062032 ']' 00:05:04.435 12:33:03 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:04.435 12:33:03 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:04.435 12:33:03 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:04.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:04.435 12:33:03 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:04.435 12:33:03 -- common/autotest_common.sh@10 -- # set +x 00:05:04.435 12:33:03 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:04.435 12:33:03 -- common/autotest_common.sh@850 -- # return 0 00:05:04.435 12:33:03 -- event/cpu_locks.sh@159 -- # waitforlisten 1062113 /var/tmp/spdk2.sock 00:05:04.435 12:33:03 -- common/autotest_common.sh@817 -- # '[' -z 1062113 ']' 00:05:04.435 12:33:03 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:04.435 12:33:03 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:04.435 12:33:03 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:04.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:04.435 12:33:03 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:04.435 12:33:03 -- common/autotest_common.sh@10 -- # set +x 00:05:04.693 12:33:03 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:04.693 12:33:03 -- common/autotest_common.sh@850 -- # return 0 00:05:04.693 12:33:03 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:04.693 12:33:03 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:04.693 12:33:03 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:04.694 12:33:03 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:04.694 00:05:04.694 real 0m1.992s 00:05:04.694 user 0m1.018s 00:05:04.694 sys 0m0.185s 00:05:04.694 12:33:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:04.694 12:33:03 -- common/autotest_common.sh@10 -- # set +x 00:05:04.694 ************************************ 00:05:04.694 END TEST locking_overlapped_coremask_via_rpc 00:05:04.694 ************************************ 00:05:04.694 12:33:03 -- event/cpu_locks.sh@174 -- # cleanup 00:05:04.694 12:33:03 -- event/cpu_locks.sh@15 -- # [[ -z 1062032 ]] 00:05:04.694 12:33:03 -- event/cpu_locks.sh@15 -- # killprocess 1062032 00:05:04.694 12:33:03 -- common/autotest_common.sh@936 -- # '[' -z 1062032 ']' 00:05:04.694 12:33:03 -- common/autotest_common.sh@940 -- # kill -0 1062032 00:05:04.694 12:33:03 -- common/autotest_common.sh@941 -- # uname 00:05:04.694 12:33:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:04.694 12:33:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1062032 00:05:04.694 12:33:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:04.694 12:33:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:04.694 12:33:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1062032' 00:05:04.694 killing process with pid 1062032 00:05:04.694 12:33:03 -- common/autotest_common.sh@955 -- # kill 1062032 00:05:04.694 12:33:03 -- common/autotest_common.sh@960 -- # wait 1062032 00:05:05.260 12:33:04 -- event/cpu_locks.sh@16 -- # [[ -z 1062113 ]] 00:05:05.260 12:33:04 -- event/cpu_locks.sh@16 -- # killprocess 1062113 00:05:05.260 12:33:04 -- common/autotest_common.sh@936 -- # '[' -z 1062113 ']' 00:05:05.260 12:33:04 -- common/autotest_common.sh@940 -- # kill -0 1062113 00:05:05.260 12:33:04 -- common/autotest_common.sh@941 -- # uname 00:05:05.260 12:33:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:05.260 12:33:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1062113 00:05:05.260 12:33:04 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:05:05.260 12:33:04 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:05:05.260 12:33:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1062113' 00:05:05.260 killing process with pid 1062113 00:05:05.260 12:33:04 -- common/autotest_common.sh@955 -- # kill 1062113 00:05:05.260 12:33:04 -- common/autotest_common.sh@960 -- # wait 1062113 00:05:05.826 12:33:04 -- event/cpu_locks.sh@18 -- # rm -f 00:05:05.826 12:33:04 -- event/cpu_locks.sh@1 -- # cleanup 00:05:05.826 12:33:04 -- event/cpu_locks.sh@15 -- # [[ -z 1062032 ]] 00:05:05.826 12:33:04 -- event/cpu_locks.sh@15 -- # killprocess 1062032 
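check_remaining_locks, as traced above (cpu_locks.sh@36-38), is a plain glob-versus-brace-expansion comparison: the lock files actually present under /var/tmp must exactly match the cores the surviving target claimed (000..002 for -m 0x7):

    locks=(/var/tmp/spdk_cpu_lock_*)
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
    [[ ${locks[*]} == "${locks_expected[*]}" ]]   # array-to-string compare, glob order is stable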
00:05:05.826 12:33:04 -- common/autotest_common.sh@936 -- # '[' -z 1062032 ']' 00:05:05.826 12:33:04 -- common/autotest_common.sh@940 -- # kill -0 1062032 00:05:05.826 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (1062032) - No such process 00:05:05.826 12:33:04 -- common/autotest_common.sh@963 -- # echo 'Process with pid 1062032 is not found' 00:05:05.826 Process with pid 1062032 is not found 00:05:05.826 12:33:04 -- event/cpu_locks.sh@16 -- # [[ -z 1062113 ]] 00:05:05.826 12:33:04 -- event/cpu_locks.sh@16 -- # killprocess 1062113 00:05:05.826 12:33:04 -- common/autotest_common.sh@936 -- # '[' -z 1062113 ']' 00:05:05.826 12:33:04 -- common/autotest_common.sh@940 -- # kill -0 1062113 00:05:05.826 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (1062113) - No such process 00:05:05.826 12:33:04 -- common/autotest_common.sh@963 -- # echo 'Process with pid 1062113 is not found' 00:05:05.826 Process with pid 1062113 is not found 00:05:05.826 12:33:04 -- event/cpu_locks.sh@18 -- # rm -f 00:05:05.826 00:05:05.826 real 0m17.967s 00:05:05.826 user 0m30.416s 00:05:05.826 sys 0m5.882s 00:05:05.826 12:33:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:05.826 12:33:04 -- common/autotest_common.sh@10 -- # set +x 00:05:05.826 ************************************ 00:05:05.826 END TEST cpu_locks 00:05:05.826 ************************************ 00:05:05.826 00:05:05.826 real 0m45.148s 00:05:05.826 user 1m23.988s 00:05:05.827 sys 0m10.194s 00:05:05.827 12:33:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:05.827 12:33:04 -- common/autotest_common.sh@10 -- # set +x 00:05:05.827 ************************************ 00:05:05.827 END TEST event 00:05:05.827 ************************************ 00:05:05.827 12:33:04 -- spdk/autotest.sh@178 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:05.827 12:33:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:05.827 12:33:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:05.827 12:33:04 -- common/autotest_common.sh@10 -- # set +x 00:05:05.827 ************************************ 00:05:05.827 START TEST thread 00:05:05.827 ************************************ 00:05:05.827 12:33:04 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:05.827 * Looking for test storage... 00:05:05.827 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:05.827 12:33:04 -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:05.827 12:33:04 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:05:05.827 12:33:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:05.827 12:33:04 -- common/autotest_common.sh@10 -- # set +x 00:05:06.085 ************************************ 00:05:06.085 START TEST thread_poller_perf 00:05:06.085 ************************************ 00:05:06.085 12:33:04 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:06.085 [2024-04-16 12:33:04.985010] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 
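poller_perf's knobs, as passed on the command line above and echoed in the "Running 1000 pollers..." banner that follows: -b 1000 registers 1000 pollers, -t 1 runs for one second, and -l sets the poller period in microseconds (1 for the timed-poller run here, 0 for the untimed run after it):

    ./spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1   # timed pollers, this run
    ./spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1   # untimed pollers, next run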
00:05:06.085 [2024-04-16 12:33:04.985072] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1062557 ] 00:05:06.085 EAL: No free 2048 kB hugepages reported on node 1 00:05:06.085 [2024-04-16 12:33:05.053830] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.343 [2024-04-16 12:33:05.164582] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.343 [2024-04-16 12:33:05.164673] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:05:06.343 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:05:07.277 ====================================== 00:05:07.277 busy:2714352612 (cyc) 00:05:07.277 total_run_count: 292000 00:05:07.277 tsc_hz: 2700000000 (cyc) 00:05:07.277 ====================================== 00:05:07.277 poller_cost: 9295 (cyc), 3442 (nsec) 00:05:07.277 00:05:07.277 real 0m1.318s 00:05:07.277 user 0m1.227s 00:05:07.277 sys 0m0.084s 00:05:07.277 12:33:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:07.277 12:33:06 -- common/autotest_common.sh@10 -- # set +x 00:05:07.277 ************************************ 00:05:07.277 END TEST thread_poller_perf 00:05:07.277 ************************************ 00:05:07.277 12:33:06 -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:07.277 12:33:06 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:05:07.277 12:33:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:07.277 12:33:06 -- common/autotest_common.sh@10 -- # set +x 00:05:07.535 ************************************ 00:05:07.536 START TEST thread_poller_perf 00:05:07.536 ************************************ 00:05:07.536 12:33:06 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:07.536 [2024-04-16 12:33:06.413790] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 00:05:07.536 [2024-04-16 12:33:06.413865] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1062728 ] 00:05:07.536 EAL: No free 2048 kB hugepages reported on node 1 00:05:07.536 [2024-04-16 12:33:06.490786] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.794 [2024-04-16 12:33:06.609302] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.794 [2024-04-16 12:33:06.609394] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:05:07.794 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:05:08.737 ====================================== 00:05:08.737 busy:2703062781 (cyc) 00:05:08.737 total_run_count: 3841000 00:05:08.737 tsc_hz: 2700000000 (cyc) 00:05:08.737 ====================================== 00:05:08.737 poller_cost: 703 (cyc), 260 (nsec) 00:05:08.737 00:05:08.737 real 0m1.330s 00:05:08.737 user 0m1.225s 00:05:08.737 sys 0m0.098s 00:05:08.737 12:33:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:08.737 12:33:07 -- common/autotest_common.sh@10 -- # set +x 00:05:08.737 ************************************ 00:05:08.737 END TEST thread_poller_perf 00:05:08.737 ************************************ 00:05:08.737 12:33:07 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:08.737 00:05:08.737 real 0m2.937s 00:05:08.737 user 0m2.556s 00:05:08.737 sys 0m0.355s 00:05:08.737 12:33:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:08.737 12:33:07 -- common/autotest_common.sh@10 -- # set +x 00:05:08.737 ************************************ 00:05:08.737 END TEST thread 00:05:08.737 ************************************ 00:05:08.737 12:33:07 -- spdk/autotest.sh@179 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:05:08.737 12:33:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:08.737 12:33:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:08.737 12:33:07 -- common/autotest_common.sh@10 -- # set +x 00:05:09.034 ************************************ 00:05:09.034 START TEST accel 00:05:09.034 ************************************ 00:05:09.034 12:33:07 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:05:09.034 * Looking for test storage... 00:05:09.034 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:05:09.034 12:33:07 -- accel/accel.sh@81 -- # declare -A expected_opcs 00:05:09.034 12:33:07 -- accel/accel.sh@82 -- # get_expected_opcs 00:05:09.034 12:33:07 -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:09.034 12:33:07 -- accel/accel.sh@62 -- # spdk_tgt_pid=1063291 00:05:09.034 12:33:07 -- accel/accel.sh@63 -- # waitforlisten 1063291 00:05:09.034 12:33:07 -- common/autotest_common.sh@817 -- # '[' -z 1063291 ']' 00:05:09.034 12:33:07 -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:05:09.034 12:33:07 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:09.034 12:33:07 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:09.034 12:33:07 -- accel/accel.sh@61 -- # build_accel_config 00:05:09.034 12:33:07 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:09.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:09.034 12:33:07 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:09.034 12:33:07 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:09.034 12:33:07 -- common/autotest_common.sh@10 -- # set +x 00:05:09.034 12:33:07 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:09.034 12:33:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:09.034 12:33:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:09.034 12:33:07 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:09.034 12:33:07 -- accel/accel.sh@40 -- # local IFS=, 00:05:09.034 12:33:07 -- accel/accel.sh@41 -- # jq -r . 
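Backing up to the two ====== summaries above: poller_cost is just busy cycles divided by total_run_count, converted to nanoseconds via tsc_hz (2.7 GHz means 2.7 cycles per ns), so a timed poll costs roughly 13x an untimed one:

    echo $(( 2714352612 / 292000 ))    # 9295 cyc/poll for the 1 us run; 9295 / 2.7 ~= 3442 ns
    echo $(( 2703062781 / 3841000 ))   # 703 cyc/poll for the 0 us run;  703 / 2.7 ~= 260 ns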
00:05:09.034 [2024-04-16 12:33:07.972607] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 00:05:09.034 [2024-04-16 12:33:07.972711] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1063291 ] 00:05:09.034 EAL: No free 2048 kB hugepages reported on node 1 00:05:09.034 [2024-04-16 12:33:08.041396] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.312 [2024-04-16 12:33:08.147292] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.570 12:33:08 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:09.570 12:33:08 -- common/autotest_common.sh@850 -- # return 0 00:05:09.570 12:33:08 -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:05:09.570 12:33:08 -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:05:09.570 12:33:08 -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:05:09.570 12:33:08 -- accel/accel.sh@68 -- # [[ -n '' ]] 00:05:09.570 12:33:08 -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:05:09.570 12:33:08 -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:05:09.570 12:33:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:09.570 12:33:08 -- common/autotest_common.sh@10 -- # set +x 00:05:09.570 12:33:08 -- accel/accel.sh@70 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:05:09.570 12:33:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:09.570 12:33:08 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:09.570 12:33:08 -- accel/accel.sh@72 -- # IFS== 00:05:09.570 12:33:08 -- accel/accel.sh@72 -- # read -r opc module 00:05:09.570 12:33:08 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:09.570 12:33:08 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:09.570 12:33:08 -- accel/accel.sh@72 -- # IFS== 00:05:09.570 12:33:08 -- accel/accel.sh@72 -- # read -r opc module 00:05:09.570 12:33:08 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:09.570 12:33:08 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:09.570 12:33:08 -- accel/accel.sh@72 -- # IFS== 00:05:09.570 12:33:08 -- accel/accel.sh@72 -- # read -r opc module 00:05:09.570 12:33:08 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:09.570 12:33:08 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:09.570 12:33:08 -- accel/accel.sh@72 -- # IFS== 00:05:09.570 12:33:08 -- accel/accel.sh@72 -- # read -r opc module 00:05:09.570 12:33:08 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:09.570 12:33:08 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:09.570 12:33:08 -- accel/accel.sh@72 -- # IFS== 00:05:09.570 12:33:08 -- accel/accel.sh@72 -- # read -r opc module 00:05:09.570 12:33:08 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:09.570 12:33:08 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:09.570 12:33:08 -- accel/accel.sh@72 -- # IFS== 00:05:09.570 12:33:08 -- accel/accel.sh@72 -- # read -r opc module 00:05:09.570 12:33:08 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:09.570 12:33:08 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:09.570 12:33:08 -- accel/accel.sh@72 -- # IFS== 00:05:09.570 12:33:08 -- accel/accel.sh@72 -- # read -r opc module 00:05:09.570 12:33:08 -- accel/accel.sh@73 -- # 
expected_opcs["$opc"]=software 00:05:09.570 12:33:08 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:09.570 12:33:08 -- accel/accel.sh@72 -- # IFS== 00:05:09.570 12:33:08 -- accel/accel.sh@72 -- # read -r opc module 00:05:09.570 12:33:08 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:09.570 12:33:08 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:09.570 12:33:08 -- accel/accel.sh@72 -- # IFS== 00:05:09.570 12:33:08 -- accel/accel.sh@72 -- # read -r opc module 00:05:09.570 12:33:08 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:09.570 12:33:08 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:09.570 12:33:08 -- accel/accel.sh@72 -- # IFS== 00:05:09.570 12:33:08 -- accel/accel.sh@72 -- # read -r opc module 00:05:09.570 12:33:08 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:09.570 12:33:08 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:09.570 12:33:08 -- accel/accel.sh@72 -- # IFS== 00:05:09.570 12:33:08 -- accel/accel.sh@72 -- # read -r opc module 00:05:09.570 12:33:08 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:09.570 12:33:08 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:09.570 12:33:08 -- accel/accel.sh@72 -- # IFS== 00:05:09.570 12:33:08 -- accel/accel.sh@72 -- # read -r opc module 00:05:09.570 12:33:08 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:09.570 12:33:08 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:09.570 12:33:08 -- accel/accel.sh@72 -- # IFS== 00:05:09.570 12:33:08 -- accel/accel.sh@72 -- # read -r opc module 00:05:09.570 12:33:08 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:09.570 12:33:08 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:09.570 12:33:08 -- accel/accel.sh@72 -- # IFS== 00:05:09.571 12:33:08 -- accel/accel.sh@72 -- # read -r opc module 00:05:09.571 12:33:08 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:09.571 12:33:08 -- accel/accel.sh@75 -- # killprocess 1063291 00:05:09.571 12:33:08 -- common/autotest_common.sh@936 -- # '[' -z 1063291 ']' 00:05:09.571 12:33:08 -- common/autotest_common.sh@940 -- # kill -0 1063291 00:05:09.571 12:33:08 -- common/autotest_common.sh@941 -- # uname 00:05:09.571 12:33:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:09.571 12:33:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1063291 00:05:09.571 12:33:08 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:09.571 12:33:08 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:09.571 12:33:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1063291' 00:05:09.571 killing process with pid 1063291 00:05:09.571 12:33:08 -- common/autotest_common.sh@955 -- # kill 1063291 00:05:09.571 12:33:08 -- common/autotest_common.sh@960 -- # wait 1063291 00:05:10.136 12:33:08 -- accel/accel.sh@76 -- # trap - ERR 00:05:10.136 12:33:08 -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:05:10.136 12:33:08 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:05:10.136 12:33:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:10.136 12:33:08 -- common/autotest_common.sh@10 -- # set +x 00:05:10.136 12:33:09 -- common/autotest_common.sh@1111 -- # accel_perf -h 00:05:10.136 12:33:09 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:05:10.136 12:33:09 -- accel/accel.sh@12 -- # 
build_accel_config 00:05:10.136 12:33:09 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:10.136 12:33:09 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:10.136 12:33:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:10.136 12:33:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:10.136 12:33:09 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:10.136 12:33:09 -- accel/accel.sh@40 -- # local IFS=, 00:05:10.136 12:33:09 -- accel/accel.sh@41 -- # jq -r . 00:05:10.136 12:33:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:10.136 12:33:09 -- common/autotest_common.sh@10 -- # set +x 00:05:10.136 12:33:09 -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:05:10.136 12:33:09 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:10.136 12:33:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:10.136 12:33:09 -- common/autotest_common.sh@10 -- # set +x 00:05:10.136 ************************************ 00:05:10.136 START TEST accel_missing_filename 00:05:10.136 ************************************ 00:05:10.136 12:33:09 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w compress 00:05:10.136 12:33:09 -- common/autotest_common.sh@638 -- # local es=0 00:05:10.136 12:33:09 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress 00:05:10.136 12:33:09 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:05:10.136 12:33:09 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:10.137 12:33:09 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:05:10.137 12:33:09 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:10.137 12:33:09 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress 00:05:10.137 12:33:09 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:05:10.137 12:33:09 -- accel/accel.sh@12 -- # build_accel_config 00:05:10.137 12:33:09 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:10.137 12:33:09 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:10.137 12:33:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:10.137 12:33:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:10.137 12:33:09 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:10.137 12:33:09 -- accel/accel.sh@40 -- # local IFS=, 00:05:10.137 12:33:09 -- accel/accel.sh@41 -- # jq -r . 00:05:10.137 [2024-04-16 12:33:09.171015] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 00:05:10.137 [2024-04-16 12:33:09.171080] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1063727 ] 00:05:10.137 EAL: No free 2048 kB hugepages reported on node 1 00:05:10.394 [2024-04-16 12:33:09.242618] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.394 [2024-04-16 12:33:09.356223] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.394 [2024-04-16 12:33:09.356905] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:05:10.394 [2024-04-16 12:33:09.413107] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:10.652 [2024-04-16 12:33:09.487532] accel_perf.c:1394:main: *ERROR*: ERROR starting application 00:05:10.652 A filename is required. 
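"A filename is required." below is the expected outcome: -w compress needs -l <input file>, and the harness runs accel_perf under a NOT wrapper, which the es= bookkeeping that follows implements by inverting the exit status (234 is folded to 106, mapped to 1, and the test passes because it is non-zero). A hedged sketch of that inversion:

    NOT() { if "$@"; then return 1; else return 0; fi; }      # an expected failure counts as a pass
    NOT ./spdk/build/examples/accel_perf -t 1 -w compress     # passes: no -l given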
00:05:10.652 12:33:09 -- common/autotest_common.sh@641 -- # es=234 00:05:10.652 12:33:09 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:10.652 12:33:09 -- common/autotest_common.sh@650 -- # es=106 00:05:10.652 12:33:09 -- common/autotest_common.sh@651 -- # case "$es" in 00:05:10.652 12:33:09 -- common/autotest_common.sh@658 -- # es=1 00:05:10.652 12:33:09 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:10.652 00:05:10.653 real 0m0.451s 00:05:10.653 user 0m0.338s 00:05:10.653 sys 0m0.145s 00:05:10.653 12:33:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:10.653 12:33:09 -- common/autotest_common.sh@10 -- # set +x 00:05:10.653 ************************************ 00:05:10.653 END TEST accel_missing_filename 00:05:10.653 ************************************ 00:05:10.653 12:33:09 -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:10.653 12:33:09 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:05:10.653 12:33:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:10.653 12:33:09 -- common/autotest_common.sh@10 -- # set +x 00:05:10.653 ************************************ 00:05:10.653 START TEST accel_compress_verify 00:05:10.653 ************************************ 00:05:10.653 12:33:09 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:10.653 12:33:09 -- common/autotest_common.sh@638 -- # local es=0 00:05:10.653 12:33:09 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:10.653 12:33:09 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:05:10.653 12:33:09 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:10.653 12:33:09 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:05:10.653 12:33:09 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:10.653 12:33:09 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:10.653 12:33:09 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:10.653 12:33:09 -- accel/accel.sh@12 -- # build_accel_config 00:05:10.653 12:33:09 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:10.653 12:33:09 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:10.653 12:33:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:10.653 12:33:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:10.653 12:33:09 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:10.653 12:33:09 -- accel/accel.sh@40 -- # local IFS=, 00:05:10.653 12:33:09 -- accel/accel.sh@41 -- # jq -r . 00:05:10.911 [2024-04-16 12:33:09.733116] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 
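The compress_verify run starting here is the mirrored negative case: -y asks accel_perf to verify results, compression does not support verification, so the app is expected to abort right after init. By hand (paths from the trace; that the non -y variant runs cleanly is an assumption, not shown in this log):

    ./spdk/build/examples/accel_perf -t 1 -w compress -l ./spdk/test/accel/bib      # assumption: compresses bib and reports stats
    ./spdk/build/examples/accel_perf -t 1 -w compress -l ./spdk/test/accel/bib -y   # aborts: verify unsupported for compress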
00:05:10.911 [2024-04-16 12:33:09.733180] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1063771 ] 00:05:10.911 EAL: No free 2048 kB hugepages reported on node 1 00:05:10.911 [2024-04-16 12:33:09.803251] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.911 [2024-04-16 12:33:09.917499] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.911 [2024-04-16 12:33:09.918215] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:05:10.911 [2024-04-16 12:33:09.978488] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:11.169 [2024-04-16 12:33:10.061418] accel_perf.c:1394:main: *ERROR*: ERROR starting application 00:05:11.169 00:05:11.169 Compression does not support the verify option, aborting. 00:05:11.169 12:33:10 -- common/autotest_common.sh@641 -- # es=161 00:05:11.169 12:33:10 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:11.169 12:33:10 -- common/autotest_common.sh@650 -- # es=33 00:05:11.169 12:33:10 -- common/autotest_common.sh@651 -- # case "$es" in 00:05:11.169 12:33:10 -- common/autotest_common.sh@658 -- # es=1 00:05:11.169 12:33:10 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:11.169 00:05:11.169 real 0m0.466s 00:05:11.169 user 0m0.343s 00:05:11.169 sys 0m0.152s 00:05:11.169 12:33:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:11.169 12:33:10 -- common/autotest_common.sh@10 -- # set +x 00:05:11.169 ************************************ 00:05:11.169 END TEST accel_compress_verify 00:05:11.169 ************************************ 00:05:11.169 12:33:10 -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:05:11.169 12:33:10 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:11.169 12:33:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:11.169 12:33:10 -- common/autotest_common.sh@10 -- # set +x 00:05:11.428 ************************************ 00:05:11.428 START TEST accel_wrong_workload 00:05:11.428 ************************************ 00:05:11.428 12:33:10 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w foobar 00:05:11.428 12:33:10 -- common/autotest_common.sh@638 -- # local es=0 00:05:11.428 12:33:10 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:05:11.428 12:33:10 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:05:11.428 12:33:10 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:11.428 12:33:10 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:05:11.428 12:33:10 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:11.428 12:33:10 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w foobar 00:05:11.428 12:33:10 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:05:11.428 12:33:10 -- accel/accel.sh@12 -- # build_accel_config 00:05:11.428 12:33:10 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:11.428 12:33:10 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:11.428 12:33:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:11.428 12:33:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:11.428 12:33:10 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:11.428 12:33:10 -- accel/accel.sh@40 -- # local IFS=, 00:05:11.428 12:33:10 -- accel/accel.sh@41 -- # 
jq -r . 00:05:11.428 Unsupported workload type: foobar 00:05:11.428 [2024-04-16 12:33:10.321044] app.c:1364:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:05:11.428 accel_perf options: 00:05:11.428 [-h help message] 00:05:11.428 [-q queue depth per core] 00:05:11.428 [-C for supported workloads, use this value to configure the io vector size to test (default 1)] 00:05:11.428 [-T number of threads per core] 00:05:11.428 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:11.428 [-t time in seconds] 00:05:11.428 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:11.428 [ dif_verify, dif_generate, dif_generate_copy] 00:05:11.428 [-M assign module to the operation, not compatible with accel_assign_opc RPC] 00:05:11.428 [-l for compress/decompress workloads, name of uncompressed input file] 00:05:11.428 [-S for crc32c workload, use this seed value (default 0)] 00:05:11.428 [-P for compare workload, percentage of operations that should miscompare (percent, default 0)] 00:05:11.428 [-f for fill workload, use this BYTE value (default 255)] 00:05:11.428 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:11.428 [-y verify result if this switch is on] 00:05:11.428 [-a tasks to allocate per core (default: same value as -q)] 00:05:11.428 Can be used to spread operations across a wider range of memory. 00:05:11.428 12:33:10 -- common/autotest_common.sh@641 -- # es=1 00:05:11.428 12:33:10 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:11.428 12:33:10 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:11.428 12:33:10 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:11.428 00:05:11.428 real 0m0.022s 00:05:11.428 user 0m0.011s 00:05:11.428 sys 0m0.011s 00:05:11.428 12:33:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:11.428 12:33:10 -- common/autotest_common.sh@10 -- # set +x 00:05:11.428 ************************************ 00:05:11.428 END TEST accel_wrong_workload 00:05:11.428 ************************************ 00:05:11.428 Error: writing output failed: Broken pipe 00:05:11.428 12:33:10 -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:05:11.428 12:33:10 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:05:11.428 12:33:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:11.428 12:33:10 -- common/autotest_common.sh@10 -- # set +x 00:05:11.428 ************************************ 00:05:11.428 START TEST accel_negative_buffers 00:05:11.428 ************************************ 00:05:11.428 12:33:10 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:05:11.428 12:33:10 -- common/autotest_common.sh@638 -- # local es=0 00:05:11.428 12:33:10 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:05:11.428 12:33:10 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:05:11.428 12:33:10 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:11.428 12:33:10 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:05:11.428 12:33:10 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:11.428 12:33:10 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w xor -y -x -1 00:05:11.428 12:33:10 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t
1 -w xor -y -x -1 00:05:11.428 12:33:10 -- accel/accel.sh@12 -- # build_accel_config 00:05:11.428 12:33:10 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:11.428 12:33:10 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:11.428 12:33:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:11.428 12:33:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:11.428 12:33:10 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:11.428 12:33:10 -- accel/accel.sh@40 -- # local IFS=, 00:05:11.428 12:33:10 -- accel/accel.sh@41 -- # jq -r . 00:05:11.428 -x option must be non-negative. 00:05:11.428 [2024-04-16 12:33:10.464192] app.c:1364:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:05:11.428 accel_perf options: 00:05:11.428 [-h help message] 00:05:11.428 [-q queue depth per core] 00:05:11.428 [-C for supported workloads, use this value to configure the io vector size to test (default 1)] 00:05:11.428 [-T number of threads per core] 00:05:11.428 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:11.428 [-t time in seconds] 00:05:11.428 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:11.428 [ dif_verify, dif_generate, dif_generate_copy] 00:05:11.428 [-M assign module to the operation, not compatible with accel_assign_opc RPC] 00:05:11.428 [-l for compress/decompress workloads, name of uncompressed input file] 00:05:11.428 [-S for crc32c workload, use this seed value (default 0)] 00:05:11.428 [-P for compare workload, percentage of operations that should miscompare (percent, default 0)] 00:05:11.428 [-f for fill workload, use this BYTE value (default 255)] 00:05:11.428 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:11.428 [-y verify result if this switch is on] 00:05:11.428 [-a tasks to allocate per core (default: same value as -q)] 00:05:11.428 Can be used to spread operations across a wider range of memory.
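For context: the usage text above documents the accel_perf example binary that this suite drives. A minimal sketch of a standalone invocation, using the binary path shown in the traces (the queue depth of 64 is illustrative and not taken from this run, and the harness's "-c /dev/fd/62" config argument is omitted):

  # software crc32c workload for 1 second at queue depth 64, with result verification (-y)
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -q 64 -t 1 -w crc32c -y

Both failures in this stretch of the log are deliberate: accel_wrong_workload passes the unsupported workload "foobar" and accel_negative_buffers passes "-x -1", and the es checks in the surrounding traces show the NOT helper counting the resulting non-zero exit status as the expected outcome.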
00:05:11.428 12:33:10 -- common/autotest_common.sh@641 -- # es=1 00:05:11.428 12:33:10 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:11.428 12:33:10 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:11.428 12:33:10 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:11.428 00:05:11.428 real 0m0.023s 00:05:11.428 user 0m0.011s 00:05:11.428 sys 0m0.012s 00:05:11.428 12:33:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:11.428 12:33:10 -- common/autotest_common.sh@10 -- # set +x 00:05:11.428 ************************************ 00:05:11.428 END TEST accel_negative_buffers 00:05:11.428 ************************************ 00:05:11.428 Error: writing output failed: Broken pipe 00:05:11.428 12:33:10 -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:05:11.428 12:33:10 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:05:11.428 12:33:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:11.428 12:33:10 -- common/autotest_common.sh@10 -- # set +x 00:05:11.686 ************************************ 00:05:11.686 START TEST accel_crc32c 00:05:11.686 ************************************ 00:05:11.686 12:33:10 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w crc32c -S 32 -y 00:05:11.686 12:33:10 -- accel/accel.sh@16 -- # local accel_opc 00:05:11.686 12:33:10 -- accel/accel.sh@17 -- # local accel_module 00:05:11.686 12:33:10 -- accel/accel.sh@19 -- # IFS=: 00:05:11.686 12:33:10 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:11.686 12:33:10 -- accel/accel.sh@19 -- # read -r var val 00:05:11.686 12:33:10 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:05:11.686 12:33:10 -- accel/accel.sh@12 -- # build_accel_config 00:05:11.686 12:33:10 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:11.686 12:33:10 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:11.686 12:33:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:11.686 12:33:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:11.686 12:33:10 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:11.686 12:33:10 -- accel/accel.sh@40 -- # local IFS=, 00:05:11.686 12:33:10 -- accel/accel.sh@41 -- # jq -r . 00:05:11.686 [2024-04-16 12:33:10.599647] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 
00:05:11.686 [2024-04-16 12:33:10.599706] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1063975 ] 00:05:11.686 EAL: No free 2048 kB hugepages reported on node 1 00:05:11.686 [2024-04-16 12:33:10.672545] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.945 [2024-04-16 12:33:10.790524] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.945 [2024-04-16 12:33:10.791196] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:05:11.945 12:33:10 -- accel/accel.sh@20 -- # val= 00:05:11.945 12:33:10 -- accel/accel.sh@21 -- # case "$var" in 00:05:11.945 12:33:10 -- accel/accel.sh@19 -- # IFS=: 00:05:11.945 12:33:10 -- accel/accel.sh@19 -- # read -r var val 00:05:11.945 12:33:10 -- accel/accel.sh@20 -- # val= 00:05:11.945 12:33:10 -- accel/accel.sh@21 -- # case "$var" in 00:05:11.945 12:33:10 -- accel/accel.sh@19 -- # IFS=: 00:05:11.945 12:33:10 -- accel/accel.sh@19 -- # read -r var val 00:05:11.945 12:33:10 -- accel/accel.sh@20 -- # val=0x1 00:05:11.945 12:33:10 -- accel/accel.sh@21 -- # case "$var" in 00:05:11.945 12:33:10 -- accel/accel.sh@19 -- # IFS=: 00:05:11.945 12:33:10 -- accel/accel.sh@19 -- # read -r var val 00:05:11.945 12:33:10 -- accel/accel.sh@20 -- # val= 00:05:11.945 12:33:10 -- accel/accel.sh@21 -- # case "$var" in 00:05:11.945 12:33:10 -- accel/accel.sh@19 -- # IFS=: 00:05:11.945 12:33:10 -- accel/accel.sh@19 -- # read -r var val 00:05:11.945 12:33:10 -- accel/accel.sh@20 -- # val= 00:05:11.945 12:33:10 -- accel/accel.sh@21 -- # case "$var" in 00:05:11.945 12:33:10 -- accel/accel.sh@19 -- # IFS=: 00:05:11.945 12:33:10 -- accel/accel.sh@19 -- # read -r var val 00:05:11.945 12:33:10 -- accel/accel.sh@20 -- # val=crc32c 00:05:11.945 12:33:10 -- accel/accel.sh@21 -- # case "$var" in 00:05:11.945 12:33:10 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:11.945 12:33:10 -- accel/accel.sh@19 -- # IFS=: 00:05:11.945 12:33:10 -- accel/accel.sh@19 -- # read -r var val 00:05:11.945 12:33:10 -- accel/accel.sh@20 -- # val=32 00:05:11.945 12:33:10 -- accel/accel.sh@21 -- # case "$var" in 00:05:11.945 12:33:10 -- accel/accel.sh@19 -- # IFS=: 00:05:11.945 12:33:10 -- accel/accel.sh@19 -- # read -r var val 00:05:11.945 12:33:10 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:11.945 12:33:10 -- accel/accel.sh@21 -- # case "$var" in 00:05:11.945 12:33:10 -- accel/accel.sh@19 -- # IFS=: 00:05:11.945 12:33:10 -- accel/accel.sh@19 -- # read -r var val 00:05:11.945 12:33:10 -- accel/accel.sh@20 -- # val= 00:05:11.945 12:33:10 -- accel/accel.sh@21 -- # case "$var" in 00:05:11.945 12:33:10 -- accel/accel.sh@19 -- # IFS=: 00:05:11.945 12:33:10 -- accel/accel.sh@19 -- # read -r var val 00:05:11.945 12:33:10 -- accel/accel.sh@20 -- # val=software 00:05:11.945 12:33:10 -- accel/accel.sh@21 -- # case "$var" in 00:05:11.945 12:33:10 -- accel/accel.sh@22 -- # accel_module=software 00:05:11.945 12:33:10 -- accel/accel.sh@19 -- # IFS=: 00:05:11.945 12:33:10 -- accel/accel.sh@19 -- # read -r var val 00:05:11.945 12:33:10 -- accel/accel.sh@20 -- # val=32 00:05:11.945 12:33:10 -- accel/accel.sh@21 -- # case "$var" in 00:05:11.945 12:33:10 -- accel/accel.sh@19 -- # IFS=: 00:05:11.945 12:33:10 -- accel/accel.sh@19 -- # read -r var val 00:05:11.945 12:33:10 -- accel/accel.sh@20 -- # val=32 00:05:11.945 12:33:10 -- accel/accel.sh@21 -- # case "$var" in 00:05:11.945 12:33:10 -- 
accel/accel.sh@19 -- # IFS=: 00:05:11.945 12:33:10 -- accel/accel.sh@19 -- # read -r var val 00:05:11.945 12:33:10 -- accel/accel.sh@20 -- # val=1 00:05:11.945 12:33:10 -- accel/accel.sh@21 -- # case "$var" in 00:05:11.945 12:33:10 -- accel/accel.sh@19 -- # IFS=: 00:05:11.945 12:33:10 -- accel/accel.sh@19 -- # read -r var val 00:05:11.945 12:33:10 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:11.945 12:33:10 -- accel/accel.sh@21 -- # case "$var" in 00:05:11.945 12:33:10 -- accel/accel.sh@19 -- # IFS=: 00:05:11.945 12:33:10 -- accel/accel.sh@19 -- # read -r var val 00:05:11.945 12:33:10 -- accel/accel.sh@20 -- # val=Yes 00:05:11.945 12:33:10 -- accel/accel.sh@21 -- # case "$var" in 00:05:11.945 12:33:10 -- accel/accel.sh@19 -- # IFS=: 00:05:11.945 12:33:10 -- accel/accel.sh@19 -- # read -r var val 00:05:11.945 12:33:10 -- accel/accel.sh@20 -- # val= 00:05:11.945 12:33:10 -- accel/accel.sh@21 -- # case "$var" in 00:05:11.945 12:33:10 -- accel/accel.sh@19 -- # IFS=: 00:05:11.945 12:33:10 -- accel/accel.sh@19 -- # read -r var val 00:05:11.945 12:33:10 -- accel/accel.sh@20 -- # val= 00:05:11.945 12:33:10 -- accel/accel.sh@21 -- # case "$var" in 00:05:11.945 12:33:10 -- accel/accel.sh@19 -- # IFS=: 00:05:11.945 12:33:10 -- accel/accel.sh@19 -- # read -r var val 00:05:13.317 12:33:12 -- accel/accel.sh@20 -- # val= 00:05:13.317 12:33:12 -- accel/accel.sh@21 -- # case "$var" in 00:05:13.317 12:33:12 -- accel/accel.sh@19 -- # IFS=: 00:05:13.317 12:33:12 -- accel/accel.sh@19 -- # read -r var val 00:05:13.317 12:33:12 -- accel/accel.sh@20 -- # val= 00:05:13.317 12:33:12 -- accel/accel.sh@21 -- # case "$var" in 00:05:13.317 12:33:12 -- accel/accel.sh@19 -- # IFS=: 00:05:13.317 12:33:12 -- accel/accel.sh@19 -- # read -r var val 00:05:13.317 12:33:12 -- accel/accel.sh@20 -- # val= 00:05:13.317 12:33:12 -- accel/accel.sh@21 -- # case "$var" in 00:05:13.317 12:33:12 -- accel/accel.sh@19 -- # IFS=: 00:05:13.317 12:33:12 -- accel/accel.sh@19 -- # read -r var val 00:05:13.317 12:33:12 -- accel/accel.sh@20 -- # val= 00:05:13.317 12:33:12 -- accel/accel.sh@21 -- # case "$var" in 00:05:13.317 12:33:12 -- accel/accel.sh@19 -- # IFS=: 00:05:13.317 12:33:12 -- accel/accel.sh@19 -- # read -r var val 00:05:13.317 12:33:12 -- accel/accel.sh@20 -- # val= 00:05:13.317 12:33:12 -- accel/accel.sh@21 -- # case "$var" in 00:05:13.317 12:33:12 -- accel/accel.sh@19 -- # IFS=: 00:05:13.317 12:33:12 -- accel/accel.sh@19 -- # read -r var val 00:05:13.317 12:33:12 -- accel/accel.sh@20 -- # val= 00:05:13.317 12:33:12 -- accel/accel.sh@21 -- # case "$var" in 00:05:13.317 12:33:12 -- accel/accel.sh@19 -- # IFS=: 00:05:13.317 12:33:12 -- accel/accel.sh@19 -- # read -r var val 00:05:13.317 12:33:12 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:13.317 12:33:12 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:13.317 12:33:12 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:13.317 00:05:13.317 real 0m1.477s 00:05:13.317 user 0m1.330s 00:05:13.317 sys 0m0.150s 00:05:13.317 12:33:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:13.317 12:33:12 -- common/autotest_common.sh@10 -- # set +x 00:05:13.317 ************************************ 00:05:13.317 END TEST accel_crc32c 00:05:13.317 ************************************ 00:05:13.317 12:33:12 -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:05:13.317 12:33:12 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:05:13.317 12:33:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:13.317 12:33:12 -- 
common/autotest_common.sh@10 -- # set +x 00:05:13.317 ************************************ 00:05:13.317 START TEST accel_crc32c_C2 00:05:13.317 ************************************ 00:05:13.317 12:33:12 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w crc32c -y -C 2 00:05:13.317 12:33:12 -- accel/accel.sh@16 -- # local accel_opc 00:05:13.317 12:33:12 -- accel/accel.sh@17 -- # local accel_module 00:05:13.317 12:33:12 -- accel/accel.sh@19 -- # IFS=: 00:05:13.317 12:33:12 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:05:13.317 12:33:12 -- accel/accel.sh@19 -- # read -r var val 00:05:13.317 12:33:12 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:05:13.317 12:33:12 -- accel/accel.sh@12 -- # build_accel_config 00:05:13.317 12:33:12 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:13.317 12:33:12 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:13.317 12:33:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:13.317 12:33:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:13.317 12:33:12 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:13.317 12:33:12 -- accel/accel.sh@40 -- # local IFS=, 00:05:13.317 12:33:12 -- accel/accel.sh@41 -- # jq -r . 00:05:13.317 [2024-04-16 12:33:12.204635] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 00:05:13.317 [2024-04-16 12:33:12.204709] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1064150 ] 00:05:13.317 EAL: No free 2048 kB hugepages reported on node 1 00:05:13.317 [2024-04-16 12:33:12.278798] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:13.576 [2024-04-16 12:33:12.395272] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.576 [2024-04-16 12:33:12.395927] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:05:13.576 12:33:12 -- accel/accel.sh@20 -- # val= 00:05:13.576 12:33:12 -- accel/accel.sh@21 -- # case "$var" in 00:05:13.576 12:33:12 -- accel/accel.sh@19 -- # IFS=: 00:05:13.576 12:33:12 -- accel/accel.sh@19 -- # read -r var val 00:05:13.576 12:33:12 -- accel/accel.sh@20 -- # val= 00:05:13.576 12:33:12 -- accel/accel.sh@21 -- # case "$var" in 00:05:13.576 12:33:12 -- accel/accel.sh@19 -- # IFS=: 00:05:13.576 12:33:12 -- accel/accel.sh@19 -- # read -r var val 00:05:13.576 12:33:12 -- accel/accel.sh@20 -- # val=0x1 00:05:13.576 12:33:12 -- accel/accel.sh@21 -- # case "$var" in 00:05:13.576 12:33:12 -- accel/accel.sh@19 -- # IFS=: 00:05:13.576 12:33:12 -- accel/accel.sh@19 -- # read -r var val 00:05:13.576 12:33:12 -- accel/accel.sh@20 -- # val= 00:05:13.576 12:33:12 -- accel/accel.sh@21 -- # case "$var" in 00:05:13.576 12:33:12 -- accel/accel.sh@19 -- # IFS=: 00:05:13.576 12:33:12 -- accel/accel.sh@19 -- # read -r var val 00:05:13.576 12:33:12 -- accel/accel.sh@20 -- # val= 00:05:13.576 12:33:12 -- accel/accel.sh@21 -- # case "$var" in 00:05:13.576 12:33:12 -- accel/accel.sh@19 -- # IFS=: 00:05:13.576 12:33:12 -- accel/accel.sh@19 -- # read -r var val 00:05:13.576 12:33:12 -- accel/accel.sh@20 -- # val=crc32c 00:05:13.576 12:33:12 -- accel/accel.sh@21 -- # case "$var" in 00:05:13.576 12:33:12 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:13.576 12:33:12 -- accel/accel.sh@19 -- # IFS=: 00:05:13.576 12:33:12 -- accel/accel.sh@19 -- # read -r var val 00:05:13.576 12:33:12 -- 
accel/accel.sh@20 -- # val=0 00:05:13.576 12:33:12 -- accel/accel.sh@21 -- # case "$var" in 00:05:13.576 12:33:12 -- accel/accel.sh@19 -- # IFS=: 00:05:13.576 12:33:12 -- accel/accel.sh@19 -- # read -r var val 00:05:13.576 12:33:12 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:13.576 12:33:12 -- accel/accel.sh@21 -- # case "$var" in 00:05:13.576 12:33:12 -- accel/accel.sh@19 -- # IFS=: 00:05:13.576 12:33:12 -- accel/accel.sh@19 -- # read -r var val 00:05:13.576 12:33:12 -- accel/accel.sh@20 -- # val= 00:05:13.576 12:33:12 -- accel/accel.sh@21 -- # case "$var" in 00:05:13.576 12:33:12 -- accel/accel.sh@19 -- # IFS=: 00:05:13.576 12:33:12 -- accel/accel.sh@19 -- # read -r var val 00:05:13.576 12:33:12 -- accel/accel.sh@20 -- # val=software 00:05:13.576 12:33:12 -- accel/accel.sh@21 -- # case "$var" in 00:05:13.576 12:33:12 -- accel/accel.sh@22 -- # accel_module=software 00:05:13.576 12:33:12 -- accel/accel.sh@19 -- # IFS=: 00:05:13.576 12:33:12 -- accel/accel.sh@19 -- # read -r var val 00:05:13.576 12:33:12 -- accel/accel.sh@20 -- # val=32 00:05:13.576 12:33:12 -- accel/accel.sh@21 -- # case "$var" in 00:05:13.576 12:33:12 -- accel/accel.sh@19 -- # IFS=: 00:05:13.576 12:33:12 -- accel/accel.sh@19 -- # read -r var val 00:05:13.576 12:33:12 -- accel/accel.sh@20 -- # val=32 00:05:13.576 12:33:12 -- accel/accel.sh@21 -- # case "$var" in 00:05:13.576 12:33:12 -- accel/accel.sh@19 -- # IFS=: 00:05:13.576 12:33:12 -- accel/accel.sh@19 -- # read -r var val 00:05:13.576 12:33:12 -- accel/accel.sh@20 -- # val=1 00:05:13.576 12:33:12 -- accel/accel.sh@21 -- # case "$var" in 00:05:13.576 12:33:12 -- accel/accel.sh@19 -- # IFS=: 00:05:13.576 12:33:12 -- accel/accel.sh@19 -- # read -r var val 00:05:13.576 12:33:12 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:13.576 12:33:12 -- accel/accel.sh@21 -- # case "$var" in 00:05:13.576 12:33:12 -- accel/accel.sh@19 -- # IFS=: 00:05:13.576 12:33:12 -- accel/accel.sh@19 -- # read -r var val 00:05:13.576 12:33:12 -- accel/accel.sh@20 -- # val=Yes 00:05:13.576 12:33:12 -- accel/accel.sh@21 -- # case "$var" in 00:05:13.576 12:33:12 -- accel/accel.sh@19 -- # IFS=: 00:05:13.576 12:33:12 -- accel/accel.sh@19 -- # read -r var val 00:05:13.576 12:33:12 -- accel/accel.sh@20 -- # val= 00:05:13.576 12:33:12 -- accel/accel.sh@21 -- # case "$var" in 00:05:13.576 12:33:12 -- accel/accel.sh@19 -- # IFS=: 00:05:13.576 12:33:12 -- accel/accel.sh@19 -- # read -r var val 00:05:13.576 12:33:12 -- accel/accel.sh@20 -- # val= 00:05:13.576 12:33:12 -- accel/accel.sh@21 -- # case "$var" in 00:05:13.576 12:33:12 -- accel/accel.sh@19 -- # IFS=: 00:05:13.576 12:33:12 -- accel/accel.sh@19 -- # read -r var val 00:05:14.948 12:33:13 -- accel/accel.sh@20 -- # val= 00:05:14.948 12:33:13 -- accel/accel.sh@21 -- # case "$var" in 00:05:14.948 12:33:13 -- accel/accel.sh@19 -- # IFS=: 00:05:14.948 12:33:13 -- accel/accel.sh@19 -- # read -r var val 00:05:14.948 12:33:13 -- accel/accel.sh@20 -- # val= 00:05:14.948 12:33:13 -- accel/accel.sh@21 -- # case "$var" in 00:05:14.948 12:33:13 -- accel/accel.sh@19 -- # IFS=: 00:05:14.949 12:33:13 -- accel/accel.sh@19 -- # read -r var val 00:05:14.949 12:33:13 -- accel/accel.sh@20 -- # val= 00:05:14.949 12:33:13 -- accel/accel.sh@21 -- # case "$var" in 00:05:14.949 12:33:13 -- accel/accel.sh@19 -- # IFS=: 00:05:14.949 12:33:13 -- accel/accel.sh@19 -- # read -r var val 00:05:14.949 12:33:13 -- accel/accel.sh@20 -- # val= 00:05:14.949 12:33:13 -- accel/accel.sh@21 -- # case "$var" in 00:05:14.949 12:33:13 -- accel/accel.sh@19 -- # IFS=: 00:05:14.949 12:33:13 
-- accel/accel.sh@19 -- # read -r var val 00:05:14.949 12:33:13 -- accel/accel.sh@20 -- # val= 00:05:14.949 12:33:13 -- accel/accel.sh@21 -- # case "$var" in 00:05:14.949 12:33:13 -- accel/accel.sh@19 -- # IFS=: 00:05:14.949 12:33:13 -- accel/accel.sh@19 -- # read -r var val 00:05:14.949 12:33:13 -- accel/accel.sh@20 -- # val= 00:05:14.949 12:33:13 -- accel/accel.sh@21 -- # case "$var" in 00:05:14.949 12:33:13 -- accel/accel.sh@19 -- # IFS=: 00:05:14.949 12:33:13 -- accel/accel.sh@19 -- # read -r var val 00:05:14.949 12:33:13 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:14.949 12:33:13 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:14.949 12:33:13 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:14.949 00:05:14.949 real 0m1.473s 00:05:14.949 user 0m1.325s 00:05:14.949 sys 0m0.150s 00:05:14.949 12:33:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:14.949 12:33:13 -- common/autotest_common.sh@10 -- # set +x 00:05:14.949 ************************************ 00:05:14.949 END TEST accel_crc32c_C2 00:05:14.949 ************************************ 00:05:14.949 12:33:13 -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:05:14.949 12:33:13 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:14.949 12:33:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:14.949 12:33:13 -- common/autotest_common.sh@10 -- # set +x 00:05:14.949 ************************************ 00:05:14.949 START TEST accel_copy 00:05:14.949 ************************************ 00:05:14.949 12:33:13 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy -y 00:05:14.949 12:33:13 -- accel/accel.sh@16 -- # local accel_opc 00:05:14.949 12:33:13 -- accel/accel.sh@17 -- # local accel_module 00:05:14.949 12:33:13 -- accel/accel.sh@19 -- # IFS=: 00:05:14.949 12:33:13 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:05:14.949 12:33:13 -- accel/accel.sh@19 -- # read -r var val 00:05:14.949 12:33:13 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:05:14.949 12:33:13 -- accel/accel.sh@12 -- # build_accel_config 00:05:14.949 12:33:13 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:14.949 12:33:13 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:14.949 12:33:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:14.949 12:33:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:14.949 12:33:13 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:14.949 12:33:13 -- accel/accel.sh@40 -- # local IFS=, 00:05:14.949 12:33:13 -- accel/accel.sh@41 -- # jq -r . 00:05:14.949 [2024-04-16 12:33:13.804113] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 
00:05:14.949 [2024-04-16 12:33:13.804175] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1064423 ] 00:05:14.949 EAL: No free 2048 kB hugepages reported on node 1 00:05:14.949 [2024-04-16 12:33:13.876920] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.949 [2024-04-16 12:33:13.993283] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.949 [2024-04-16 12:33:13.993998] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:05:15.207 12:33:14 -- accel/accel.sh@20 -- # val= 00:05:15.207 12:33:14 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.207 12:33:14 -- accel/accel.sh@19 -- # IFS=: 00:05:15.207 12:33:14 -- accel/accel.sh@19 -- # read -r var val 00:05:15.207 12:33:14 -- accel/accel.sh@20 -- # val= 00:05:15.207 12:33:14 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.207 12:33:14 -- accel/accel.sh@19 -- # IFS=: 00:05:15.207 12:33:14 -- accel/accel.sh@19 -- # read -r var val 00:05:15.207 12:33:14 -- accel/accel.sh@20 -- # val=0x1 00:05:15.207 12:33:14 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.207 12:33:14 -- accel/accel.sh@19 -- # IFS=: 00:05:15.207 12:33:14 -- accel/accel.sh@19 -- # read -r var val 00:05:15.207 12:33:14 -- accel/accel.sh@20 -- # val= 00:05:15.207 12:33:14 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.207 12:33:14 -- accel/accel.sh@19 -- # IFS=: 00:05:15.207 12:33:14 -- accel/accel.sh@19 -- # read -r var val 00:05:15.207 12:33:14 -- accel/accel.sh@20 -- # val= 00:05:15.207 12:33:14 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.207 12:33:14 -- accel/accel.sh@19 -- # IFS=: 00:05:15.207 12:33:14 -- accel/accel.sh@19 -- # read -r var val 00:05:15.207 12:33:14 -- accel/accel.sh@20 -- # val=copy 00:05:15.207 12:33:14 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.207 12:33:14 -- accel/accel.sh@23 -- # accel_opc=copy 00:05:15.207 12:33:14 -- accel/accel.sh@19 -- # IFS=: 00:05:15.207 12:33:14 -- accel/accel.sh@19 -- # read -r var val 00:05:15.207 12:33:14 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:15.207 12:33:14 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.207 12:33:14 -- accel/accel.sh@19 -- # IFS=: 00:05:15.207 12:33:14 -- accel/accel.sh@19 -- # read -r var val 00:05:15.207 12:33:14 -- accel/accel.sh@20 -- # val= 00:05:15.207 12:33:14 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.207 12:33:14 -- accel/accel.sh@19 -- # IFS=: 00:05:15.207 12:33:14 -- accel/accel.sh@19 -- # read -r var val 00:05:15.207 12:33:14 -- accel/accel.sh@20 -- # val=software 00:05:15.207 12:33:14 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.207 12:33:14 -- accel/accel.sh@22 -- # accel_module=software 00:05:15.207 12:33:14 -- accel/accel.sh@19 -- # IFS=: 00:05:15.207 12:33:14 -- accel/accel.sh@19 -- # read -r var val 00:05:15.207 12:33:14 -- accel/accel.sh@20 -- # val=32 00:05:15.207 12:33:14 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.207 12:33:14 -- accel/accel.sh@19 -- # IFS=: 00:05:15.207 12:33:14 -- accel/accel.sh@19 -- # read -r var val 00:05:15.207 12:33:14 -- accel/accel.sh@20 -- # val=32 00:05:15.207 12:33:14 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.207 12:33:14 -- accel/accel.sh@19 -- # IFS=: 00:05:15.207 12:33:14 -- accel/accel.sh@19 -- # read -r var val 00:05:15.207 12:33:14 -- accel/accel.sh@20 -- # val=1 00:05:15.207 12:33:14 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.207 12:33:14 -- 
accel/accel.sh@19 -- # IFS=: 00:05:15.207 12:33:14 -- accel/accel.sh@19 -- # read -r var val 00:05:15.207 12:33:14 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:15.207 12:33:14 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.207 12:33:14 -- accel/accel.sh@19 -- # IFS=: 00:05:15.207 12:33:14 -- accel/accel.sh@19 -- # read -r var val 00:05:15.207 12:33:14 -- accel/accel.sh@20 -- # val=Yes 00:05:15.207 12:33:14 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.207 12:33:14 -- accel/accel.sh@19 -- # IFS=: 00:05:15.207 12:33:14 -- accel/accel.sh@19 -- # read -r var val 00:05:15.207 12:33:14 -- accel/accel.sh@20 -- # val= 00:05:15.207 12:33:14 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.207 12:33:14 -- accel/accel.sh@19 -- # IFS=: 00:05:15.207 12:33:14 -- accel/accel.sh@19 -- # read -r var val 00:05:15.207 12:33:14 -- accel/accel.sh@20 -- # val= 00:05:15.207 12:33:14 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.207 12:33:14 -- accel/accel.sh@19 -- # IFS=: 00:05:15.207 12:33:14 -- accel/accel.sh@19 -- # read -r var val 00:05:16.582 12:33:15 -- accel/accel.sh@20 -- # val= 00:05:16.582 12:33:15 -- accel/accel.sh@21 -- # case "$var" in 00:05:16.582 12:33:15 -- accel/accel.sh@19 -- # IFS=: 00:05:16.582 12:33:15 -- accel/accel.sh@19 -- # read -r var val 00:05:16.582 12:33:15 -- accel/accel.sh@20 -- # val= 00:05:16.582 12:33:15 -- accel/accel.sh@21 -- # case "$var" in 00:05:16.582 12:33:15 -- accel/accel.sh@19 -- # IFS=: 00:05:16.582 12:33:15 -- accel/accel.sh@19 -- # read -r var val 00:05:16.582 12:33:15 -- accel/accel.sh@20 -- # val= 00:05:16.582 12:33:15 -- accel/accel.sh@21 -- # case "$var" in 00:05:16.582 12:33:15 -- accel/accel.sh@19 -- # IFS=: 00:05:16.582 12:33:15 -- accel/accel.sh@19 -- # read -r var val 00:05:16.582 12:33:15 -- accel/accel.sh@20 -- # val= 00:05:16.582 12:33:15 -- accel/accel.sh@21 -- # case "$var" in 00:05:16.582 12:33:15 -- accel/accel.sh@19 -- # IFS=: 00:05:16.582 12:33:15 -- accel/accel.sh@19 -- # read -r var val 00:05:16.582 12:33:15 -- accel/accel.sh@20 -- # val= 00:05:16.582 12:33:15 -- accel/accel.sh@21 -- # case "$var" in 00:05:16.582 12:33:15 -- accel/accel.sh@19 -- # IFS=: 00:05:16.582 12:33:15 -- accel/accel.sh@19 -- # read -r var val 00:05:16.582 12:33:15 -- accel/accel.sh@20 -- # val= 00:05:16.582 12:33:15 -- accel/accel.sh@21 -- # case "$var" in 00:05:16.582 12:33:15 -- accel/accel.sh@19 -- # IFS=: 00:05:16.582 12:33:15 -- accel/accel.sh@19 -- # read -r var val 00:05:16.582 12:33:15 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:16.582 12:33:15 -- accel/accel.sh@27 -- # [[ -n copy ]] 00:05:16.582 12:33:15 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:16.582 00:05:16.582 real 0m1.485s 00:05:16.582 user 0m1.333s 00:05:16.582 sys 0m0.153s 00:05:16.582 12:33:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:16.582 12:33:15 -- common/autotest_common.sh@10 -- # set +x 00:05:16.582 ************************************ 00:05:16.582 END TEST accel_copy 00:05:16.582 ************************************ 00:05:16.582 12:33:15 -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:16.582 12:33:15 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:05:16.582 12:33:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:16.582 12:33:15 -- common/autotest_common.sh@10 -- # set +x 00:05:16.582 ************************************ 00:05:16.582 START TEST accel_fill 00:05:16.582 ************************************ 00:05:16.582 12:33:15 -- common/autotest_common.sh@1111 -- 
# accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:16.582 12:33:15 -- accel/accel.sh@16 -- # local accel_opc 00:05:16.582 12:33:15 -- accel/accel.sh@17 -- # local accel_module 00:05:16.582 12:33:15 -- accel/accel.sh@19 -- # IFS=: 00:05:16.582 12:33:15 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:16.582 12:33:15 -- accel/accel.sh@19 -- # read -r var val 00:05:16.582 12:33:15 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:16.582 12:33:15 -- accel/accel.sh@12 -- # build_accel_config 00:05:16.582 12:33:15 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:16.582 12:33:15 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:16.582 12:33:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:16.582 12:33:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:16.582 12:33:15 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:16.582 12:33:15 -- accel/accel.sh@40 -- # local IFS=, 00:05:16.582 12:33:15 -- accel/accel.sh@41 -- # jq -r . 00:05:16.582 [2024-04-16 12:33:15.408930] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 00:05:16.582 [2024-04-16 12:33:15.408991] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1064590 ] 00:05:16.582 EAL: No free 2048 kB hugepages reported on node 1 00:05:16.582 [2024-04-16 12:33:15.482041] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:16.582 [2024-04-16 12:33:15.595723] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.582 [2024-04-16 12:33:15.596366] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:05:16.840 12:33:15 -- accel/accel.sh@20 -- # val= 00:05:16.840 12:33:15 -- accel/accel.sh@21 -- # case "$var" in 00:05:16.840 12:33:15 -- accel/accel.sh@19 -- # IFS=: 00:05:16.841 12:33:15 -- accel/accel.sh@19 -- # read -r var val 00:05:16.841 12:33:15 -- accel/accel.sh@20 -- # val= 00:05:16.841 12:33:15 -- accel/accel.sh@21 -- # case "$var" in 00:05:16.841 12:33:15 -- accel/accel.sh@19 -- # IFS=: 00:05:16.841 12:33:15 -- accel/accel.sh@19 -- # read -r var val 00:05:16.841 12:33:15 -- accel/accel.sh@20 -- # val=0x1 00:05:16.841 12:33:15 -- accel/accel.sh@21 -- # case "$var" in 00:05:16.841 12:33:15 -- accel/accel.sh@19 -- # IFS=: 00:05:16.841 12:33:15 -- accel/accel.sh@19 -- # read -r var val 00:05:16.841 12:33:15 -- accel/accel.sh@20 -- # val= 00:05:16.841 12:33:15 -- accel/accel.sh@21 -- # case "$var" in 00:05:16.841 12:33:15 -- accel/accel.sh@19 -- # IFS=: 00:05:16.841 12:33:15 -- accel/accel.sh@19 -- # read -r var val 00:05:16.841 12:33:15 -- accel/accel.sh@20 -- # val= 00:05:16.841 12:33:15 -- accel/accel.sh@21 -- # case "$var" in 00:05:16.841 12:33:15 -- accel/accel.sh@19 -- # IFS=: 00:05:16.841 12:33:15 -- accel/accel.sh@19 -- # read -r var val 00:05:16.841 12:33:15 -- accel/accel.sh@20 -- # val=fill 00:05:16.841 12:33:15 -- accel/accel.sh@21 -- # case "$var" in 00:05:16.841 12:33:15 -- accel/accel.sh@23 -- # accel_opc=fill 00:05:16.841 12:33:15 -- accel/accel.sh@19 -- # IFS=: 00:05:16.841 12:33:15 -- accel/accel.sh@19 -- # read -r var val 00:05:16.841 12:33:15 -- accel/accel.sh@20 -- # val=0x80 00:05:16.841 12:33:15 -- accel/accel.sh@21 -- # case "$var" in 00:05:16.841 12:33:15 -- accel/accel.sh@19 -- # IFS=: 00:05:16.841 12:33:15 -- accel/accel.sh@19 -- # read -r var val 
00:05:16.841 12:33:15 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:16.841 12:33:15 -- accel/accel.sh@21 -- # case "$var" in 00:05:16.841 12:33:15 -- accel/accel.sh@19 -- # IFS=: 00:05:16.841 12:33:15 -- accel/accel.sh@19 -- # read -r var val 00:05:16.841 12:33:15 -- accel/accel.sh@20 -- # val= 00:05:16.841 12:33:15 -- accel/accel.sh@21 -- # case "$var" in 00:05:16.841 12:33:15 -- accel/accel.sh@19 -- # IFS=: 00:05:16.841 12:33:15 -- accel/accel.sh@19 -- # read -r var val 00:05:16.841 12:33:15 -- accel/accel.sh@20 -- # val=software 00:05:16.841 12:33:15 -- accel/accel.sh@21 -- # case "$var" in 00:05:16.841 12:33:15 -- accel/accel.sh@22 -- # accel_module=software 00:05:16.841 12:33:15 -- accel/accel.sh@19 -- # IFS=: 00:05:16.841 12:33:15 -- accel/accel.sh@19 -- # read -r var val 00:05:16.841 12:33:15 -- accel/accel.sh@20 -- # val=64 00:05:16.841 12:33:15 -- accel/accel.sh@21 -- # case "$var" in 00:05:16.841 12:33:15 -- accel/accel.sh@19 -- # IFS=: 00:05:16.841 12:33:15 -- accel/accel.sh@19 -- # read -r var val 00:05:16.841 12:33:15 -- accel/accel.sh@20 -- # val=64 00:05:16.841 12:33:15 -- accel/accel.sh@21 -- # case "$var" in 00:05:16.841 12:33:15 -- accel/accel.sh@19 -- # IFS=: 00:05:16.841 12:33:15 -- accel/accel.sh@19 -- # read -r var val 00:05:16.841 12:33:15 -- accel/accel.sh@20 -- # val=1 00:05:16.841 12:33:15 -- accel/accel.sh@21 -- # case "$var" in 00:05:16.841 12:33:15 -- accel/accel.sh@19 -- # IFS=: 00:05:16.841 12:33:15 -- accel/accel.sh@19 -- # read -r var val 00:05:16.841 12:33:15 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:16.841 12:33:15 -- accel/accel.sh@21 -- # case "$var" in 00:05:16.841 12:33:15 -- accel/accel.sh@19 -- # IFS=: 00:05:16.841 12:33:15 -- accel/accel.sh@19 -- # read -r var val 00:05:16.841 12:33:15 -- accel/accel.sh@20 -- # val=Yes 00:05:16.841 12:33:15 -- accel/accel.sh@21 -- # case "$var" in 00:05:16.841 12:33:15 -- accel/accel.sh@19 -- # IFS=: 00:05:16.841 12:33:15 -- accel/accel.sh@19 -- # read -r var val 00:05:16.841 12:33:15 -- accel/accel.sh@20 -- # val= 00:05:16.841 12:33:15 -- accel/accel.sh@21 -- # case "$var" in 00:05:16.841 12:33:15 -- accel/accel.sh@19 -- # IFS=: 00:05:16.841 12:33:15 -- accel/accel.sh@19 -- # read -r var val 00:05:16.841 12:33:15 -- accel/accel.sh@20 -- # val= 00:05:16.841 12:33:15 -- accel/accel.sh@21 -- # case "$var" in 00:05:16.841 12:33:15 -- accel/accel.sh@19 -- # IFS=: 00:05:16.841 12:33:15 -- accel/accel.sh@19 -- # read -r var val 00:05:18.247 12:33:16 -- accel/accel.sh@20 -- # val= 00:05:18.247 12:33:16 -- accel/accel.sh@21 -- # case "$var" in 00:05:18.247 12:33:16 -- accel/accel.sh@19 -- # IFS=: 00:05:18.247 12:33:16 -- accel/accel.sh@19 -- # read -r var val 00:05:18.247 12:33:16 -- accel/accel.sh@20 -- # val= 00:05:18.247 12:33:16 -- accel/accel.sh@21 -- # case "$var" in 00:05:18.248 12:33:16 -- accel/accel.sh@19 -- # IFS=: 00:05:18.248 12:33:16 -- accel/accel.sh@19 -- # read -r var val 00:05:18.248 12:33:16 -- accel/accel.sh@20 -- # val= 00:05:18.248 12:33:16 -- accel/accel.sh@21 -- # case "$var" in 00:05:18.248 12:33:16 -- accel/accel.sh@19 -- # IFS=: 00:05:18.248 12:33:16 -- accel/accel.sh@19 -- # read -r var val 00:05:18.248 12:33:16 -- accel/accel.sh@20 -- # val= 00:05:18.248 12:33:16 -- accel/accel.sh@21 -- # case "$var" in 00:05:18.248 12:33:16 -- accel/accel.sh@19 -- # IFS=: 00:05:18.248 12:33:16 -- accel/accel.sh@19 -- # read -r var val 00:05:18.248 12:33:16 -- accel/accel.sh@20 -- # val= 00:05:18.248 12:33:16 -- accel/accel.sh@21 -- # case "$var" in 00:05:18.248 12:33:16 -- accel/accel.sh@19 -- # IFS=: 
00:05:18.248 12:33:16 -- accel/accel.sh@19 -- # read -r var val 00:05:18.248 12:33:16 -- accel/accel.sh@20 -- # val= 00:05:18.248 12:33:16 -- accel/accel.sh@21 -- # case "$var" in 00:05:18.248 12:33:16 -- accel/accel.sh@19 -- # IFS=: 00:05:18.248 12:33:16 -- accel/accel.sh@19 -- # read -r var val 00:05:18.248 12:33:16 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:18.248 12:33:16 -- accel/accel.sh@27 -- # [[ -n fill ]] 00:05:18.248 12:33:16 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:18.248 00:05:18.248 real 0m1.466s 00:05:18.248 user 0m1.324s 00:05:18.248 sys 0m0.143s 00:05:18.248 12:33:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:18.248 12:33:16 -- common/autotest_common.sh@10 -- # set +x 00:05:18.248 ************************************ 00:05:18.248 END TEST accel_fill 00:05:18.248 ************************************ 00:05:18.248 12:33:16 -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:05:18.248 12:33:16 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:18.248 12:33:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:18.248 12:33:16 -- common/autotest_common.sh@10 -- # set +x 00:05:18.248 ************************************ 00:05:18.248 START TEST accel_copy_crc32c 00:05:18.248 ************************************ 00:05:18.248 12:33:16 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy_crc32c -y 00:05:18.248 12:33:16 -- accel/accel.sh@16 -- # local accel_opc 00:05:18.248 12:33:16 -- accel/accel.sh@17 -- # local accel_module 00:05:18.248 12:33:16 -- accel/accel.sh@19 -- # IFS=: 00:05:18.248 12:33:16 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:05:18.248 12:33:16 -- accel/accel.sh@19 -- # read -r var val 00:05:18.248 12:33:16 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:05:18.248 12:33:16 -- accel/accel.sh@12 -- # build_accel_config 00:05:18.248 12:33:16 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:18.248 12:33:16 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:18.248 12:33:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:18.248 12:33:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:18.248 12:33:16 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:18.248 12:33:16 -- accel/accel.sh@40 -- # local IFS=, 00:05:18.248 12:33:16 -- accel/accel.sh@41 -- # jq -r . 00:05:18.248 [2024-04-16 12:33:16.993368] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 
00:05:18.248 [2024-04-16 12:33:16.993428] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1064855 ] 00:05:18.248 EAL: No free 2048 kB hugepages reported on node 1 00:05:18.248 [2024-04-16 12:33:17.066416] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.248 [2024-04-16 12:33:17.183140] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.248 [2024-04-16 12:33:17.183842] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:05:18.248 12:33:17 -- accel/accel.sh@20 -- # val= 00:05:18.248 12:33:17 -- accel/accel.sh@21 -- # case "$var" in 00:05:18.248 12:33:17 -- accel/accel.sh@19 -- # IFS=: 00:05:18.248 12:33:17 -- accel/accel.sh@19 -- # read -r var val 00:05:18.248 12:33:17 -- accel/accel.sh@20 -- # val= 00:05:18.248 12:33:17 -- accel/accel.sh@21 -- # case "$var" in 00:05:18.248 12:33:17 -- accel/accel.sh@19 -- # IFS=: 00:05:18.248 12:33:17 -- accel/accel.sh@19 -- # read -r var val 00:05:18.248 12:33:17 -- accel/accel.sh@20 -- # val=0x1 00:05:18.248 12:33:17 -- accel/accel.sh@21 -- # case "$var" in 00:05:18.248 12:33:17 -- accel/accel.sh@19 -- # IFS=: 00:05:18.248 12:33:17 -- accel/accel.sh@19 -- # read -r var val 00:05:18.248 12:33:17 -- accel/accel.sh@20 -- # val= 00:05:18.248 12:33:17 -- accel/accel.sh@21 -- # case "$var" in 00:05:18.248 12:33:17 -- accel/accel.sh@19 -- # IFS=: 00:05:18.248 12:33:17 -- accel/accel.sh@19 -- # read -r var val 00:05:18.248 12:33:17 -- accel/accel.sh@20 -- # val= 00:05:18.248 12:33:17 -- accel/accel.sh@21 -- # case "$var" in 00:05:18.248 12:33:17 -- accel/accel.sh@19 -- # IFS=: 00:05:18.248 12:33:17 -- accel/accel.sh@19 -- # read -r var val 00:05:18.248 12:33:17 -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:18.248 12:33:17 -- accel/accel.sh@21 -- # case "$var" in 00:05:18.248 12:33:17 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:05:18.248 12:33:17 -- accel/accel.sh@19 -- # IFS=: 00:05:18.248 12:33:17 -- accel/accel.sh@19 -- # read -r var val 00:05:18.248 12:33:17 -- accel/accel.sh@20 -- # val=0 00:05:18.248 12:33:17 -- accel/accel.sh@21 -- # case "$var" in 00:05:18.248 12:33:17 -- accel/accel.sh@19 -- # IFS=: 00:05:18.248 12:33:17 -- accel/accel.sh@19 -- # read -r var val 00:05:18.248 12:33:17 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:18.248 12:33:17 -- accel/accel.sh@21 -- # case "$var" in 00:05:18.248 12:33:17 -- accel/accel.sh@19 -- # IFS=: 00:05:18.248 12:33:17 -- accel/accel.sh@19 -- # read -r var val 00:05:18.248 12:33:17 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:18.248 12:33:17 -- accel/accel.sh@21 -- # case "$var" in 00:05:18.248 12:33:17 -- accel/accel.sh@19 -- # IFS=: 00:05:18.248 12:33:17 -- accel/accel.sh@19 -- # read -r var val 00:05:18.248 12:33:17 -- accel/accel.sh@20 -- # val= 00:05:18.248 12:33:17 -- accel/accel.sh@21 -- # case "$var" in 00:05:18.248 12:33:17 -- accel/accel.sh@19 -- # IFS=: 00:05:18.248 12:33:17 -- accel/accel.sh@19 -- # read -r var val 00:05:18.248 12:33:17 -- accel/accel.sh@20 -- # val=software 00:05:18.248 12:33:17 -- accel/accel.sh@21 -- # case "$var" in 00:05:18.248 12:33:17 -- accel/accel.sh@22 -- # accel_module=software 00:05:18.248 12:33:17 -- accel/accel.sh@19 -- # IFS=: 00:05:18.248 12:33:17 -- accel/accel.sh@19 -- # read -r var val 00:05:18.248 12:33:17 -- accel/accel.sh@20 -- # val=32 00:05:18.248 12:33:17 -- accel/accel.sh@21 -- # case "$var" in 
00:05:18.248 12:33:17 -- accel/accel.sh@19 -- # IFS=: 00:05:18.248 12:33:17 -- accel/accel.sh@19 -- # read -r var val 00:05:18.248 12:33:17 -- accel/accel.sh@20 -- # val=32 00:05:18.248 12:33:17 -- accel/accel.sh@21 -- # case "$var" in 00:05:18.248 12:33:17 -- accel/accel.sh@19 -- # IFS=: 00:05:18.248 12:33:17 -- accel/accel.sh@19 -- # read -r var val 00:05:18.248 12:33:17 -- accel/accel.sh@20 -- # val=1 00:05:18.248 12:33:17 -- accel/accel.sh@21 -- # case "$var" in 00:05:18.248 12:33:17 -- accel/accel.sh@19 -- # IFS=: 00:05:18.248 12:33:17 -- accel/accel.sh@19 -- # read -r var val 00:05:18.248 12:33:17 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:18.248 12:33:17 -- accel/accel.sh@21 -- # case "$var" in 00:05:18.248 12:33:17 -- accel/accel.sh@19 -- # IFS=: 00:05:18.248 12:33:17 -- accel/accel.sh@19 -- # read -r var val 00:05:18.248 12:33:17 -- accel/accel.sh@20 -- # val=Yes 00:05:18.248 12:33:17 -- accel/accel.sh@21 -- # case "$var" in 00:05:18.248 12:33:17 -- accel/accel.sh@19 -- # IFS=: 00:05:18.248 12:33:17 -- accel/accel.sh@19 -- # read -r var val 00:05:18.248 12:33:17 -- accel/accel.sh@20 -- # val= 00:05:18.248 12:33:17 -- accel/accel.sh@21 -- # case "$var" in 00:05:18.248 12:33:17 -- accel/accel.sh@19 -- # IFS=: 00:05:18.248 12:33:17 -- accel/accel.sh@19 -- # read -r var val 00:05:18.248 12:33:17 -- accel/accel.sh@20 -- # val= 00:05:18.248 12:33:17 -- accel/accel.sh@21 -- # case "$var" in 00:05:18.248 12:33:17 -- accel/accel.sh@19 -- # IFS=: 00:05:18.248 12:33:17 -- accel/accel.sh@19 -- # read -r var val 00:05:19.621 12:33:18 -- accel/accel.sh@20 -- # val= 00:05:19.621 12:33:18 -- accel/accel.sh@21 -- # case "$var" in 00:05:19.621 12:33:18 -- accel/accel.sh@19 -- # IFS=: 00:05:19.621 12:33:18 -- accel/accel.sh@19 -- # read -r var val 00:05:19.621 12:33:18 -- accel/accel.sh@20 -- # val= 00:05:19.621 12:33:18 -- accel/accel.sh@21 -- # case "$var" in 00:05:19.621 12:33:18 -- accel/accel.sh@19 -- # IFS=: 00:05:19.621 12:33:18 -- accel/accel.sh@19 -- # read -r var val 00:05:19.621 12:33:18 -- accel/accel.sh@20 -- # val= 00:05:19.621 12:33:18 -- accel/accel.sh@21 -- # case "$var" in 00:05:19.621 12:33:18 -- accel/accel.sh@19 -- # IFS=: 00:05:19.621 12:33:18 -- accel/accel.sh@19 -- # read -r var val 00:05:19.621 12:33:18 -- accel/accel.sh@20 -- # val= 00:05:19.621 12:33:18 -- accel/accel.sh@21 -- # case "$var" in 00:05:19.621 12:33:18 -- accel/accel.sh@19 -- # IFS=: 00:05:19.621 12:33:18 -- accel/accel.sh@19 -- # read -r var val 00:05:19.621 12:33:18 -- accel/accel.sh@20 -- # val= 00:05:19.621 12:33:18 -- accel/accel.sh@21 -- # case "$var" in 00:05:19.621 12:33:18 -- accel/accel.sh@19 -- # IFS=: 00:05:19.621 12:33:18 -- accel/accel.sh@19 -- # read -r var val 00:05:19.621 12:33:18 -- accel/accel.sh@20 -- # val= 00:05:19.621 12:33:18 -- accel/accel.sh@21 -- # case "$var" in 00:05:19.622 12:33:18 -- accel/accel.sh@19 -- # IFS=: 00:05:19.622 12:33:18 -- accel/accel.sh@19 -- # read -r var val 00:05:19.622 12:33:18 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:19.622 12:33:18 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:19.622 12:33:18 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:19.622 00:05:19.622 real 0m1.484s 00:05:19.622 user 0m1.336s 00:05:19.622 sys 0m0.149s 00:05:19.622 12:33:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:19.622 12:33:18 -- common/autotest_common.sh@10 -- # set +x 00:05:19.622 ************************************ 00:05:19.622 END TEST accel_copy_crc32c 00:05:19.622 ************************************ 00:05:19.622 
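For context before the next test starts: accel_copy_crc32c_C2 repeats the copy_crc32c workload with "-C 2", the io vector size option from the usage text earlier; the "8192 bytes" buffer in its trace below (2 x 4096) reflects the two-buffer vector. Both invocations appear verbatim in this log, shown here without the harness's "-c /dev/fd/62" config argument:

  # default single-buffer io vector
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w copy_crc32c -y
  # two-buffer io vector per task
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w copy_crc32c -y -C 2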
12:33:18 -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:05:19.622 12:33:18 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:05:19.622 12:33:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:19.622 12:33:18 -- common/autotest_common.sh@10 -- # set +x 00:05:19.622 ************************************ 00:05:19.622 START TEST accel_copy_crc32c_C2 00:05:19.622 ************************************ 00:05:19.622 12:33:18 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:05:19.622 12:33:18 -- accel/accel.sh@16 -- # local accel_opc 00:05:19.622 12:33:18 -- accel/accel.sh@17 -- # local accel_module 00:05:19.622 12:33:18 -- accel/accel.sh@19 -- # IFS=: 00:05:19.622 12:33:18 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:05:19.622 12:33:18 -- accel/accel.sh@19 -- # read -r var val 00:05:19.622 12:33:18 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:05:19.622 12:33:18 -- accel/accel.sh@12 -- # build_accel_config 00:05:19.622 12:33:18 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:19.622 12:33:18 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:19.622 12:33:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:19.622 12:33:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:19.622 12:33:18 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:19.622 12:33:18 -- accel/accel.sh@40 -- # local IFS=, 00:05:19.622 12:33:18 -- accel/accel.sh@41 -- # jq -r . 00:05:19.622 [2024-04-16 12:33:18.604307] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 00:05:19.622 [2024-04-16 12:33:18.604371] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1065038 ] 00:05:19.622 EAL: No free 2048 kB hugepages reported on node 1 00:05:19.622 [2024-04-16 12:33:18.678249] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.880 [2024-04-16 12:33:18.794875] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.880 [2024-04-16 12:33:18.795591] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:05:19.880 12:33:18 -- accel/accel.sh@20 -- # val= 00:05:19.881 12:33:18 -- accel/accel.sh@21 -- # case "$var" in 00:05:19.881 12:33:18 -- accel/accel.sh@19 -- # IFS=: 00:05:19.881 12:33:18 -- accel/accel.sh@19 -- # read -r var val 00:05:19.881 12:33:18 -- accel/accel.sh@20 -- # val= 00:05:19.881 12:33:18 -- accel/accel.sh@21 -- # case "$var" in 00:05:19.881 12:33:18 -- accel/accel.sh@19 -- # IFS=: 00:05:19.881 12:33:18 -- accel/accel.sh@19 -- # read -r var val 00:05:19.881 12:33:18 -- accel/accel.sh@20 -- # val=0x1 00:05:19.881 12:33:18 -- accel/accel.sh@21 -- # case "$var" in 00:05:19.881 12:33:18 -- accel/accel.sh@19 -- # IFS=: 00:05:19.881 12:33:18 -- accel/accel.sh@19 -- # read -r var val 00:05:19.881 12:33:18 -- accel/accel.sh@20 -- # val= 00:05:19.881 12:33:18 -- accel/accel.sh@21 -- # case "$var" in 00:05:19.881 12:33:18 -- accel/accel.sh@19 -- # IFS=: 00:05:19.881 12:33:18 -- accel/accel.sh@19 -- # read -r var val 00:05:19.881 12:33:18 -- accel/accel.sh@20 -- # val= 00:05:19.881 12:33:18 -- accel/accel.sh@21 -- # case "$var" in 00:05:19.881 12:33:18 -- accel/accel.sh@19 -- # IFS=: 00:05:19.881 12:33:18 -- accel/accel.sh@19 -- # read -r var val 00:05:19.881 12:33:18 -- 
accel/accel.sh@20 -- # val=copy_crc32c 00:05:19.881 12:33:18 -- accel/accel.sh@21 -- # case "$var" in 00:05:19.881 12:33:18 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:05:19.881 12:33:18 -- accel/accel.sh@19 -- # IFS=: 00:05:19.881 12:33:18 -- accel/accel.sh@19 -- # read -r var val 00:05:19.881 12:33:18 -- accel/accel.sh@20 -- # val=0 00:05:19.881 12:33:18 -- accel/accel.sh@21 -- # case "$var" in 00:05:19.881 12:33:18 -- accel/accel.sh@19 -- # IFS=: 00:05:19.881 12:33:18 -- accel/accel.sh@19 -- # read -r var val 00:05:19.881 12:33:18 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:19.881 12:33:18 -- accel/accel.sh@21 -- # case "$var" in 00:05:19.881 12:33:18 -- accel/accel.sh@19 -- # IFS=: 00:05:19.881 12:33:18 -- accel/accel.sh@19 -- # read -r var val 00:05:19.881 12:33:18 -- accel/accel.sh@20 -- # val='8192 bytes' 00:05:19.881 12:33:18 -- accel/accel.sh@21 -- # case "$var" in 00:05:19.881 12:33:18 -- accel/accel.sh@19 -- # IFS=: 00:05:19.881 12:33:18 -- accel/accel.sh@19 -- # read -r var val 00:05:19.881 12:33:18 -- accel/accel.sh@20 -- # val= 00:05:19.881 12:33:18 -- accel/accel.sh@21 -- # case "$var" in 00:05:19.881 12:33:18 -- accel/accel.sh@19 -- # IFS=: 00:05:19.881 12:33:18 -- accel/accel.sh@19 -- # read -r var val 00:05:19.881 12:33:18 -- accel/accel.sh@20 -- # val=software 00:05:19.881 12:33:18 -- accel/accel.sh@21 -- # case "$var" in 00:05:19.881 12:33:18 -- accel/accel.sh@22 -- # accel_module=software 00:05:19.881 12:33:18 -- accel/accel.sh@19 -- # IFS=: 00:05:19.881 12:33:18 -- accel/accel.sh@19 -- # read -r var val 00:05:19.881 12:33:18 -- accel/accel.sh@20 -- # val=32 00:05:19.881 12:33:18 -- accel/accel.sh@21 -- # case "$var" in 00:05:19.881 12:33:18 -- accel/accel.sh@19 -- # IFS=: 00:05:19.881 12:33:18 -- accel/accel.sh@19 -- # read -r var val 00:05:19.881 12:33:18 -- accel/accel.sh@20 -- # val=32 00:05:19.881 12:33:18 -- accel/accel.sh@21 -- # case "$var" in 00:05:19.881 12:33:18 -- accel/accel.sh@19 -- # IFS=: 00:05:19.881 12:33:18 -- accel/accel.sh@19 -- # read -r var val 00:05:19.881 12:33:18 -- accel/accel.sh@20 -- # val=1 00:05:19.881 12:33:18 -- accel/accel.sh@21 -- # case "$var" in 00:05:19.881 12:33:18 -- accel/accel.sh@19 -- # IFS=: 00:05:19.881 12:33:18 -- accel/accel.sh@19 -- # read -r var val 00:05:19.881 12:33:18 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:19.881 12:33:18 -- accel/accel.sh@21 -- # case "$var" in 00:05:19.881 12:33:18 -- accel/accel.sh@19 -- # IFS=: 00:05:19.881 12:33:18 -- accel/accel.sh@19 -- # read -r var val 00:05:19.881 12:33:18 -- accel/accel.sh@20 -- # val=Yes 00:05:19.881 12:33:18 -- accel/accel.sh@21 -- # case "$var" in 00:05:19.881 12:33:18 -- accel/accel.sh@19 -- # IFS=: 00:05:19.881 12:33:18 -- accel/accel.sh@19 -- # read -r var val 00:05:19.881 12:33:18 -- accel/accel.sh@20 -- # val= 00:05:19.881 12:33:18 -- accel/accel.sh@21 -- # case "$var" in 00:05:19.881 12:33:18 -- accel/accel.sh@19 -- # IFS=: 00:05:19.881 12:33:18 -- accel/accel.sh@19 -- # read -r var val 00:05:19.881 12:33:18 -- accel/accel.sh@20 -- # val= 00:05:19.881 12:33:18 -- accel/accel.sh@21 -- # case "$var" in 00:05:19.881 12:33:18 -- accel/accel.sh@19 -- # IFS=: 00:05:19.881 12:33:18 -- accel/accel.sh@19 -- # read -r var val 00:05:21.254 12:33:20 -- accel/accel.sh@20 -- # val= 00:05:21.254 12:33:20 -- accel/accel.sh@21 -- # case "$var" in 00:05:21.254 12:33:20 -- accel/accel.sh@19 -- # IFS=: 00:05:21.254 12:33:20 -- accel/accel.sh@19 -- # read -r var val 00:05:21.254 12:33:20 -- accel/accel.sh@20 -- # val= 00:05:21.254 12:33:20 -- accel/accel.sh@21 -- # 
case "$var" in 00:05:21.254 12:33:20 -- accel/accel.sh@19 -- # IFS=: 00:05:21.254 12:33:20 -- accel/accel.sh@19 -- # read -r var val 00:05:21.254 12:33:20 -- accel/accel.sh@20 -- # val= 00:05:21.254 12:33:20 -- accel/accel.sh@21 -- # case "$var" in 00:05:21.254 12:33:20 -- accel/accel.sh@19 -- # IFS=: 00:05:21.254 12:33:20 -- accel/accel.sh@19 -- # read -r var val 00:05:21.254 12:33:20 -- accel/accel.sh@20 -- # val= 00:05:21.254 12:33:20 -- accel/accel.sh@21 -- # case "$var" in 00:05:21.254 12:33:20 -- accel/accel.sh@19 -- # IFS=: 00:05:21.254 12:33:20 -- accel/accel.sh@19 -- # read -r var val 00:05:21.254 12:33:20 -- accel/accel.sh@20 -- # val= 00:05:21.254 12:33:20 -- accel/accel.sh@21 -- # case "$var" in 00:05:21.254 12:33:20 -- accel/accel.sh@19 -- # IFS=: 00:05:21.254 12:33:20 -- accel/accel.sh@19 -- # read -r var val 00:05:21.254 12:33:20 -- accel/accel.sh@20 -- # val= 00:05:21.254 12:33:20 -- accel/accel.sh@21 -- # case "$var" in 00:05:21.254 12:33:20 -- accel/accel.sh@19 -- # IFS=: 00:05:21.254 12:33:20 -- accel/accel.sh@19 -- # read -r var val 00:05:21.254 12:33:20 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:21.254 12:33:20 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:21.254 12:33:20 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:21.254 00:05:21.254 real 0m1.487s 00:05:21.254 user 0m1.333s 00:05:21.254 sys 0m0.154s 00:05:21.254 12:33:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:21.254 12:33:20 -- common/autotest_common.sh@10 -- # set +x 00:05:21.254 ************************************ 00:05:21.254 END TEST accel_copy_crc32c_C2 00:05:21.254 ************************************ 00:05:21.254 12:33:20 -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:05:21.254 12:33:20 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:21.254 12:33:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:21.254 12:33:20 -- common/autotest_common.sh@10 -- # set +x 00:05:21.254 ************************************ 00:05:21.254 START TEST accel_dualcast 00:05:21.254 ************************************ 00:05:21.254 12:33:20 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dualcast -y 00:05:21.254 12:33:20 -- accel/accel.sh@16 -- # local accel_opc 00:05:21.254 12:33:20 -- accel/accel.sh@17 -- # local accel_module 00:05:21.254 12:33:20 -- accel/accel.sh@19 -- # IFS=: 00:05:21.254 12:33:20 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:05:21.254 12:33:20 -- accel/accel.sh@19 -- # read -r var val 00:05:21.254 12:33:20 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:05:21.254 12:33:20 -- accel/accel.sh@12 -- # build_accel_config 00:05:21.254 12:33:20 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:21.254 12:33:20 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:21.254 12:33:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:21.254 12:33:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:21.254 12:33:20 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:21.254 12:33:20 -- accel/accel.sh@40 -- # local IFS=, 00:05:21.254 12:33:20 -- accel/accel.sh@41 -- # jq -r . 00:05:21.254 [2024-04-16 12:33:20.209259] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 
00:05:21.254 [2024-04-16 12:33:20.209327] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1065210 ]
00:05:21.254 EAL: No free 2048 kB hugepages reported on node 1
00:05:21.254 [2024-04-16 12:33:20.282390] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:21.513 [2024-04-16 12:33:20.405538] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:21.513 [2024-04-16 12:33:20.406255] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started
[... xtrace config loop omitted; values set: val=0x1, val=dualcast, val='4096 bytes', val=software, val=32, val=32, val=1, val='1 seconds', val=Yes ...]
00:05:22.886 12:33:21 -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:22.886 12:33:21 -- accel/accel.sh@27 -- # [[ -n dualcast ]]
00:05:22.886 12:33:21 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:22.886 real 0m1.490s
00:05:22.886 user 0m1.336s
00:05:22.886 sys 0m0.153s
00:05:22.886 12:33:21 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:05:22.886 12:33:21 -- common/autotest_common.sh@10 -- # set +x
00:05:22.886 ************************************
00:05:22.886 END TEST accel_dualcast
00:05:22.886 ************************************
00:05:22.886 12:33:21 -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y
00:05:22.886 12:33:21 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']'
00:05:22.886 12:33:21 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:22.886 12:33:21 -- common/autotest_common.sh@10 -- # set +x
00:05:22.886 ************************************
00:05:22.886 START TEST accel_compare
00:05:22.886 ************************************
00:05:22.886 12:33:21 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w compare -y
00:05:22.886 12:33:21 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y
00:05:22.886 12:33:21 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y
00:05:22.886 12:33:21 -- accel/accel.sh@12 -- # build_accel_config
[... build_accel_config xtrace omitted ...]
00:05:22.886 [2024-04-16 12:33:21.819634] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization...
00:05:22.886 [2024-04-16 12:33:21.819701] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1065486 ]
00:05:22.886 EAL: No free 2048 kB hugepages reported on node 1
00:05:22.886 [2024-04-16 12:33:21.891417] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:23.145 [2024-04-16 12:33:22.008054] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:23.145 [2024-04-16 12:33:22.008758] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started
[... xtrace config loop omitted; values set: val=0x1, val=compare, val='4096 bytes', val=software, val=32, val=32, val=1, val='1 seconds', val=Yes ...]
00:05:24.517 12:33:23 -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:24.517 12:33:23 -- accel/accel.sh@27 -- # [[ -n compare ]]
00:05:24.517 12:33:23 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:24.517 real 0m1.485s
00:05:24.517 user 0m1.339s
00:05:24.517 sys 0m0.146s
00:05:24.517 12:33:23 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:05:24.517 12:33:23 -- common/autotest_common.sh@10 -- # set +x
00:05:24.517 ************************************
00:05:24.517 END TEST accel_compare
00:05:24.517 ************************************
00:05:24.517 12:33:23 -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y
00:05:24.517 12:33:23 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']'
00:05:24.517 12:33:23 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:24.517 12:33:23 -- common/autotest_common.sh@10 -- # set +x
00:05:24.517 ************************************
00:05:24.517 START TEST accel_xor
00:05:24.517 ************************************
00:05:24.517 12:33:23 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w xor -y
00:05:24.517 12:33:23 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y
00:05:24.517 12:33:23 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y
00:05:24.517 12:33:23 -- accel/accel.sh@12 -- # build_accel_config
[... build_accel_config xtrace omitted ...]
00:05:24.517 [2024-04-16 12:33:23.423787] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization...
00:05:24.517 [2024-04-16 12:33:23.423859] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1065651 ]
00:05:24.517 EAL: No free 2048 kB hugepages reported on node 1
00:05:24.517 [2024-04-16 12:33:23.495778] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:24.775 [2024-04-16 12:33:23.613176] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:24.775 [2024-04-16 12:33:23.613876] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started
[... xtrace config loop omitted; values set: val=0x1, val=xor, val=2, val='4096 bytes', val=software, val=32, val=32, val=1, val='1 seconds', val=Yes ...]
00:05:26.151 12:33:24 -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:26.151 12:33:24 -- accel/accel.sh@27 -- # [[ -n xor ]]
00:05:26.151 12:33:24 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:26.151 real 0m1.479s
00:05:26.151 user 0m1.335s
00:05:26.151 sys 0m0.145s
00:05:26.151 12:33:24 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:05:26.151 12:33:24 -- common/autotest_common.sh@10 -- # set +x
00:05:26.151 ************************************
00:05:26.151 END TEST accel_xor
00:05:26.151 ************************************
00:05:26.151 12:33:24 -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3
00:05:26.151 12:33:24 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']'
00:05:26.151 12:33:24 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:26.151 12:33:24 -- common/autotest_common.sh@10 -- # set +x
00:05:26.151 ************************************
00:05:26.151 START TEST accel_xor
00:05:26.151 ************************************
00:05:26.151 12:33:25 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w xor -y -x 3
00:05:26.151 12:33:25 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3
00:05:26.151 12:33:25 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3
00:05:26.151 12:33:25 -- accel/accel.sh@12 -- # build_accel_config
[... build_accel_config xtrace omitted ...]
00:05:26.151 [2024-04-16 12:33:25.025966] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization...
00:05:26.151 [2024-04-16 12:33:25.026030] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1065935 ]
00:05:26.151 EAL: No free 2048 kB hugepages reported on node 1
00:05:26.151 [2024-04-16 12:33:25.100529] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:26.409 [2024-04-16 12:33:25.220770] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:26.409 [2024-04-16 12:33:25.221467] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started
[... xtrace config loop omitted; values set: val=0x1, val=xor, val=3, val='4096 bytes', val=software, val=32, val=32, val=1, val='1 seconds', val=Yes ...]
00:05:27.780 12:33:26 -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:27.780 12:33:26 -- accel/accel.sh@27 -- # [[ -n xor ]]
00:05:27.780 12:33:26 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:27.780 real 0m1.500s
00:05:27.780 user 0m1.336s
00:05:27.780 sys 0m0.166s
00:05:27.780 12:33:26 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:05:27.780 12:33:26 -- common/autotest_common.sh@10 -- # set +x
00:05:27.780 ************************************
00:05:27.780 END TEST accel_xor
00:05:27.780 ************************************
00:05:27.780 12:33:26 -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify
00:05:27.780 12:33:26 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']'
00:05:27.780 12:33:26 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:27.780 12:33:26 -- common/autotest_common.sh@10 -- # set +x
00:05:27.780 ************************************
00:05:27.780 START TEST accel_dif_verify
00:05:27.780 ************************************
00:05:27.780 12:33:26 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_verify
00:05:27.780 12:33:26 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify
00:05:27.780 12:33:26 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify
00:05:27.780 12:33:26 -- accel/accel.sh@12 -- # build_accel_config
[... build_accel_config xtrace omitted ...]
00:05:27.780 [2024-04-16 12:33:26.651777] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization...
00:05:27.780 [2024-04-16 12:33:26.651850] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1066098 ]
00:05:27.780 EAL: No free 2048 kB hugepages reported on node 1
00:05:27.780 [2024-04-16 12:33:26.724541] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:27.780 [2024-04-16 12:33:26.845187] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:27.780 [2024-04-16 12:33:26.845909] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started
[... xtrace config loop omitted; values set: val=0x1, val=dif_verify, val='4096 bytes', val='4096 bytes', val='512 bytes', val='8 bytes', val=software, val=32, val=32, val=1, val='1 seconds', val=No ...]
00:05:29.452 12:33:28 -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:29.452 12:33:28 -- accel/accel.sh@27 -- # [[ -n dif_verify ]]
00:05:29.452 12:33:28 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:29.452 real 0m1.498s
00:05:29.452 user 0m1.349s
00:05:29.452 sys 0m0.152s
00:05:29.452 12:33:28 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:05:29.452 12:33:28 -- common/autotest_common.sh@10 -- # set +x
00:05:29.452 ************************************
00:05:29.452 END TEST accel_dif_verify
00:05:29.452 ************************************
00:05:29.452 12:33:28 -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate
00:05:29.452 12:33:28 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']'
00:05:29.452 12:33:28 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:29.452 12:33:28 -- common/autotest_common.sh@10 -- # set +x
00:05:29.452 ************************************
00:05:29.452 START TEST accel_dif_generate
00:05:29.452 ************************************
00:05:29.452 12:33:28 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_generate
00:05:29.452 12:33:28 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate
00:05:29.452 12:33:28 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate
00:05:29.452 12:33:28 -- accel/accel.sh@12 -- # build_accel_config
[... build_accel_config xtrace omitted ...]
00:05:29.452 [2024-04-16 12:33:28.268792] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization...
00:05:29.452 [2024-04-16 12:33:28.268857] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1066370 ]
00:05:29.452 EAL: No free 2048 kB hugepages reported on node 1
00:05:29.452 [2024-04-16 12:33:28.344661] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:29.452 [2024-04-16 12:33:28.473394] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:29.452 [2024-04-16 12:33:28.474086] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started
[... xtrace config loop omitted; values set: val=0x1, val=dif_generate, val='4096 bytes', val='4096 bytes', val='512 bytes', val='8 bytes', val=software, val=32, val=32, val=1, val='1 seconds', val=No ...]
00:05:31.083 12:33:29 -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:31.083 12:33:29 -- accel/accel.sh@27 -- # [[ -n dif_generate ]]
00:05:31.083 12:33:29 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:31.083 real 0m1.496s
00:05:31.083 user 0m1.344s
00:05:31.083 sys 0m0.156s
00:05:31.083 12:33:29 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:05:31.083 12:33:29 -- common/autotest_common.sh@10 -- # set +x
00:05:31.083 ************************************
00:05:31.083 END TEST accel_dif_generate
00:05:31.083 ************************************
00:05:31.083 12:33:29 -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy
00:05:31.083 12:33:29 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']'
00:05:31.083 12:33:29 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:31.083 12:33:29 -- common/autotest_common.sh@10 -- # set +x
00:05:31.083 ************************************
00:05:31.083 START TEST accel_dif_generate_copy
00:05:31.083 ************************************
00:05:31.083 12:33:29 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_generate_copy
00:05:31.083 12:33:29 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy
00:05:31.083 12:33:29 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy
00:05:31.083 12:33:29 -- accel/accel.sh@12 -- # build_accel_config
[... build_accel_config xtrace omitted ...]
00:05:31.083 [2024-04-16 12:33:29.889530] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization...
00:05:31.083 [2024-04-16 12:33:29.889606] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1066549 ]
00:05:31.083 EAL: No free 2048 kB hugepages reported on node 1
00:05:31.083 [2024-04-16 12:33:29.961521] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:31.083 [2024-04-16 12:33:30.087207] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:31.083 [2024-04-16 12:33:30.087897] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started
[... xtrace config loop omitted; values set: val=0x1, val=dif_generate_copy, val='4096 bytes', val='4096 bytes', val=software, val=32, val=32, val=1, val='1 seconds', val=No ...]
00:05:32.720 12:33:31 -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:32.720 12:33:31 -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]]
00:05:32.720 12:33:31 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:32.720 real 0m1.502s
00:05:32.720 user 0m1.347s
00:05:32.720 sys 0m0.155s
00:05:32.720 12:33:31 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:05:32.720 12:33:31 -- common/autotest_common.sh@10 -- # set +x
00:05:32.720 ************************************
00:05:32.720 END TEST accel_dif_generate_copy
00:05:32.720 ************************************
00:05:32.720 12:33:31 -- accel/accel.sh@115 -- # [[ y == y ]]
00:05:32.720 12:33:31 -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
12:33:31 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:05:32.720 12:33:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:32.720 12:33:31 -- common/autotest_common.sh@10 -- # set +x 00:05:32.720 ************************************ 00:05:32.720 START TEST accel_comp 00:05:32.720 ************************************ 00:05:32.720 12:33:31 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:32.720 12:33:31 -- accel/accel.sh@16 -- # local accel_opc 00:05:32.720 12:33:31 -- accel/accel.sh@17 -- # local accel_module 00:05:32.720 12:33:31 -- accel/accel.sh@19 -- # IFS=: 00:05:32.720 12:33:31 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:32.720 12:33:31 -- accel/accel.sh@19 -- # read -r var val 00:05:32.720 12:33:31 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:32.720 12:33:31 -- accel/accel.sh@12 -- # build_accel_config 00:05:32.720 12:33:31 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:32.720 12:33:31 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:32.720 12:33:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:32.720 12:33:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:32.720 12:33:31 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:32.721 12:33:31 -- accel/accel.sh@40 -- # local IFS=, 00:05:32.721 12:33:31 -- accel/accel.sh@41 -- # jq -r . 00:05:32.721 [2024-04-16 12:33:31.514745] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 00:05:32.721 [2024-04-16 12:33:31.514810] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1066712 ] 00:05:32.721 EAL: No free 2048 kB hugepages reported on node 1 00:05:32.721 [2024-04-16 12:33:31.588408] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.721 [2024-04-16 12:33:31.707209] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.721 [2024-04-16 12:33:31.707902] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:05:32.721 12:33:31 -- accel/accel.sh@20 -- # val= 00:05:32.721 12:33:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:32.721 12:33:31 -- accel/accel.sh@19 -- # IFS=: 00:05:32.721 12:33:31 -- accel/accel.sh@19 -- # read -r var val 00:05:32.721 12:33:31 -- accel/accel.sh@20 -- # val= 00:05:32.721 12:33:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:32.721 12:33:31 -- accel/accel.sh@19 -- # IFS=: 00:05:32.721 12:33:31 -- accel/accel.sh@19 -- # read -r var val 00:05:32.721 12:33:31 -- accel/accel.sh@20 -- # val= 00:05:32.721 12:33:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:32.721 12:33:31 -- accel/accel.sh@19 -- # IFS=: 00:05:32.721 12:33:31 -- accel/accel.sh@19 -- # read -r var val 00:05:32.721 12:33:31 -- accel/accel.sh@20 -- # val=0x1 00:05:32.721 12:33:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:32.721 12:33:31 -- accel/accel.sh@19 -- # IFS=: 00:05:32.721 12:33:31 -- accel/accel.sh@19 -- # read -r var val 00:05:32.721 12:33:31 -- accel/accel.sh@20 -- # val= 00:05:32.721 12:33:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:32.721 12:33:31 -- accel/accel.sh@19 -- # IFS=: 00:05:32.721 12:33:31 -- accel/accel.sh@19 -- # read 
-r var val 00:05:32.721 12:33:31 -- accel/accel.sh@20 -- # val= 00:05:32.721 12:33:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:32.721 12:33:31 -- accel/accel.sh@19 -- # IFS=: 00:05:32.721 12:33:31 -- accel/accel.sh@19 -- # read -r var val 00:05:32.721 12:33:31 -- accel/accel.sh@20 -- # val=compress 00:05:32.721 12:33:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:32.721 12:33:31 -- accel/accel.sh@23 -- # accel_opc=compress 00:05:32.721 12:33:31 -- accel/accel.sh@19 -- # IFS=: 00:05:32.721 12:33:31 -- accel/accel.sh@19 -- # read -r var val 00:05:32.721 12:33:31 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:32.721 12:33:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:32.721 12:33:31 -- accel/accel.sh@19 -- # IFS=: 00:05:32.721 12:33:31 -- accel/accel.sh@19 -- # read -r var val 00:05:32.721 12:33:31 -- accel/accel.sh@20 -- # val= 00:05:32.721 12:33:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:32.721 12:33:31 -- accel/accel.sh@19 -- # IFS=: 00:05:32.721 12:33:31 -- accel/accel.sh@19 -- # read -r var val 00:05:32.721 12:33:31 -- accel/accel.sh@20 -- # val=software 00:05:32.721 12:33:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:32.721 12:33:31 -- accel/accel.sh@22 -- # accel_module=software 00:05:32.721 12:33:31 -- accel/accel.sh@19 -- # IFS=: 00:05:32.721 12:33:31 -- accel/accel.sh@19 -- # read -r var val 00:05:32.721 12:33:31 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:32.721 12:33:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:32.721 12:33:31 -- accel/accel.sh@19 -- # IFS=: 00:05:32.721 12:33:31 -- accel/accel.sh@19 -- # read -r var val 00:05:32.721 12:33:31 -- accel/accel.sh@20 -- # val=32 00:05:32.721 12:33:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:32.721 12:33:31 -- accel/accel.sh@19 -- # IFS=: 00:05:32.721 12:33:31 -- accel/accel.sh@19 -- # read -r var val 00:05:32.721 12:33:31 -- accel/accel.sh@20 -- # val=32 00:05:32.721 12:33:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:32.721 12:33:31 -- accel/accel.sh@19 -- # IFS=: 00:05:32.721 12:33:31 -- accel/accel.sh@19 -- # read -r var val 00:05:32.721 12:33:31 -- accel/accel.sh@20 -- # val=1 00:05:32.721 12:33:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:32.721 12:33:31 -- accel/accel.sh@19 -- # IFS=: 00:05:32.721 12:33:31 -- accel/accel.sh@19 -- # read -r var val 00:05:32.721 12:33:31 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:32.721 12:33:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:32.721 12:33:31 -- accel/accel.sh@19 -- # IFS=: 00:05:32.721 12:33:31 -- accel/accel.sh@19 -- # read -r var val 00:05:32.721 12:33:31 -- accel/accel.sh@20 -- # val=No 00:05:32.721 12:33:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:32.721 12:33:31 -- accel/accel.sh@19 -- # IFS=: 00:05:32.721 12:33:31 -- accel/accel.sh@19 -- # read -r var val 00:05:32.721 12:33:31 -- accel/accel.sh@20 -- # val= 00:05:32.721 12:33:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:32.721 12:33:31 -- accel/accel.sh@19 -- # IFS=: 00:05:32.721 12:33:31 -- accel/accel.sh@19 -- # read -r var val 00:05:32.721 12:33:31 -- accel/accel.sh@20 -- # val= 00:05:32.721 12:33:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:32.721 12:33:31 -- accel/accel.sh@19 -- # IFS=: 00:05:32.721 12:33:31 -- accel/accel.sh@19 -- # read -r var val 00:05:34.096 12:33:32 -- accel/accel.sh@20 -- # val= 00:05:34.096 12:33:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:34.096 12:33:32 -- accel/accel.sh@19 -- # IFS=: 00:05:34.096 12:33:32 -- accel/accel.sh@19 -- # read -r var val 00:05:34.096 
12:33:32 -- accel/accel.sh@20 -- # val= 00:05:34.096 12:33:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:34.096 12:33:32 -- accel/accel.sh@19 -- # IFS=: 00:05:34.096 12:33:32 -- accel/accel.sh@19 -- # read -r var val 00:05:34.096 12:33:32 -- accel/accel.sh@20 -- # val= 00:05:34.096 12:33:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:34.097 12:33:32 -- accel/accel.sh@19 -- # IFS=: 00:05:34.097 12:33:32 -- accel/accel.sh@19 -- # read -r var val 00:05:34.097 12:33:32 -- accel/accel.sh@20 -- # val= 00:05:34.097 12:33:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:34.097 12:33:32 -- accel/accel.sh@19 -- # IFS=: 00:05:34.097 12:33:32 -- accel/accel.sh@19 -- # read -r var val 00:05:34.097 12:33:32 -- accel/accel.sh@20 -- # val= 00:05:34.097 12:33:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:34.097 12:33:32 -- accel/accel.sh@19 -- # IFS=: 00:05:34.097 12:33:32 -- accel/accel.sh@19 -- # read -r var val 00:05:34.097 12:33:32 -- accel/accel.sh@20 -- # val= 00:05:34.097 12:33:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:34.097 12:33:32 -- accel/accel.sh@19 -- # IFS=: 00:05:34.097 12:33:32 -- accel/accel.sh@19 -- # read -r var val 00:05:34.097 12:33:32 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:34.097 12:33:32 -- accel/accel.sh@27 -- # [[ -n compress ]] 00:05:34.097 12:33:32 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:34.097 00:05:34.097 real 0m1.488s 00:05:34.097 user 0m1.332s 00:05:34.097 sys 0m0.157s 00:05:34.097 12:33:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:34.097 12:33:32 -- common/autotest_common.sh@10 -- # set +x 00:05:34.097 ************************************ 00:05:34.097 END TEST accel_comp 00:05:34.097 ************************************ 00:05:34.097 12:33:33 -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:34.097 12:33:33 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:05:34.097 12:33:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:34.097 12:33:33 -- common/autotest_common.sh@10 -- # set +x 00:05:34.097 ************************************ 00:05:34.097 START TEST accel_decomp 00:05:34.097 ************************************ 00:05:34.097 12:33:33 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:34.097 12:33:33 -- accel/accel.sh@16 -- # local accel_opc 00:05:34.097 12:33:33 -- accel/accel.sh@17 -- # local accel_module 00:05:34.097 12:33:33 -- accel/accel.sh@19 -- # IFS=: 00:05:34.097 12:33:33 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:34.097 12:33:33 -- accel/accel.sh@19 -- # read -r var val 00:05:34.097 12:33:33 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:34.097 12:33:33 -- accel/accel.sh@12 -- # build_accel_config 00:05:34.097 12:33:33 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:34.097 12:33:33 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:34.097 12:33:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:34.097 12:33:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:34.097 12:33:33 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:34.097 12:33:33 -- accel/accel.sh@40 -- # local IFS=, 00:05:34.097 12:33:33 -- accel/accel.sh@41 -- # jq -r . 
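Every test in this block is driven the same way: run_test wraps accel_test, which launches the accel_perf example binary with a workload flag and feeds it the JSON produced by build_accel_config over /dev/fd/62. A minimal sketch of the equivalent manual invocation, with the flags copied verbatim from the accel_comp command line above (the JSON config is elided, and SPDK_DIR is simply this job's workspace path):

  # Sketch: rerun the software compress workload outside the harness.
  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$SPDK_DIR/build/examples/accel_perf" \
      -t 1 \                          # run for '1 seconds', as traced
      -w compress \                   # accel_opc=compress
      -l "$SPDK_DIR/test/accel/bib"   # input file, submitted in 4096-byte tasks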
00:05:34.097 [2024-04-16 12:33:33.119672] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 00:05:34.097 [2024-04-16 12:33:33.119737] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1066997 ] 00:05:34.097 EAL: No free 2048 kB hugepages reported on node 1 00:05:34.355 [2024-04-16 12:33:33.191377] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.355 [2024-04-16 12:33:33.312216] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.355 [2024-04-16 12:33:33.312961] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:05:34.355 12:33:33 -- accel/accel.sh@20 -- # val= 00:05:34.355 12:33:33 -- accel/accel.sh@21 -- # case "$var" in 00:05:34.355 12:33:33 -- accel/accel.sh@19 -- # IFS=: 00:05:34.355 12:33:33 -- accel/accel.sh@19 -- # read -r var val 00:05:34.355 12:33:33 -- accel/accel.sh@20 -- # val= 00:05:34.355 12:33:33 -- accel/accel.sh@21 -- # case "$var" in 00:05:34.355 12:33:33 -- accel/accel.sh@19 -- # IFS=: 00:05:34.355 12:33:33 -- accel/accel.sh@19 -- # read -r var val 00:05:34.355 12:33:33 -- accel/accel.sh@20 -- # val= 00:05:34.355 12:33:33 -- accel/accel.sh@21 -- # case "$var" in 00:05:34.355 12:33:33 -- accel/accel.sh@19 -- # IFS=: 00:05:34.355 12:33:33 -- accel/accel.sh@19 -- # read -r var val 00:05:34.355 12:33:33 -- accel/accel.sh@20 -- # val=0x1 00:05:34.355 12:33:33 -- accel/accel.sh@21 -- # case "$var" in 00:05:34.355 12:33:33 -- accel/accel.sh@19 -- # IFS=: 00:05:34.355 12:33:33 -- accel/accel.sh@19 -- # read -r var val 00:05:34.355 12:33:33 -- accel/accel.sh@20 -- # val= 00:05:34.355 12:33:33 -- accel/accel.sh@21 -- # case "$var" in 00:05:34.355 12:33:33 -- accel/accel.sh@19 -- # IFS=: 00:05:34.355 12:33:33 -- accel/accel.sh@19 -- # read -r var val 00:05:34.355 12:33:33 -- accel/accel.sh@20 -- # val= 00:05:34.355 12:33:33 -- accel/accel.sh@21 -- # case "$var" in 00:05:34.355 12:33:33 -- accel/accel.sh@19 -- # IFS=: 00:05:34.355 12:33:33 -- accel/accel.sh@19 -- # read -r var val 00:05:34.355 12:33:33 -- accel/accel.sh@20 -- # val=decompress 00:05:34.355 12:33:33 -- accel/accel.sh@21 -- # case "$var" in 00:05:34.355 12:33:33 -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:34.355 12:33:33 -- accel/accel.sh@19 -- # IFS=: 00:05:34.355 12:33:33 -- accel/accel.sh@19 -- # read -r var val 00:05:34.355 12:33:33 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:34.355 12:33:33 -- accel/accel.sh@21 -- # case "$var" in 00:05:34.355 12:33:33 -- accel/accel.sh@19 -- # IFS=: 00:05:34.355 12:33:33 -- accel/accel.sh@19 -- # read -r var val 00:05:34.355 12:33:33 -- accel/accel.sh@20 -- # val= 00:05:34.355 12:33:33 -- accel/accel.sh@21 -- # case "$var" in 00:05:34.355 12:33:33 -- accel/accel.sh@19 -- # IFS=: 00:05:34.355 12:33:33 -- accel/accel.sh@19 -- # read -r var val 00:05:34.355 12:33:33 -- accel/accel.sh@20 -- # val=software 00:05:34.355 12:33:33 -- accel/accel.sh@21 -- # case "$var" in 00:05:34.355 12:33:33 -- accel/accel.sh@22 -- # accel_module=software 00:05:34.355 12:33:33 -- accel/accel.sh@19 -- # IFS=: 00:05:34.355 12:33:33 -- accel/accel.sh@19 -- # read -r var val 00:05:34.355 12:33:33 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:34.355 12:33:33 -- accel/accel.sh@21 -- # case "$var" in 00:05:34.355 12:33:33 -- accel/accel.sh@19 -- # IFS=: 00:05:34.355 12:33:33 -- 
accel/accel.sh@19 -- # read -r var val 00:05:34.355 12:33:33 -- accel/accel.sh@20 -- # val=32 00:05:34.355 12:33:33 -- accel/accel.sh@21 -- # case "$var" in 00:05:34.355 12:33:33 -- accel/accel.sh@19 -- # IFS=: 00:05:34.355 12:33:33 -- accel/accel.sh@19 -- # read -r var val 00:05:34.355 12:33:33 -- accel/accel.sh@20 -- # val=32 00:05:34.355 12:33:33 -- accel/accel.sh@21 -- # case "$var" in 00:05:34.355 12:33:33 -- accel/accel.sh@19 -- # IFS=: 00:05:34.355 12:33:33 -- accel/accel.sh@19 -- # read -r var val 00:05:34.355 12:33:33 -- accel/accel.sh@20 -- # val=1 00:05:34.355 12:33:33 -- accel/accel.sh@21 -- # case "$var" in 00:05:34.355 12:33:33 -- accel/accel.sh@19 -- # IFS=: 00:05:34.355 12:33:33 -- accel/accel.sh@19 -- # read -r var val 00:05:34.355 12:33:33 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:34.355 12:33:33 -- accel/accel.sh@21 -- # case "$var" in 00:05:34.355 12:33:33 -- accel/accel.sh@19 -- # IFS=: 00:05:34.355 12:33:33 -- accel/accel.sh@19 -- # read -r var val 00:05:34.355 12:33:33 -- accel/accel.sh@20 -- # val=Yes 00:05:34.355 12:33:33 -- accel/accel.sh@21 -- # case "$var" in 00:05:34.355 12:33:33 -- accel/accel.sh@19 -- # IFS=: 00:05:34.355 12:33:33 -- accel/accel.sh@19 -- # read -r var val 00:05:34.355 12:33:33 -- accel/accel.sh@20 -- # val= 00:05:34.355 12:33:33 -- accel/accel.sh@21 -- # case "$var" in 00:05:34.355 12:33:33 -- accel/accel.sh@19 -- # IFS=: 00:05:34.355 12:33:33 -- accel/accel.sh@19 -- # read -r var val 00:05:34.355 12:33:33 -- accel/accel.sh@20 -- # val= 00:05:34.355 12:33:33 -- accel/accel.sh@21 -- # case "$var" in 00:05:34.355 12:33:33 -- accel/accel.sh@19 -- # IFS=: 00:05:34.355 12:33:33 -- accel/accel.sh@19 -- # read -r var val 00:05:35.728 12:33:34 -- accel/accel.sh@20 -- # val= 00:05:35.728 12:33:34 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.728 12:33:34 -- accel/accel.sh@19 -- # IFS=: 00:05:35.728 12:33:34 -- accel/accel.sh@19 -- # read -r var val 00:05:35.728 12:33:34 -- accel/accel.sh@20 -- # val= 00:05:35.728 12:33:34 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.728 12:33:34 -- accel/accel.sh@19 -- # IFS=: 00:05:35.728 12:33:34 -- accel/accel.sh@19 -- # read -r var val 00:05:35.728 12:33:34 -- accel/accel.sh@20 -- # val= 00:05:35.728 12:33:34 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.728 12:33:34 -- accel/accel.sh@19 -- # IFS=: 00:05:35.728 12:33:34 -- accel/accel.sh@19 -- # read -r var val 00:05:35.728 12:33:34 -- accel/accel.sh@20 -- # val= 00:05:35.728 12:33:34 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.728 12:33:34 -- accel/accel.sh@19 -- # IFS=: 00:05:35.728 12:33:34 -- accel/accel.sh@19 -- # read -r var val 00:05:35.728 12:33:34 -- accel/accel.sh@20 -- # val= 00:05:35.728 12:33:34 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.728 12:33:34 -- accel/accel.sh@19 -- # IFS=: 00:05:35.728 12:33:34 -- accel/accel.sh@19 -- # read -r var val 00:05:35.728 12:33:34 -- accel/accel.sh@20 -- # val= 00:05:35.728 12:33:34 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.728 12:33:34 -- accel/accel.sh@19 -- # IFS=: 00:05:35.728 12:33:34 -- accel/accel.sh@19 -- # read -r var val 00:05:35.728 12:33:34 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:35.728 12:33:34 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:35.728 12:33:34 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:35.728 00:05:35.728 real 0m1.499s 00:05:35.728 user 0m1.350s 00:05:35.728 sys 0m0.152s 00:05:35.728 12:33:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:35.728 12:33:34 -- common/autotest_common.sh@10 -- # set +x 
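The three accel.sh@27 checks that close each run are the actual pass/fail assertion: a module and an opcode must have been recorded, and the module must be the software fallback, since build_accel_config selected no hardware engine. A sketch of that pattern, with variable names assumed (the xtrace shows them already expanded to their values):

  # The accel.sh@27 assertions, reconstructed with assumed variable names.
  accel_module=software            # what the trace shows after expansion
  accel_opc=decompress
  [[ -n $accel_module ]]           # some module handled the workload
  [[ -n $accel_opc ]]              # an opcode was parsed from the command line
  [[ $accel_module == software ]]  # and it was the software path, as expected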
00:05:35.728 ************************************ 00:05:35.728 END TEST accel_decomp 00:05:35.728 ************************************ 00:05:35.728 12:33:34 -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:35.728 12:33:34 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:05:35.728 12:33:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:35.728 12:33:34 -- common/autotest_common.sh@10 -- # set +x 00:05:35.728 ************************************ 00:05:35.728 START TEST accel_decmop_full 00:05:35.728 ************************************ 00:05:35.728 12:33:34 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:35.728 12:33:34 -- accel/accel.sh@16 -- # local accel_opc 00:05:35.728 12:33:34 -- accel/accel.sh@17 -- # local accel_module 00:05:35.728 12:33:34 -- accel/accel.sh@19 -- # IFS=: 00:05:35.728 12:33:34 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:35.728 12:33:34 -- accel/accel.sh@19 -- # read -r var val 00:05:35.728 12:33:34 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:35.728 12:33:34 -- accel/accel.sh@12 -- # build_accel_config 00:05:35.728 12:33:34 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:35.728 12:33:34 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:35.728 12:33:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:35.728 12:33:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:35.728 12:33:34 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:35.728 12:33:34 -- accel/accel.sh@40 -- # local IFS=, 00:05:35.728 12:33:34 -- accel/accel.sh@41 -- # jq -r . 00:05:35.729 [2024-04-16 12:33:34.747864] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 
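accel_decmop_full (the spelling is the harness's own) differs from the plain decompress test only by -o 0: instead of the default '4096 bytes' transfer size, the trace below records val='111250 bytes', so the whole bib file is submitted as a single task. A sketch of the delta, flags as in the two command lines; reading -o 0 as "use the input size" is an assumption based on that traced value:

  # Default chunking vs. one full-file decompress task (assumed paths).
  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  BIB=$SPDK_DIR/test/accel/bib
  "$SPDK_DIR/build/examples/accel_perf" -t 1 -w decompress -y -l "$BIB"       # '4096 bytes' tasks
  "$SPDK_DIR/build/examples/accel_perf" -t 1 -w decompress -y -l "$BIB" -o 0  # one '111250 bytes' task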
00:05:35.729 [2024-04-16 12:33:34.747929] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1067161 ] 00:05:35.729 EAL: No free 2048 kB hugepages reported on node 1 00:05:35.987 [2024-04-16 12:33:34.820632] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.987 [2024-04-16 12:33:34.941329] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.987 [2024-04-16 12:33:34.942054] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:05:35.987 12:33:35 -- accel/accel.sh@20 -- # val= 00:05:35.987 12:33:35 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.987 12:33:35 -- accel/accel.sh@19 -- # IFS=: 00:05:35.987 12:33:35 -- accel/accel.sh@19 -- # read -r var val 00:05:35.987 12:33:35 -- accel/accel.sh@20 -- # val= 00:05:35.987 12:33:35 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.987 12:33:35 -- accel/accel.sh@19 -- # IFS=: 00:05:35.987 12:33:35 -- accel/accel.sh@19 -- # read -r var val 00:05:35.987 12:33:35 -- accel/accel.sh@20 -- # val= 00:05:35.987 12:33:35 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.987 12:33:35 -- accel/accel.sh@19 -- # IFS=: 00:05:35.987 12:33:35 -- accel/accel.sh@19 -- # read -r var val 00:05:35.987 12:33:35 -- accel/accel.sh@20 -- # val=0x1 00:05:35.987 12:33:35 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.987 12:33:35 -- accel/accel.sh@19 -- # IFS=: 00:05:35.987 12:33:35 -- accel/accel.sh@19 -- # read -r var val 00:05:35.987 12:33:35 -- accel/accel.sh@20 -- # val= 00:05:35.987 12:33:35 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.987 12:33:35 -- accel/accel.sh@19 -- # IFS=: 00:05:35.987 12:33:35 -- accel/accel.sh@19 -- # read -r var val 00:05:35.987 12:33:35 -- accel/accel.sh@20 -- # val= 00:05:35.987 12:33:35 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.987 12:33:35 -- accel/accel.sh@19 -- # IFS=: 00:05:35.987 12:33:35 -- accel/accel.sh@19 -- # read -r var val 00:05:35.987 12:33:35 -- accel/accel.sh@20 -- # val=decompress 00:05:35.987 12:33:35 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.987 12:33:35 -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:35.987 12:33:35 -- accel/accel.sh@19 -- # IFS=: 00:05:35.987 12:33:35 -- accel/accel.sh@19 -- # read -r var val 00:05:35.987 12:33:35 -- accel/accel.sh@20 -- # val='111250 bytes' 00:05:35.987 12:33:35 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.987 12:33:35 -- accel/accel.sh@19 -- # IFS=: 00:05:35.987 12:33:35 -- accel/accel.sh@19 -- # read -r var val 00:05:35.987 12:33:35 -- accel/accel.sh@20 -- # val= 00:05:35.987 12:33:35 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.987 12:33:35 -- accel/accel.sh@19 -- # IFS=: 00:05:35.987 12:33:35 -- accel/accel.sh@19 -- # read -r var val 00:05:35.987 12:33:35 -- accel/accel.sh@20 -- # val=software 00:05:35.987 12:33:35 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.987 12:33:35 -- accel/accel.sh@22 -- # accel_module=software 00:05:35.987 12:33:35 -- accel/accel.sh@19 -- # IFS=: 00:05:35.987 12:33:35 -- accel/accel.sh@19 -- # read -r var val 00:05:35.987 12:33:35 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:35.987 12:33:35 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.987 12:33:35 -- accel/accel.sh@19 -- # IFS=: 00:05:35.987 12:33:35 -- accel/accel.sh@19 -- # read -r var val 00:05:35.987 12:33:35 -- accel/accel.sh@20 -- # val=32 00:05:35.987 12:33:35 
-- accel/accel.sh@21 -- # case "$var" in 00:05:35.987 12:33:35 -- accel/accel.sh@19 -- # IFS=: 00:05:35.987 12:33:35 -- accel/accel.sh@19 -- # read -r var val 00:05:35.987 12:33:35 -- accel/accel.sh@20 -- # val=32 00:05:35.987 12:33:35 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.987 12:33:35 -- accel/accel.sh@19 -- # IFS=: 00:05:35.987 12:33:35 -- accel/accel.sh@19 -- # read -r var val 00:05:35.987 12:33:35 -- accel/accel.sh@20 -- # val=1 00:05:35.987 12:33:35 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.987 12:33:35 -- accel/accel.sh@19 -- # IFS=: 00:05:35.987 12:33:35 -- accel/accel.sh@19 -- # read -r var val 00:05:35.987 12:33:35 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:35.987 12:33:35 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.987 12:33:35 -- accel/accel.sh@19 -- # IFS=: 00:05:35.987 12:33:35 -- accel/accel.sh@19 -- # read -r var val 00:05:35.987 12:33:35 -- accel/accel.sh@20 -- # val=Yes 00:05:35.987 12:33:35 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.987 12:33:35 -- accel/accel.sh@19 -- # IFS=: 00:05:35.987 12:33:35 -- accel/accel.sh@19 -- # read -r var val 00:05:35.987 12:33:35 -- accel/accel.sh@20 -- # val= 00:05:35.987 12:33:35 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.987 12:33:35 -- accel/accel.sh@19 -- # IFS=: 00:05:35.987 12:33:35 -- accel/accel.sh@19 -- # read -r var val 00:05:35.987 12:33:35 -- accel/accel.sh@20 -- # val= 00:05:35.987 12:33:35 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.987 12:33:35 -- accel/accel.sh@19 -- # IFS=: 00:05:35.987 12:33:35 -- accel/accel.sh@19 -- # read -r var val 00:05:37.362 12:33:36 -- accel/accel.sh@20 -- # val= 00:05:37.362 12:33:36 -- accel/accel.sh@21 -- # case "$var" in 00:05:37.362 12:33:36 -- accel/accel.sh@19 -- # IFS=: 00:05:37.362 12:33:36 -- accel/accel.sh@19 -- # read -r var val 00:05:37.362 12:33:36 -- accel/accel.sh@20 -- # val= 00:05:37.362 12:33:36 -- accel/accel.sh@21 -- # case "$var" in 00:05:37.362 12:33:36 -- accel/accel.sh@19 -- # IFS=: 00:05:37.362 12:33:36 -- accel/accel.sh@19 -- # read -r var val 00:05:37.362 12:33:36 -- accel/accel.sh@20 -- # val= 00:05:37.362 12:33:36 -- accel/accel.sh@21 -- # case "$var" in 00:05:37.362 12:33:36 -- accel/accel.sh@19 -- # IFS=: 00:05:37.362 12:33:36 -- accel/accel.sh@19 -- # read -r var val 00:05:37.362 12:33:36 -- accel/accel.sh@20 -- # val= 00:05:37.362 12:33:36 -- accel/accel.sh@21 -- # case "$var" in 00:05:37.362 12:33:36 -- accel/accel.sh@19 -- # IFS=: 00:05:37.362 12:33:36 -- accel/accel.sh@19 -- # read -r var val 00:05:37.362 12:33:36 -- accel/accel.sh@20 -- # val= 00:05:37.362 12:33:36 -- accel/accel.sh@21 -- # case "$var" in 00:05:37.362 12:33:36 -- accel/accel.sh@19 -- # IFS=: 00:05:37.362 12:33:36 -- accel/accel.sh@19 -- # read -r var val 00:05:37.362 12:33:36 -- accel/accel.sh@20 -- # val= 00:05:37.362 12:33:36 -- accel/accel.sh@21 -- # case "$var" in 00:05:37.362 12:33:36 -- accel/accel.sh@19 -- # IFS=: 00:05:37.362 12:33:36 -- accel/accel.sh@19 -- # read -r var val 00:05:37.362 12:33:36 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:37.362 12:33:36 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:37.362 12:33:36 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:37.362 00:05:37.362 real 0m1.507s 00:05:37.362 user 0m1.352s 00:05:37.362 sys 0m0.157s 00:05:37.362 12:33:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:37.362 12:33:36 -- common/autotest_common.sh@10 -- # set +x 00:05:37.362 ************************************ 00:05:37.362 END TEST accel_decmop_full 00:05:37.362 
************************************ 00:05:37.362 12:33:36 -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:37.362 12:33:36 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:05:37.362 12:33:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:37.362 12:33:36 -- common/autotest_common.sh@10 -- # set +x 00:05:37.362 ************************************ 00:05:37.362 START TEST accel_decomp_mcore 00:05:37.362 ************************************ 00:05:37.362 12:33:36 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:37.362 12:33:36 -- accel/accel.sh@16 -- # local accel_opc 00:05:37.362 12:33:36 -- accel/accel.sh@17 -- # local accel_module 00:05:37.362 12:33:36 -- accel/accel.sh@19 -- # IFS=: 00:05:37.362 12:33:36 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:37.362 12:33:36 -- accel/accel.sh@19 -- # read -r var val 00:05:37.362 12:33:36 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:37.362 12:33:36 -- accel/accel.sh@12 -- # build_accel_config 00:05:37.362 12:33:36 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:37.362 12:33:36 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:37.362 12:33:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:37.362 12:33:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:37.362 12:33:36 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:37.362 12:33:36 -- accel/accel.sh@40 -- # local IFS=, 00:05:37.362 12:33:36 -- accel/accel.sh@41 -- # jq -r . 00:05:37.362 [2024-04-16 12:33:36.383087] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 
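accel_decomp_mcore passes -m 0xf, which reaches DPDK as the EAL coremask -c 0xf in the parameters line below; the four 'Reactor started on core N' notices confirm cores 0-3 all came up. A small sketch of expanding such a mask in plain bash:

  # Expand a coremask like 0xf into the cores the reactors will occupy.
  mask=0xf
  for core in $(seq 0 31); do
      (( (mask >> core) & 1 )) && echo "reactor expected on core $core"
  done
  # 0xf selects cores 0 1 2 3, matching the four reactor notices in the log.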
00:05:37.362 [2024-04-16 12:33:36.383151] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1067446 ] 00:05:37.362 EAL: No free 2048 kB hugepages reported on node 1 00:05:37.620 [2024-04-16 12:33:36.457465] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:37.620 [2024-04-16 12:33:36.580890] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:37.620 [2024-04-16 12:33:36.580945] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:37.620 [2024-04-16 12:33:36.580997] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:37.620 [2024-04-16 12:33:36.581000] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.620 [2024-04-16 12:33:36.581819] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:05:37.620 12:33:36 -- accel/accel.sh@20 -- # val= 00:05:37.620 12:33:36 -- accel/accel.sh@21 -- # case "$var" in 00:05:37.620 12:33:36 -- accel/accel.sh@19 -- # IFS=: 00:05:37.620 12:33:36 -- accel/accel.sh@19 -- # read -r var val 00:05:37.620 12:33:36 -- accel/accel.sh@20 -- # val= 00:05:37.620 12:33:36 -- accel/accel.sh@21 -- # case "$var" in 00:05:37.620 12:33:36 -- accel/accel.sh@19 -- # IFS=: 00:05:37.620 12:33:36 -- accel/accel.sh@19 -- # read -r var val 00:05:37.620 12:33:36 -- accel/accel.sh@20 -- # val= 00:05:37.620 12:33:36 -- accel/accel.sh@21 -- # case "$var" in 00:05:37.620 12:33:36 -- accel/accel.sh@19 -- # IFS=: 00:05:37.620 12:33:36 -- accel/accel.sh@19 -- # read -r var val 00:05:37.620 12:33:36 -- accel/accel.sh@20 -- # val=0xf 00:05:37.620 12:33:36 -- accel/accel.sh@21 -- # case "$var" in 00:05:37.620 12:33:36 -- accel/accel.sh@19 -- # IFS=: 00:05:37.620 12:33:36 -- accel/accel.sh@19 -- # read -r var val 00:05:37.620 12:33:36 -- accel/accel.sh@20 -- # val= 00:05:37.620 12:33:36 -- accel/accel.sh@21 -- # case "$var" in 00:05:37.620 12:33:36 -- accel/accel.sh@19 -- # IFS=: 00:05:37.620 12:33:36 -- accel/accel.sh@19 -- # read -r var val 00:05:37.620 12:33:36 -- accel/accel.sh@20 -- # val= 00:05:37.620 12:33:36 -- accel/accel.sh@21 -- # case "$var" in 00:05:37.620 12:33:36 -- accel/accel.sh@19 -- # IFS=: 00:05:37.620 12:33:36 -- accel/accel.sh@19 -- # read -r var val 00:05:37.620 12:33:36 -- accel/accel.sh@20 -- # val=decompress 00:05:37.620 12:33:36 -- accel/accel.sh@21 -- # case "$var" in 00:05:37.620 12:33:36 -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:37.620 12:33:36 -- accel/accel.sh@19 -- # IFS=: 00:05:37.620 12:33:36 -- accel/accel.sh@19 -- # read -r var val 00:05:37.620 12:33:36 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:37.620 12:33:36 -- accel/accel.sh@21 -- # case "$var" in 00:05:37.620 12:33:36 -- accel/accel.sh@19 -- # IFS=: 00:05:37.620 12:33:36 -- accel/accel.sh@19 -- # read -r var val 00:05:37.620 12:33:36 -- accel/accel.sh@20 -- # val= 00:05:37.620 12:33:36 -- accel/accel.sh@21 -- # case "$var" in 00:05:37.620 12:33:36 -- accel/accel.sh@19 -- # IFS=: 00:05:37.620 12:33:36 -- accel/accel.sh@19 -- # read -r var val 00:05:37.620 12:33:36 -- accel/accel.sh@20 -- # val=software 00:05:37.620 12:33:36 -- accel/accel.sh@21 -- # case "$var" in 00:05:37.620 12:33:36 -- accel/accel.sh@22 -- # accel_module=software 00:05:37.620 12:33:36 -- accel/accel.sh@19 -- # IFS=: 00:05:37.620 12:33:36 -- accel/accel.sh@19 -- # read -r var val 00:05:37.620 12:33:36 -- accel/accel.sh@20 -- # 
val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:37.620 12:33:36 -- accel/accel.sh@21 -- # case "$var" in 00:05:37.620 12:33:36 -- accel/accel.sh@19 -- # IFS=: 00:05:37.620 12:33:36 -- accel/accel.sh@19 -- # read -r var val 00:05:37.620 12:33:36 -- accel/accel.sh@20 -- # val=32 00:05:37.620 12:33:36 -- accel/accel.sh@21 -- # case "$var" in 00:05:37.620 12:33:36 -- accel/accel.sh@19 -- # IFS=: 00:05:37.620 12:33:36 -- accel/accel.sh@19 -- # read -r var val 00:05:37.620 12:33:36 -- accel/accel.sh@20 -- # val=32 00:05:37.620 12:33:36 -- accel/accel.sh@21 -- # case "$var" in 00:05:37.620 12:33:36 -- accel/accel.sh@19 -- # IFS=: 00:05:37.620 12:33:36 -- accel/accel.sh@19 -- # read -r var val 00:05:37.620 12:33:36 -- accel/accel.sh@20 -- # val=1 00:05:37.620 12:33:36 -- accel/accel.sh@21 -- # case "$var" in 00:05:37.621 12:33:36 -- accel/accel.sh@19 -- # IFS=: 00:05:37.621 12:33:36 -- accel/accel.sh@19 -- # read -r var val 00:05:37.621 12:33:36 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:37.621 12:33:36 -- accel/accel.sh@21 -- # case "$var" in 00:05:37.621 12:33:36 -- accel/accel.sh@19 -- # IFS=: 00:05:37.621 12:33:36 -- accel/accel.sh@19 -- # read -r var val 00:05:37.621 12:33:36 -- accel/accel.sh@20 -- # val=Yes 00:05:37.621 12:33:36 -- accel/accel.sh@21 -- # case "$var" in 00:05:37.621 12:33:36 -- accel/accel.sh@19 -- # IFS=: 00:05:37.621 12:33:36 -- accel/accel.sh@19 -- # read -r var val 00:05:37.621 12:33:36 -- accel/accel.sh@20 -- # val= 00:05:37.621 12:33:36 -- accel/accel.sh@21 -- # case "$var" in 00:05:37.621 12:33:36 -- accel/accel.sh@19 -- # IFS=: 00:05:37.621 12:33:36 -- accel/accel.sh@19 -- # read -r var val 00:05:37.621 12:33:36 -- accel/accel.sh@20 -- # val= 00:05:37.621 12:33:36 -- accel/accel.sh@21 -- # case "$var" in 00:05:37.621 12:33:36 -- accel/accel.sh@19 -- # IFS=: 00:05:37.621 12:33:36 -- accel/accel.sh@19 -- # read -r var val 00:05:38.996 12:33:37 -- accel/accel.sh@20 -- # val= 00:05:38.996 12:33:37 -- accel/accel.sh@21 -- # case "$var" in 00:05:38.996 12:33:37 -- accel/accel.sh@19 -- # IFS=: 00:05:38.996 12:33:37 -- accel/accel.sh@19 -- # read -r var val 00:05:38.996 12:33:37 -- accel/accel.sh@20 -- # val= 00:05:38.996 12:33:37 -- accel/accel.sh@21 -- # case "$var" in 00:05:38.996 12:33:37 -- accel/accel.sh@19 -- # IFS=: 00:05:38.996 12:33:37 -- accel/accel.sh@19 -- # read -r var val 00:05:38.996 12:33:37 -- accel/accel.sh@20 -- # val= 00:05:38.996 12:33:37 -- accel/accel.sh@21 -- # case "$var" in 00:05:38.996 12:33:37 -- accel/accel.sh@19 -- # IFS=: 00:05:38.996 12:33:37 -- accel/accel.sh@19 -- # read -r var val 00:05:38.996 12:33:37 -- accel/accel.sh@20 -- # val= 00:05:38.996 12:33:37 -- accel/accel.sh@21 -- # case "$var" in 00:05:38.996 12:33:37 -- accel/accel.sh@19 -- # IFS=: 00:05:38.996 12:33:37 -- accel/accel.sh@19 -- # read -r var val 00:05:38.996 12:33:37 -- accel/accel.sh@20 -- # val= 00:05:38.996 12:33:37 -- accel/accel.sh@21 -- # case "$var" in 00:05:38.996 12:33:37 -- accel/accel.sh@19 -- # IFS=: 00:05:38.996 12:33:37 -- accel/accel.sh@19 -- # read -r var val 00:05:38.996 12:33:37 -- accel/accel.sh@20 -- # val= 00:05:38.996 12:33:37 -- accel/accel.sh@21 -- # case "$var" in 00:05:38.996 12:33:37 -- accel/accel.sh@19 -- # IFS=: 00:05:38.996 12:33:37 -- accel/accel.sh@19 -- # read -r var val 00:05:38.996 12:33:37 -- accel/accel.sh@20 -- # val= 00:05:38.996 12:33:37 -- accel/accel.sh@21 -- # case "$var" in 00:05:38.996 12:33:37 -- accel/accel.sh@19 -- # IFS=: 00:05:38.996 12:33:37 -- accel/accel.sh@19 -- # read -r var val 
00:05:38.996 12:33:37 -- accel/accel.sh@20 -- # val= 00:05:38.996 12:33:37 -- accel/accel.sh@21 -- # case "$var" in 00:05:38.996 12:33:37 -- accel/accel.sh@19 -- # IFS=: 00:05:38.996 12:33:37 -- accel/accel.sh@19 -- # read -r var val 00:05:38.996 12:33:37 -- accel/accel.sh@20 -- # val= 00:05:38.996 12:33:37 -- accel/accel.sh@21 -- # case "$var" in 00:05:38.996 12:33:37 -- accel/accel.sh@19 -- # IFS=: 00:05:38.996 12:33:37 -- accel/accel.sh@19 -- # read -r var val 00:05:38.996 12:33:37 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:38.996 12:33:37 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:38.996 12:33:37 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:38.996 00:05:38.996 real 0m1.509s 00:05:38.996 user 0m4.822s 00:05:38.996 sys 0m0.156s 00:05:38.996 12:33:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:38.996 12:33:37 -- common/autotest_common.sh@10 -- # set +x 00:05:38.996 ************************************ 00:05:38.996 END TEST accel_decomp_mcore 00:05:38.996 ************************************ 00:05:38.996 12:33:37 -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:38.996 12:33:37 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:05:38.996 12:33:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:38.996 12:33:37 -- common/autotest_common.sh@10 -- # set +x 00:05:38.996 ************************************ 00:05:38.996 START TEST accel_decomp_full_mcore 00:05:38.996 ************************************ 00:05:38.996 12:33:37 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:38.996 12:33:38 -- accel/accel.sh@16 -- # local accel_opc 00:05:38.996 12:33:38 -- accel/accel.sh@17 -- # local accel_module 00:05:38.996 12:33:38 -- accel/accel.sh@19 -- # IFS=: 00:05:38.996 12:33:38 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:38.996 12:33:38 -- accel/accel.sh@19 -- # read -r var val 00:05:38.996 12:33:38 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:38.996 12:33:38 -- accel/accel.sh@12 -- # build_accel_config 00:05:38.996 12:33:38 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:38.996 12:33:38 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:38.996 12:33:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:38.996 12:33:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:38.996 12:33:38 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:38.996 12:33:38 -- accel/accel.sh@40 -- # local IFS=, 00:05:38.996 12:33:38 -- accel/accel.sh@41 -- # jq -r . 00:05:38.996 [2024-04-16 12:33:38.019115] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 
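The timing summary is the tell that the mcore run really used its four cores: 4.822s of user time against 1.509s of wall time means roughly three cores' worth of CPU was busy in parallel, with the remainder lost to startup and teardown. The same arithmetic, runnable as a one-liner:

  # Effective parallelism = CPU time / wall time, figures from the log above.
  awk 'BEGIN { printf "~%.1f cores busy on average\n", 4.822 / 1.509 }'   # ~3.2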
00:05:38.996 [2024-04-16 12:33:38.019179] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1067614 ] 00:05:38.996 EAL: No free 2048 kB hugepages reported on node 1 00:05:39.255 [2024-04-16 12:33:38.095109] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:39.255 [2024-04-16 12:33:38.219008] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:39.255 [2024-04-16 12:33:38.219061] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:39.255 [2024-04-16 12:33:38.219110] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:39.255 [2024-04-16 12:33:38.219114] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.255 [2024-04-16 12:33:38.219935] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:05:39.255 12:33:38 -- accel/accel.sh@20 -- # val= 00:05:39.255 12:33:38 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.255 12:33:38 -- accel/accel.sh@19 -- # IFS=: 00:05:39.255 12:33:38 -- accel/accel.sh@19 -- # read -r var val 00:05:39.255 12:33:38 -- accel/accel.sh@20 -- # val= 00:05:39.255 12:33:38 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.255 12:33:38 -- accel/accel.sh@19 -- # IFS=: 00:05:39.255 12:33:38 -- accel/accel.sh@19 -- # read -r var val 00:05:39.255 12:33:38 -- accel/accel.sh@20 -- # val= 00:05:39.255 12:33:38 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.255 12:33:38 -- accel/accel.sh@19 -- # IFS=: 00:05:39.255 12:33:38 -- accel/accel.sh@19 -- # read -r var val 00:05:39.255 12:33:38 -- accel/accel.sh@20 -- # val=0xf 00:05:39.255 12:33:38 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.255 12:33:38 -- accel/accel.sh@19 -- # IFS=: 00:05:39.255 12:33:38 -- accel/accel.sh@19 -- # read -r var val 00:05:39.255 12:33:38 -- accel/accel.sh@20 -- # val= 00:05:39.255 12:33:38 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.255 12:33:38 -- accel/accel.sh@19 -- # IFS=: 00:05:39.255 12:33:38 -- accel/accel.sh@19 -- # read -r var val 00:05:39.255 12:33:38 -- accel/accel.sh@20 -- # val= 00:05:39.255 12:33:38 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.255 12:33:38 -- accel/accel.sh@19 -- # IFS=: 00:05:39.255 12:33:38 -- accel/accel.sh@19 -- # read -r var val 00:05:39.255 12:33:38 -- accel/accel.sh@20 -- # val=decompress 00:05:39.255 12:33:38 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.255 12:33:38 -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:39.255 12:33:38 -- accel/accel.sh@19 -- # IFS=: 00:05:39.255 12:33:38 -- accel/accel.sh@19 -- # read -r var val 00:05:39.255 12:33:38 -- accel/accel.sh@20 -- # val='111250 bytes' 00:05:39.255 12:33:38 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.255 12:33:38 -- accel/accel.sh@19 -- # IFS=: 00:05:39.255 12:33:38 -- accel/accel.sh@19 -- # read -r var val 00:05:39.255 12:33:38 -- accel/accel.sh@20 -- # val= 00:05:39.255 12:33:38 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.255 12:33:38 -- accel/accel.sh@19 -- # IFS=: 00:05:39.255 12:33:38 -- accel/accel.sh@19 -- # read -r var val 00:05:39.255 12:33:38 -- accel/accel.sh@20 -- # val=software 00:05:39.255 12:33:38 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.255 12:33:38 -- accel/accel.sh@22 -- # accel_module=software 00:05:39.255 12:33:38 -- accel/accel.sh@19 -- # IFS=: 00:05:39.255 12:33:38 -- accel/accel.sh@19 -- # read -r var val 00:05:39.255 12:33:38 -- accel/accel.sh@20 -- # 
val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:39.255 12:33:38 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.255 12:33:38 -- accel/accel.sh@19 -- # IFS=: 00:05:39.255 12:33:38 -- accel/accel.sh@19 -- # read -r var val 00:05:39.255 12:33:38 -- accel/accel.sh@20 -- # val=32 00:05:39.255 12:33:38 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.255 12:33:38 -- accel/accel.sh@19 -- # IFS=: 00:05:39.255 12:33:38 -- accel/accel.sh@19 -- # read -r var val 00:05:39.255 12:33:38 -- accel/accel.sh@20 -- # val=32 00:05:39.255 12:33:38 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.255 12:33:38 -- accel/accel.sh@19 -- # IFS=: 00:05:39.255 12:33:38 -- accel/accel.sh@19 -- # read -r var val 00:05:39.255 12:33:38 -- accel/accel.sh@20 -- # val=1 00:05:39.255 12:33:38 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.255 12:33:38 -- accel/accel.sh@19 -- # IFS=: 00:05:39.255 12:33:38 -- accel/accel.sh@19 -- # read -r var val 00:05:39.255 12:33:38 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:39.255 12:33:38 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.255 12:33:38 -- accel/accel.sh@19 -- # IFS=: 00:05:39.255 12:33:38 -- accel/accel.sh@19 -- # read -r var val 00:05:39.255 12:33:38 -- accel/accel.sh@20 -- # val=Yes 00:05:39.255 12:33:38 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.255 12:33:38 -- accel/accel.sh@19 -- # IFS=: 00:05:39.255 12:33:38 -- accel/accel.sh@19 -- # read -r var val 00:05:39.255 12:33:38 -- accel/accel.sh@20 -- # val= 00:05:39.255 12:33:38 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.255 12:33:38 -- accel/accel.sh@19 -- # IFS=: 00:05:39.255 12:33:38 -- accel/accel.sh@19 -- # read -r var val 00:05:39.255 12:33:38 -- accel/accel.sh@20 -- # val= 00:05:39.255 12:33:38 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.255 12:33:38 -- accel/accel.sh@19 -- # IFS=: 00:05:39.255 12:33:38 -- accel/accel.sh@19 -- # read -r var val 00:05:40.629 12:33:39 -- accel/accel.sh@20 -- # val= 00:05:40.629 12:33:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.629 12:33:39 -- accel/accel.sh@19 -- # IFS=: 00:05:40.629 12:33:39 -- accel/accel.sh@19 -- # read -r var val 00:05:40.629 12:33:39 -- accel/accel.sh@20 -- # val= 00:05:40.629 12:33:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.629 12:33:39 -- accel/accel.sh@19 -- # IFS=: 00:05:40.629 12:33:39 -- accel/accel.sh@19 -- # read -r var val 00:05:40.629 12:33:39 -- accel/accel.sh@20 -- # val= 00:05:40.629 12:33:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.629 12:33:39 -- accel/accel.sh@19 -- # IFS=: 00:05:40.629 12:33:39 -- accel/accel.sh@19 -- # read -r var val 00:05:40.629 12:33:39 -- accel/accel.sh@20 -- # val= 00:05:40.629 12:33:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.629 12:33:39 -- accel/accel.sh@19 -- # IFS=: 00:05:40.629 12:33:39 -- accel/accel.sh@19 -- # read -r var val 00:05:40.629 12:33:39 -- accel/accel.sh@20 -- # val= 00:05:40.629 12:33:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.629 12:33:39 -- accel/accel.sh@19 -- # IFS=: 00:05:40.629 12:33:39 -- accel/accel.sh@19 -- # read -r var val 00:05:40.629 12:33:39 -- accel/accel.sh@20 -- # val= 00:05:40.629 12:33:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.629 12:33:39 -- accel/accel.sh@19 -- # IFS=: 00:05:40.629 12:33:39 -- accel/accel.sh@19 -- # read -r var val 00:05:40.629 12:33:39 -- accel/accel.sh@20 -- # val= 00:05:40.629 12:33:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.629 12:33:39 -- accel/accel.sh@19 -- # IFS=: 00:05:40.629 12:33:39 -- accel/accel.sh@19 -- # read -r var val 
00:05:40.629 12:33:39 -- accel/accel.sh@20 -- # val= 00:05:40.629 12:33:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.629 12:33:39 -- accel/accel.sh@19 -- # IFS=: 00:05:40.629 12:33:39 -- accel/accel.sh@19 -- # read -r var val 00:05:40.629 12:33:39 -- accel/accel.sh@20 -- # val= 00:05:40.629 12:33:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.629 12:33:39 -- accel/accel.sh@19 -- # IFS=: 00:05:40.629 12:33:39 -- accel/accel.sh@19 -- # read -r var val 00:05:40.629 12:33:39 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:40.629 12:33:39 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:40.629 12:33:39 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:40.629 00:05:40.629 real 0m1.520s 00:05:40.629 user 0m4.852s 00:05:40.629 sys 0m0.163s 00:05:40.629 12:33:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:40.629 12:33:39 -- common/autotest_common.sh@10 -- # set +x 00:05:40.629 ************************************ 00:05:40.629 END TEST accel_decomp_full_mcore 00:05:40.629 ************************************ 00:05:40.629 12:33:39 -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:05:40.629 12:33:39 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:05:40.629 12:33:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:40.629 12:33:39 -- common/autotest_common.sh@10 -- # set +x 00:05:40.629 ************************************ 00:05:40.629 START TEST accel_decomp_mthread 00:05:40.629 ************************************ 00:05:40.629 12:33:39 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:05:40.629 12:33:39 -- accel/accel.sh@16 -- # local accel_opc 00:05:40.629 12:33:39 -- accel/accel.sh@17 -- # local accel_module 00:05:40.629 12:33:39 -- accel/accel.sh@19 -- # IFS=: 00:05:40.629 12:33:39 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:05:40.629 12:33:39 -- accel/accel.sh@19 -- # read -r var val 00:05:40.629 12:33:39 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:05:40.629 12:33:39 -- accel/accel.sh@12 -- # build_accel_config 00:05:40.629 12:33:39 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:40.629 12:33:39 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:40.629 12:33:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:40.629 12:33:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:40.629 12:33:39 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:40.629 12:33:39 -- accel/accel.sh@40 -- # local IFS=, 00:05:40.629 12:33:39 -- accel/accel.sh@41 -- # jq -r . 00:05:40.629 [2024-04-16 12:33:39.665256] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 
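accel_decomp_mthread keeps the single-core mask but adds -T 2, which the trace below records as val=2: two worker threads submit decompress tasks concurrently on the one reactor. A sketch of the invocation, flags verbatim from the command line above (paths assumed as before):

  # Sketch: two submitting threads on a single core.
  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$SPDK_DIR/build/examples/accel_perf" \
      -t 1 -w decompress -y \
      -l "$SPDK_DIR/test/accel/bib" \
      -T 2   # two parallel task submitters, per the traced val=2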
00:05:40.629 [2024-04-16 12:33:39.665322] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1067884 ] 00:05:40.888 EAL: No free 2048 kB hugepages reported on node 1 00:05:40.888 [2024-04-16 12:33:39.738748] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.888 [2024-04-16 12:33:39.856974] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.888 [2024-04-16 12:33:39.857673] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:05:40.888 12:33:39 -- accel/accel.sh@20 -- # val= 00:05:40.888 12:33:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.888 12:33:39 -- accel/accel.sh@19 -- # IFS=: 00:05:40.888 12:33:39 -- accel/accel.sh@19 -- # read -r var val 00:05:40.888 12:33:39 -- accel/accel.sh@20 -- # val= 00:05:40.888 12:33:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.888 12:33:39 -- accel/accel.sh@19 -- # IFS=: 00:05:40.888 12:33:39 -- accel/accel.sh@19 -- # read -r var val 00:05:40.888 12:33:39 -- accel/accel.sh@20 -- # val= 00:05:40.888 12:33:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.888 12:33:39 -- accel/accel.sh@19 -- # IFS=: 00:05:40.888 12:33:39 -- accel/accel.sh@19 -- # read -r var val 00:05:40.888 12:33:39 -- accel/accel.sh@20 -- # val=0x1 00:05:40.888 12:33:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.888 12:33:39 -- accel/accel.sh@19 -- # IFS=: 00:05:40.888 12:33:39 -- accel/accel.sh@19 -- # read -r var val 00:05:40.888 12:33:39 -- accel/accel.sh@20 -- # val= 00:05:40.888 12:33:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.888 12:33:39 -- accel/accel.sh@19 -- # IFS=: 00:05:40.888 12:33:39 -- accel/accel.sh@19 -- # read -r var val 00:05:40.888 12:33:39 -- accel/accel.sh@20 -- # val= 00:05:40.888 12:33:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.888 12:33:39 -- accel/accel.sh@19 -- # IFS=: 00:05:40.888 12:33:39 -- accel/accel.sh@19 -- # read -r var val 00:05:40.888 12:33:39 -- accel/accel.sh@20 -- # val=decompress 00:05:40.888 12:33:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.888 12:33:39 -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:40.888 12:33:39 -- accel/accel.sh@19 -- # IFS=: 00:05:40.888 12:33:39 -- accel/accel.sh@19 -- # read -r var val 00:05:40.888 12:33:39 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:40.888 12:33:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.888 12:33:39 -- accel/accel.sh@19 -- # IFS=: 00:05:40.888 12:33:39 -- accel/accel.sh@19 -- # read -r var val 00:05:40.888 12:33:39 -- accel/accel.sh@20 -- # val= 00:05:40.888 12:33:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.888 12:33:39 -- accel/accel.sh@19 -- # IFS=: 00:05:40.888 12:33:39 -- accel/accel.sh@19 -- # read -r var val 00:05:40.888 12:33:39 -- accel/accel.sh@20 -- # val=software 00:05:40.888 12:33:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.888 12:33:39 -- accel/accel.sh@22 -- # accel_module=software 00:05:40.888 12:33:39 -- accel/accel.sh@19 -- # IFS=: 00:05:40.888 12:33:39 -- accel/accel.sh@19 -- # read -r var val 00:05:40.888 12:33:39 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:40.888 12:33:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.888 12:33:39 -- accel/accel.sh@19 -- # IFS=: 00:05:40.888 12:33:39 -- accel/accel.sh@19 -- # read -r var val 00:05:40.888 12:33:39 -- accel/accel.sh@20 -- # val=32 00:05:40.888 12:33:39 -- 
accel/accel.sh@21 -- # case "$var" in 00:05:40.888 12:33:39 -- accel/accel.sh@19 -- # IFS=: 00:05:40.888 12:33:39 -- accel/accel.sh@19 -- # read -r var val 00:05:40.888 12:33:39 -- accel/accel.sh@20 -- # val=32 00:05:40.888 12:33:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.888 12:33:39 -- accel/accel.sh@19 -- # IFS=: 00:05:40.888 12:33:39 -- accel/accel.sh@19 -- # read -r var val 00:05:40.888 12:33:39 -- accel/accel.sh@20 -- # val=2 00:05:40.888 12:33:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.888 12:33:39 -- accel/accel.sh@19 -- # IFS=: 00:05:40.888 12:33:39 -- accel/accel.sh@19 -- # read -r var val 00:05:40.888 12:33:39 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:40.888 12:33:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.888 12:33:39 -- accel/accel.sh@19 -- # IFS=: 00:05:40.888 12:33:39 -- accel/accel.sh@19 -- # read -r var val 00:05:40.888 12:33:39 -- accel/accel.sh@20 -- # val=Yes 00:05:40.888 12:33:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.888 12:33:39 -- accel/accel.sh@19 -- # IFS=: 00:05:40.888 12:33:39 -- accel/accel.sh@19 -- # read -r var val 00:05:40.888 12:33:39 -- accel/accel.sh@20 -- # val= 00:05:40.888 12:33:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.888 12:33:39 -- accel/accel.sh@19 -- # IFS=: 00:05:40.888 12:33:39 -- accel/accel.sh@19 -- # read -r var val 00:05:40.888 12:33:39 -- accel/accel.sh@20 -- # val= 00:05:40.888 12:33:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.888 12:33:39 -- accel/accel.sh@19 -- # IFS=: 00:05:40.888 12:33:39 -- accel/accel.sh@19 -- # read -r var val 00:05:42.260 12:33:41 -- accel/accel.sh@20 -- # val= 00:05:42.260 12:33:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.260 12:33:41 -- accel/accel.sh@19 -- # IFS=: 00:05:42.261 12:33:41 -- accel/accel.sh@19 -- # read -r var val 00:05:42.261 12:33:41 -- accel/accel.sh@20 -- # val= 00:05:42.261 12:33:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.261 12:33:41 -- accel/accel.sh@19 -- # IFS=: 00:05:42.261 12:33:41 -- accel/accel.sh@19 -- # read -r var val 00:05:42.261 12:33:41 -- accel/accel.sh@20 -- # val= 00:05:42.261 12:33:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.261 12:33:41 -- accel/accel.sh@19 -- # IFS=: 00:05:42.261 12:33:41 -- accel/accel.sh@19 -- # read -r var val 00:05:42.261 12:33:41 -- accel/accel.sh@20 -- # val= 00:05:42.261 12:33:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.261 12:33:41 -- accel/accel.sh@19 -- # IFS=: 00:05:42.261 12:33:41 -- accel/accel.sh@19 -- # read -r var val 00:05:42.261 12:33:41 -- accel/accel.sh@20 -- # val= 00:05:42.261 12:33:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.261 12:33:41 -- accel/accel.sh@19 -- # IFS=: 00:05:42.261 12:33:41 -- accel/accel.sh@19 -- # read -r var val 00:05:42.261 12:33:41 -- accel/accel.sh@20 -- # val= 00:05:42.261 12:33:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.261 12:33:41 -- accel/accel.sh@19 -- # IFS=: 00:05:42.261 12:33:41 -- accel/accel.sh@19 -- # read -r var val 00:05:42.261 12:33:41 -- accel/accel.sh@20 -- # val= 00:05:42.261 12:33:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.261 12:33:41 -- accel/accel.sh@19 -- # IFS=: 00:05:42.261 12:33:41 -- accel/accel.sh@19 -- # read -r var val 00:05:42.261 12:33:41 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:42.261 12:33:41 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:42.261 12:33:41 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:42.261 00:05:42.261 real 0m1.499s 00:05:42.261 user 0m1.349s 00:05:42.261 sys 0m0.151s 00:05:42.261 12:33:41 -- 
common/autotest_common.sh@1112 -- # xtrace_disable 00:05:42.261 12:33:41 -- common/autotest_common.sh@10 -- # set +x 00:05:42.261 ************************************ 00:05:42.261 END TEST accel_decomp_mthread 00:05:42.261 ************************************ 00:05:42.261 12:33:41 -- accel/accel.sh@122 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:05:42.261 12:33:41 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:05:42.261 12:33:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:42.261 12:33:41 -- common/autotest_common.sh@10 -- # set +x 00:05:42.261 ************************************ 00:05:42.261 START TEST accel_deomp_full_mthread 00:05:42.261 ************************************ 00:05:42.261 12:33:41 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:05:42.261 12:33:41 -- accel/accel.sh@16 -- # local accel_opc 00:05:42.261 12:33:41 -- accel/accel.sh@17 -- # local accel_module 00:05:42.261 12:33:41 -- accel/accel.sh@19 -- # IFS=: 00:05:42.261 12:33:41 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:05:42.261 12:33:41 -- accel/accel.sh@19 -- # read -r var val 00:05:42.261 12:33:41 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:05:42.261 12:33:41 -- accel/accel.sh@12 -- # build_accel_config 00:05:42.261 12:33:41 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:42.261 12:33:41 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:42.261 12:33:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:42.261 12:33:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:42.261 12:33:41 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:42.261 12:33:41 -- accel/accel.sh@40 -- # local IFS=, 00:05:42.261 12:33:41 -- accel/accel.sh@41 -- # jq -r . 00:05:42.261 [2024-04-16 12:33:41.294277] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 
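The START TEST / END TEST banners and the real/user/sys triplet around every block come from the run_test helper in common/autotest_common.sh: it prints a banner, times the test body, and prints the closing banner only if the body succeeded. A simplified sketch of that pattern, reconstructed from the output in this log (the real helper also manages xtrace state and return codes):

  # Minimal run_test lookalike, inferred from the banners above.
  run_test() {
      local name=$1; shift
      printf '************************************\nSTART TEST %s\n************************************\n' "$name"
      time "$@" || return 1   # produces the real/user/sys lines
      printf '************************************\nEND TEST %s\n************************************\n' "$name"
  }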
00:05:42.261 [2024-04-16 12:33:41.294346] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1068061 ] 00:05:42.261 EAL: No free 2048 kB hugepages reported on node 1 00:05:42.519 [2024-04-16 12:33:41.368155] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.519 [2024-04-16 12:33:41.485185] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.519 [2024-04-16 12:33:41.485864] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:05:42.519 12:33:41 -- accel/accel.sh@20 -- # val= 00:05:42.519 12:33:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.519 12:33:41 -- accel/accel.sh@19 -- # IFS=: 00:05:42.519 12:33:41 -- accel/accel.sh@19 -- # read -r var val 00:05:42.519 12:33:41 -- accel/accel.sh@20 -- # val= 00:05:42.519 12:33:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.519 12:33:41 -- accel/accel.sh@19 -- # IFS=: 00:05:42.519 12:33:41 -- accel/accel.sh@19 -- # read -r var val 00:05:42.519 12:33:41 -- accel/accel.sh@20 -- # val= 00:05:42.519 12:33:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.519 12:33:41 -- accel/accel.sh@19 -- # IFS=: 00:05:42.519 12:33:41 -- accel/accel.sh@19 -- # read -r var val 00:05:42.519 12:33:41 -- accel/accel.sh@20 -- # val=0x1 00:05:42.519 12:33:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.519 12:33:41 -- accel/accel.sh@19 -- # IFS=: 00:05:42.519 12:33:41 -- accel/accel.sh@19 -- # read -r var val 00:05:42.519 12:33:41 -- accel/accel.sh@20 -- # val= 00:05:42.519 12:33:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.519 12:33:41 -- accel/accel.sh@19 -- # IFS=: 00:05:42.519 12:33:41 -- accel/accel.sh@19 -- # read -r var val 00:05:42.519 12:33:41 -- accel/accel.sh@20 -- # val= 00:05:42.519 12:33:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.519 12:33:41 -- accel/accel.sh@19 -- # IFS=: 00:05:42.519 12:33:41 -- accel/accel.sh@19 -- # read -r var val 00:05:42.519 12:33:41 -- accel/accel.sh@20 -- # val=decompress 00:05:42.519 12:33:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.519 12:33:41 -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:42.519 12:33:41 -- accel/accel.sh@19 -- # IFS=: 00:05:42.519 12:33:41 -- accel/accel.sh@19 -- # read -r var val 00:05:42.519 12:33:41 -- accel/accel.sh@20 -- # val='111250 bytes' 00:05:42.519 12:33:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.519 12:33:41 -- accel/accel.sh@19 -- # IFS=: 00:05:42.519 12:33:41 -- accel/accel.sh@19 -- # read -r var val 00:05:42.519 12:33:41 -- accel/accel.sh@20 -- # val= 00:05:42.519 12:33:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.519 12:33:41 -- accel/accel.sh@19 -- # IFS=: 00:05:42.519 12:33:41 -- accel/accel.sh@19 -- # read -r var val 00:05:42.519 12:33:41 -- accel/accel.sh@20 -- # val=software 00:05:42.519 12:33:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.519 12:33:41 -- accel/accel.sh@22 -- # accel_module=software 00:05:42.519 12:33:41 -- accel/accel.sh@19 -- # IFS=: 00:05:42.519 12:33:41 -- accel/accel.sh@19 -- # read -r var val 00:05:42.519 12:33:41 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:42.519 12:33:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.519 12:33:41 -- accel/accel.sh@19 -- # IFS=: 00:05:42.519 12:33:41 -- accel/accel.sh@19 -- # read -r var val 00:05:42.519 12:33:41 -- accel/accel.sh@20 -- # val=32 00:05:42.519 12:33:41 
-- accel/accel.sh@21 -- # case "$var" in 00:05:42.519 12:33:41 -- accel/accel.sh@19 -- # IFS=: 00:05:42.519 12:33:41 -- accel/accel.sh@19 -- # read -r var val 00:05:42.519 12:33:41 -- accel/accel.sh@20 -- # val=32 00:05:42.519 12:33:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.519 12:33:41 -- accel/accel.sh@19 -- # IFS=: 00:05:42.519 12:33:41 -- accel/accel.sh@19 -- # read -r var val 00:05:42.519 12:33:41 -- accel/accel.sh@20 -- # val=2 00:05:42.519 12:33:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.519 12:33:41 -- accel/accel.sh@19 -- # IFS=: 00:05:42.519 12:33:41 -- accel/accel.sh@19 -- # read -r var val 00:05:42.519 12:33:41 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:42.519 12:33:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.519 12:33:41 -- accel/accel.sh@19 -- # IFS=: 00:05:42.519 12:33:41 -- accel/accel.sh@19 -- # read -r var val 00:05:42.519 12:33:41 -- accel/accel.sh@20 -- # val=Yes 00:05:42.519 12:33:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.519 12:33:41 -- accel/accel.sh@19 -- # IFS=: 00:05:42.519 12:33:41 -- accel/accel.sh@19 -- # read -r var val 00:05:42.519 12:33:41 -- accel/accel.sh@20 -- # val= 00:05:42.519 12:33:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.519 12:33:41 -- accel/accel.sh@19 -- # IFS=: 00:05:42.519 12:33:41 -- accel/accel.sh@19 -- # read -r var val 00:05:42.519 12:33:41 -- accel/accel.sh@20 -- # val= 00:05:42.519 12:33:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.519 12:33:41 -- accel/accel.sh@19 -- # IFS=: 00:05:42.519 12:33:41 -- accel/accel.sh@19 -- # read -r var val 00:05:43.893 12:33:42 -- accel/accel.sh@20 -- # val= 00:05:43.893 12:33:42 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.893 12:33:42 -- accel/accel.sh@19 -- # IFS=: 00:05:43.893 12:33:42 -- accel/accel.sh@19 -- # read -r var val 00:05:43.893 12:33:42 -- accel/accel.sh@20 -- # val= 00:05:43.893 12:33:42 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.893 12:33:42 -- accel/accel.sh@19 -- # IFS=: 00:05:43.893 12:33:42 -- accel/accel.sh@19 -- # read -r var val 00:05:43.893 12:33:42 -- accel/accel.sh@20 -- # val= 00:05:43.893 12:33:42 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.893 12:33:42 -- accel/accel.sh@19 -- # IFS=: 00:05:43.893 12:33:42 -- accel/accel.sh@19 -- # read -r var val 00:05:43.893 12:33:42 -- accel/accel.sh@20 -- # val= 00:05:43.893 12:33:42 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.893 12:33:42 -- accel/accel.sh@19 -- # IFS=: 00:05:43.893 12:33:42 -- accel/accel.sh@19 -- # read -r var val 00:05:43.893 12:33:42 -- accel/accel.sh@20 -- # val= 00:05:43.893 12:33:42 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.893 12:33:42 -- accel/accel.sh@19 -- # IFS=: 00:05:43.893 12:33:42 -- accel/accel.sh@19 -- # read -r var val 00:05:43.893 12:33:42 -- accel/accel.sh@20 -- # val= 00:05:43.893 12:33:42 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.893 12:33:42 -- accel/accel.sh@19 -- # IFS=: 00:05:43.893 12:33:42 -- accel/accel.sh@19 -- # read -r var val 00:05:43.893 12:33:42 -- accel/accel.sh@20 -- # val= 00:05:43.893 12:33:42 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.893 12:33:42 -- accel/accel.sh@19 -- # IFS=: 00:05:43.893 12:33:42 -- accel/accel.sh@19 -- # read -r var val 00:05:43.893 12:33:42 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:43.893 12:33:42 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:43.893 12:33:42 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:43.893 00:05:43.893 real 0m1.516s 00:05:43.893 user 0m1.362s 00:05:43.893 sys 0m0.156s 00:05:43.893 12:33:42 -- 
common/autotest_common.sh@1112 -- # xtrace_disable 00:05:43.893 12:33:42 -- common/autotest_common.sh@10 -- # set +x 00:05:43.893 ************************************ 00:05:43.893 END TEST accel_deomp_full_mthread 00:05:43.893 ************************************ 00:05:43.893 12:33:42 -- accel/accel.sh@124 -- # [[ n == y ]] 00:05:43.893 12:33:42 -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:05:43.893 12:33:42 -- accel/accel.sh@137 -- # build_accel_config 00:05:43.893 12:33:42 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:43.893 12:33:42 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:05:43.893 12:33:42 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:43.893 12:33:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:43.893 12:33:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:43.893 12:33:42 -- common/autotest_common.sh@10 -- # set +x 00:05:43.893 12:33:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:43.893 12:33:42 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:43.893 12:33:42 -- accel/accel.sh@40 -- # local IFS=, 00:05:43.893 12:33:42 -- accel/accel.sh@41 -- # jq -r . 00:05:43.893 ************************************ 00:05:43.893 START TEST accel_dif_functional_tests 00:05:43.893 ************************************ 00:05:43.893 12:33:42 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:05:44.151 [2024-04-16 12:33:42.962457] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 00:05:44.151 [2024-04-16 12:33:42.962527] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1068232 ] 00:05:44.151 EAL: No free 2048 kB hugepages reported on node 1 00:05:44.151 [2024-04-16 12:33:43.040428] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:44.151 [2024-04-16 12:33:43.162307] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:44.151 [2024-04-16 12:33:43.162364] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:44.151 [2024-04-16 12:33:43.162367] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.151 [2024-04-16 12:33:43.163183] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:05:44.410 00:05:44.410 00:05:44.410 CUnit - A unit testing framework for C - Version 2.1-3 00:05:44.410 http://cunit.sourceforge.net/ 00:05:44.410 00:05:44.410 00:05:44.410 Suite: accel_dif 00:05:44.410 Test: verify: DIF generated, GUARD check ...passed 00:05:44.410 Test: verify: DIF generated, APPTAG check ...passed 00:05:44.410 Test: verify: DIF generated, REFTAG check ...passed 00:05:44.410 Test: verify: DIF not generated, GUARD check ...[2024-04-16 12:33:43.261552] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:05:44.410 [2024-04-16 12:33:43.261639] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:05:44.410 passed 00:05:44.410 Test: verify: DIF not generated, APPTAG check ...[2024-04-16 12:33:43.261685] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:05:44.410 [2024-04-16 12:33:43.261718] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:05:44.410 
passed 00:05:44.410 Test: verify: DIF not generated, REFTAG check ...[2024-04-16 12:33:43.261755] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:05:44.410 [2024-04-16 12:33:43.261787] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:05:44.410 passed 00:05:44.410 Test: verify: APPTAG correct, APPTAG check ...passed 00:05:44.410 Test: verify: APPTAG incorrect, APPTAG check ...[2024-04-16 12:33:43.261866] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:05:44.410 passed 00:05:44.410 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:05:44.410 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:05:44.410 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:05:44.410 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-04-16 12:33:43.262024] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:05:44.410 passed 00:05:44.410 Test: generate copy: DIF generated, GUARD check ...passed 00:05:44.410 Test: generate copy: DIF generated, APTTAG check ...passed 00:05:44.410 Test: generate copy: DIF generated, REFTAG check ...passed 00:05:44.410 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:05:44.410 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:05:44.410 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:05:44.410 Test: generate copy: iovecs-len validate ...[2024-04-16 12:33:43.262286] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:05:44.410 passed 00:05:44.410 Test: generate copy: buffer alignment validate ...passed 00:05:44.410 00:05:44.410 Run Summary: Type Total Ran Passed Failed Inactive 00:05:44.410 suites 1 1 n/a 0 0 00:05:44.410 tests 20 20 20 0 0 00:05:44.410 asserts 204 204 204 0 n/a 00:05:44.410 00:05:44.410 Elapsed time = 0.003 seconds 00:05:44.668 00:05:44.668 real 0m0.606s 00:05:44.668 user 0m0.838s 00:05:44.668 sys 0m0.204s 00:05:44.668 12:33:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:44.668 12:33:43 -- common/autotest_common.sh@10 -- # set +x 00:05:44.668 ************************************ 00:05:44.668 END TEST accel_dif_functional_tests 00:05:44.668 ************************************ 00:05:44.668 00:05:44.668 real 0m35.678s 00:05:44.668 user 0m37.588s 00:05:44.668 sys 0m5.792s 00:05:44.668 12:33:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:44.668 12:33:43 -- common/autotest_common.sh@10 -- # set +x 00:05:44.668 ************************************ 00:05:44.668 END TEST accel 00:05:44.668 ************************************ 00:05:44.668 12:33:43 -- spdk/autotest.sh@180 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:05:44.668 12:33:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:44.668 12:33:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:44.668 12:33:43 -- common/autotest_common.sh@10 -- # set +x 00:05:44.668 ************************************ 00:05:44.668 START TEST accel_rpc 00:05:44.668 ************************************ 00:05:44.668 12:33:43 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:05:44.668 * Looking for test storage... 
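The CUnit suite above walks SPDK's software DIF engine through every verify and generate-copy path: Guard, App Tag and Ref Tag mismatches, plus the iovecs-len alignment check. As a minimal sketch of replaying the same binary by hand, assuming an empty JSON object is an acceptable no-op accel config (an assumption; the harness builds the real config with jq and feeds it over fd 62):

    # sketch: replay the DIF functional tests outside run_test
    cfg='{}'   # assumption: stands in for the harness-built accel config
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c <(echo "$cfg")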
00:05:44.668 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:05:44.668 12:33:43 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:44.668 12:33:43 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=1068426 00:05:44.668 12:33:43 -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:05:44.668 12:33:43 -- accel/accel_rpc.sh@15 -- # waitforlisten 1068426 00:05:44.668 12:33:43 -- common/autotest_common.sh@817 -- # '[' -z 1068426 ']' 00:05:44.668 12:33:43 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.668 12:33:43 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:44.668 12:33:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:44.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:44.668 12:33:43 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:44.668 12:33:43 -- common/autotest_common.sh@10 -- # set +x 00:05:44.926 [2024-04-16 12:33:43.777668] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 00:05:44.926 [2024-04-16 12:33:43.777763] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1068426 ] 00:05:44.926 EAL: No free 2048 kB hugepages reported on node 1 00:05:44.926 [2024-04-16 12:33:43.850344] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.926 [2024-04-16 12:33:43.968356] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.860 12:33:44 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:45.860 12:33:44 -- common/autotest_common.sh@850 -- # return 0 00:05:45.860 12:33:44 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:05:45.860 12:33:44 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:05:45.860 12:33:44 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:05:45.860 12:33:44 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:05:45.860 12:33:44 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:05:45.860 12:33:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:45.860 12:33:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:45.860 12:33:44 -- common/autotest_common.sh@10 -- # set +x 00:05:45.860 ************************************ 00:05:45.860 START TEST accel_assign_opcode 00:05:45.860 ************************************ 00:05:45.860 12:33:44 -- common/autotest_common.sh@1111 -- # accel_assign_opcode_test_suite 00:05:45.860 12:33:44 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:05:45.860 12:33:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:45.860 12:33:44 -- common/autotest_common.sh@10 -- # set +x 00:05:45.860 [2024-04-16 12:33:44.859154] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:05:45.860 12:33:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:45.860 12:33:44 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:05:45.860 12:33:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:45.860 12:33:44 -- common/autotest_common.sh@10 -- # set +x 00:05:45.860 [2024-04-16 12:33:44.867160] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: 
Operation copy will be assigned to module software 00:05:45.860 12:33:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:45.860 12:33:44 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:05:45.860 12:33:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:45.860 12:33:44 -- common/autotest_common.sh@10 -- # set +x 00:05:46.118 12:33:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:46.118 12:33:45 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:05:46.118 12:33:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:46.118 12:33:45 -- common/autotest_common.sh@10 -- # set +x 00:05:46.118 12:33:45 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:05:46.118 12:33:45 -- accel/accel_rpc.sh@42 -- # grep software 00:05:46.118 12:33:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:46.118 software 00:05:46.118 00:05:46.118 real 0m0.311s 00:05:46.118 user 0m0.042s 00:05:46.118 sys 0m0.007s 00:05:46.118 12:33:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:46.118 12:33:45 -- common/autotest_common.sh@10 -- # set +x 00:05:46.118 ************************************ 00:05:46.118 END TEST accel_assign_opcode 00:05:46.118 ************************************ 00:05:46.376 12:33:45 -- accel/accel_rpc.sh@55 -- # killprocess 1068426 00:05:46.376 12:33:45 -- common/autotest_common.sh@936 -- # '[' -z 1068426 ']' 00:05:46.376 12:33:45 -- common/autotest_common.sh@940 -- # kill -0 1068426 00:05:46.376 12:33:45 -- common/autotest_common.sh@941 -- # uname 00:05:46.376 12:33:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:46.376 12:33:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1068426 00:05:46.376 12:33:45 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:46.376 12:33:45 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:46.376 12:33:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1068426' 00:05:46.376 killing process with pid 1068426 00:05:46.376 12:33:45 -- common/autotest_common.sh@955 -- # kill 1068426 00:05:46.376 12:33:45 -- common/autotest_common.sh@960 -- # wait 1068426 00:05:46.634 00:05:46.635 real 0m2.011s 00:05:46.635 user 0m2.173s 00:05:46.635 sys 0m0.534s 00:05:46.635 12:33:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:46.635 12:33:45 -- common/autotest_common.sh@10 -- # set +x 00:05:46.635 ************************************ 00:05:46.635 END TEST accel_rpc 00:05:46.635 ************************************ 00:05:46.893 12:33:45 -- spdk/autotest.sh@181 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:46.893 12:33:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:46.893 12:33:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:46.894 12:33:45 -- common/autotest_common.sh@10 -- # set +x 00:05:46.894 ************************************ 00:05:46.894 START TEST app_cmdline 00:05:46.894 ************************************ 00:05:46.894 12:33:45 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:46.894 * Looking for test storage... 
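The accel_rpc pass above reduces to a short JSON-RPC conversation: opcode-to-module assignments are staged while the target is still parked in --wait-for-rpc mode, framework_start_init locks them in, and accel_get_opc_assignments reads them back. A sketch of the same sequence driven with scripts/rpc.py directly (paths as in this workspace; the jq/grep tail mirrors the harness check):

    # sketch: the RPC flow exercised by accel_assign_opcode above
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc &
    ./scripts/rpc.py accel_assign_opc -o copy -m software     # staged before init
    ./scripts/rpc.py framework_start_init                     # init runs, assignment applied
    ./scripts/rpc.py accel_get_opc_assignments | jq -r .copy | grep software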
00:05:46.894 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:46.894 12:33:45 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:46.894 12:33:45 -- app/cmdline.sh@17 -- # spdk_tgt_pid=1068779 00:05:46.894 12:33:45 -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:46.894 12:33:45 -- app/cmdline.sh@18 -- # waitforlisten 1068779 00:05:46.894 12:33:45 -- common/autotest_common.sh@817 -- # '[' -z 1068779 ']' 00:05:46.894 12:33:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.894 12:33:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:46.894 12:33:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:46.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:46.894 12:33:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:46.894 12:33:45 -- common/autotest_common.sh@10 -- # set +x 00:05:46.894 [2024-04-16 12:33:45.916255] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 00:05:46.894 [2024-04-16 12:33:45.916338] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1068779 ] 00:05:46.894 EAL: No free 2048 kB hugepages reported on node 1 00:05:47.152 [2024-04-16 12:33:45.982210] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.152 [2024-04-16 12:33:46.082936] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.440 12:33:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:47.440 12:33:46 -- common/autotest_common.sh@850 -- # return 0 00:05:47.440 12:33:46 -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:05:47.698 { 00:05:47.698 "version": "SPDK v24.05-pre git sha1 1b4773b8f", 00:05:47.698 "fields": { 00:05:47.698 "major": 24, 00:05:47.698 "minor": 5, 00:05:47.698 "patch": 0, 00:05:47.698 "suffix": "-pre", 00:05:47.698 "commit": "1b4773b8f" 00:05:47.698 } 00:05:47.698 } 00:05:47.698 12:33:46 -- app/cmdline.sh@22 -- # expected_methods=() 00:05:47.698 12:33:46 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:47.698 12:33:46 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:47.698 12:33:46 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:47.698 12:33:46 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:47.698 12:33:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:47.698 12:33:46 -- common/autotest_common.sh@10 -- # set +x 00:05:47.698 12:33:46 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:47.698 12:33:46 -- app/cmdline.sh@26 -- # sort 00:05:47.698 12:33:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:47.698 12:33:46 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:47.698 12:33:46 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:47.698 12:33:46 -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:47.698 12:33:46 -- common/autotest_common.sh@638 -- # local es=0 00:05:47.698 12:33:46 -- 
common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:47.698 12:33:46 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:47.698 12:33:46 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:47.698 12:33:46 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:47.698 12:33:46 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:47.698 12:33:46 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:47.698 12:33:46 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:47.698 12:33:46 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:47.698 12:33:46 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:05:47.698 12:33:46 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:47.956 request: 00:05:47.956 { 00:05:47.956 "method": "env_dpdk_get_mem_stats", 00:05:47.956 "req_id": 1 00:05:47.956 } 00:05:47.956 Got JSON-RPC error response 00:05:47.956 response: 00:05:47.956 { 00:05:47.956 "code": -32601, 00:05:47.956 "message": "Method not found" 00:05:47.956 } 00:05:47.956 12:33:46 -- common/autotest_common.sh@641 -- # es=1 00:05:47.956 12:33:46 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:47.956 12:33:46 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:47.956 12:33:46 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:47.956 12:33:46 -- app/cmdline.sh@1 -- # killprocess 1068779 00:05:47.956 12:33:46 -- common/autotest_common.sh@936 -- # '[' -z 1068779 ']' 00:05:47.956 12:33:46 -- common/autotest_common.sh@940 -- # kill -0 1068779 00:05:47.956 12:33:46 -- common/autotest_common.sh@941 -- # uname 00:05:47.956 12:33:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:47.956 12:33:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1068779 00:05:47.956 12:33:46 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:47.956 12:33:46 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:47.956 12:33:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1068779' 00:05:47.956 killing process with pid 1068779 00:05:47.956 12:33:46 -- common/autotest_common.sh@955 -- # kill 1068779 00:05:47.956 12:33:46 -- common/autotest_common.sh@960 -- # wait 1068779 00:05:48.522 00:05:48.522 real 0m1.573s 00:05:48.522 user 0m1.900s 00:05:48.522 sys 0m0.461s 00:05:48.522 12:33:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:48.522 12:33:47 -- common/autotest_common.sh@10 -- # set +x 00:05:48.522 ************************************ 00:05:48.522 END TEST app_cmdline 00:05:48.522 ************************************ 00:05:48.522 12:33:47 -- spdk/autotest.sh@182 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:48.522 12:33:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:48.522 12:33:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:48.522 12:33:47 -- common/autotest_common.sh@10 -- # set +x 00:05:48.522 ************************************ 00:05:48.522 START TEST version 00:05:48.522 
************************************ 00:05:48.522 12:33:47 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:48.522 * Looking for test storage... 00:05:48.522 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:48.522 12:33:47 -- app/version.sh@17 -- # get_header_version major 00:05:48.522 12:33:47 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:48.522 12:33:47 -- app/version.sh@14 -- # cut -f2 00:05:48.522 12:33:47 -- app/version.sh@14 -- # tr -d '"' 00:05:48.522 12:33:47 -- app/version.sh@17 -- # major=24 00:05:48.522 12:33:47 -- app/version.sh@18 -- # get_header_version minor 00:05:48.523 12:33:47 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:48.523 12:33:47 -- app/version.sh@14 -- # cut -f2 00:05:48.523 12:33:47 -- app/version.sh@14 -- # tr -d '"' 00:05:48.523 12:33:47 -- app/version.sh@18 -- # minor=5 00:05:48.523 12:33:47 -- app/version.sh@19 -- # get_header_version patch 00:05:48.523 12:33:47 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:48.523 12:33:47 -- app/version.sh@14 -- # cut -f2 00:05:48.523 12:33:47 -- app/version.sh@14 -- # tr -d '"' 00:05:48.523 12:33:47 -- app/version.sh@19 -- # patch=0 00:05:48.523 12:33:47 -- app/version.sh@20 -- # get_header_version suffix 00:05:48.523 12:33:47 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:48.523 12:33:47 -- app/version.sh@14 -- # cut -f2 00:05:48.523 12:33:47 -- app/version.sh@14 -- # tr -d '"' 00:05:48.523 12:33:47 -- app/version.sh@20 -- # suffix=-pre 00:05:48.523 12:33:47 -- app/version.sh@22 -- # version=24.5 00:05:48.523 12:33:47 -- app/version.sh@25 -- # (( patch != 0 )) 00:05:48.523 12:33:47 -- app/version.sh@28 -- # version=24.5rc0 00:05:48.523 12:33:47 -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:48.523 12:33:47 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:48.781 12:33:47 -- app/version.sh@30 -- # py_version=24.5rc0 00:05:48.781 12:33:47 -- app/version.sh@31 -- # [[ 24.5rc0 == \2\4\.\5\r\c\0 ]] 00:05:48.781 00:05:48.781 real 0m0.111s 00:05:48.781 user 0m0.055s 00:05:48.781 sys 0m0.079s 00:05:48.781 12:33:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:48.781 12:33:47 -- common/autotest_common.sh@10 -- # set +x 00:05:48.781 ************************************ 00:05:48.781 END TEST version 00:05:48.781 ************************************ 00:05:48.782 12:33:47 -- spdk/autotest.sh@184 -- # '[' 0 -eq 1 ']' 00:05:48.782 12:33:47 -- spdk/autotest.sh@194 -- # uname -s 00:05:48.782 12:33:47 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:05:48.782 12:33:47 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:48.782 12:33:47 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:48.782 12:33:47 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:05:48.782 12:33:47 
-- spdk/autotest.sh@254 -- # '[' 0 -eq 1 ']' 00:05:48.782 12:33:47 -- spdk/autotest.sh@258 -- # timing_exit lib 00:05:48.782 12:33:47 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:48.782 12:33:47 -- common/autotest_common.sh@10 -- # set +x 00:05:48.782 12:33:47 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:05:48.782 12:33:47 -- spdk/autotest.sh@268 -- # '[' 0 -eq 1 ']' 00:05:48.782 12:33:47 -- spdk/autotest.sh@277 -- # '[' 1 -eq 1 ']' 00:05:48.782 12:33:47 -- spdk/autotest.sh@278 -- # export NET_TYPE 00:05:48.782 12:33:47 -- spdk/autotest.sh@281 -- # '[' tcp = rdma ']' 00:05:48.782 12:33:47 -- spdk/autotest.sh@284 -- # '[' tcp = tcp ']' 00:05:48.782 12:33:47 -- spdk/autotest.sh@285 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:48.782 12:33:47 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:05:48.782 12:33:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:48.782 12:33:47 -- common/autotest_common.sh@10 -- # set +x 00:05:48.782 ************************************ 00:05:48.782 START TEST nvmf_tcp 00:05:48.782 ************************************ 00:05:48.782 12:33:47 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:48.782 * Looking for test storage... 00:05:48.782 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:48.782 12:33:47 -- nvmf/nvmf.sh@10 -- # uname -s 00:05:48.782 12:33:47 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:05:48.782 12:33:47 -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:48.782 12:33:47 -- nvmf/common.sh@7 -- # uname -s 00:05:48.782 12:33:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:48.782 12:33:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:48.782 12:33:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:48.782 12:33:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:48.782 12:33:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:48.782 12:33:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:48.782 12:33:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:48.782 12:33:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:48.782 12:33:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:48.782 12:33:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:48.782 12:33:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:05:48.782 12:33:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:05:48.782 12:33:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:48.782 12:33:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:48.782 12:33:47 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:48.782 12:33:47 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:48.782 12:33:47 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:48.782 12:33:47 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:48.782 12:33:47 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:48.782 12:33:47 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:48.782 12:33:47 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:48.782 12:33:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:48.782 12:33:47 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:48.782 12:33:47 -- paths/export.sh@5 -- # export PATH 00:05:48.782 12:33:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:48.782 12:33:47 -- nvmf/common.sh@47 -- # : 0 00:05:48.782 12:33:47 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:48.782 12:33:47 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:48.782 12:33:47 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:48.782 12:33:47 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:48.782 12:33:47 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:48.782 12:33:47 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:48.782 12:33:47 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:48.782 12:33:47 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:48.782 12:33:47 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:05:48.782 12:33:47 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:05:48.782 12:33:47 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:05:48.782 12:33:47 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:48.782 12:33:47 -- common/autotest_common.sh@10 -- # set +x 00:05:48.782 12:33:47 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:05:48.782 12:33:47 -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:05:48.782 12:33:47 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:05:48.782 12:33:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:48.782 12:33:47 -- common/autotest_common.sh@10 -- # set +x 00:05:49.041 ************************************ 00:05:49.041 START TEST nvmf_example 00:05:49.041 ************************************ 00:05:49.041 12:33:47 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:05:49.041 * Looking for test storage... 
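The hostnqn/hostid pair above comes from nvmf/common.sh asking the local NVMe CLI for an identity each time it is sourced. A sketch of that derivation, assuming the gen-hostnqn output format shown in the log:

    # sketch: host identity derivation as seen in nvmf/common.sh@17-18 above
    NVME_HOSTNQN=$(nvme gen-hostnqn)   # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}    # bare UUID, e.g. 8b464f06-2980-e311-ba20-001e67a94acd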
00:05:49.041 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:49.041 12:33:47 -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:49.041 12:33:47 -- nvmf/common.sh@7 -- # uname -s 00:05:49.041 12:33:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:49.041 12:33:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:49.041 12:33:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:49.041 12:33:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:49.041 12:33:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:49.041 12:33:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:49.041 12:33:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:49.041 12:33:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:49.041 12:33:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:49.041 12:33:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:49.041 12:33:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:05:49.041 12:33:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:05:49.041 12:33:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:49.041 12:33:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:49.041 12:33:47 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:49.041 12:33:47 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:49.041 12:33:47 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:49.041 12:33:47 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:49.041 12:33:47 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:49.041 12:33:47 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:49.041 12:33:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:49.041 12:33:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:49.041 12:33:47 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:49.041 12:33:47 -- paths/export.sh@5 -- # export PATH 00:05:49.041 12:33:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:49.041 12:33:47 -- nvmf/common.sh@47 -- # : 0 00:05:49.041 12:33:47 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:49.041 12:33:47 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:49.041 12:33:47 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:49.041 12:33:47 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:49.041 12:33:47 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:49.041 12:33:47 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:49.041 12:33:47 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:49.041 12:33:47 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:49.041 12:33:47 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:05:49.041 12:33:47 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:05:49.041 12:33:47 -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:05:49.041 12:33:47 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:05:49.041 12:33:47 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:05:49.041 12:33:47 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:05:49.041 12:33:47 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:05:49.041 12:33:47 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:05:49.041 12:33:47 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:49.041 12:33:47 -- common/autotest_common.sh@10 -- # set +x 00:05:49.041 12:33:47 -- target/nvmf_example.sh@41 -- # nvmftestinit 00:05:49.041 12:33:47 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:05:49.041 12:33:47 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:49.041 12:33:47 -- nvmf/common.sh@437 -- # prepare_net_devs 00:05:49.041 12:33:47 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:05:49.041 12:33:47 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:05:49.041 12:33:47 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:49.041 12:33:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:05:49.041 12:33:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:49.041 12:33:47 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:05:49.041 12:33:47 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:05:49.041 12:33:47 -- nvmf/common.sh@285 -- # xtrace_disable 00:05:49.041 12:33:47 -- 
common/autotest_common.sh@10 -- # set +x 00:05:51.571 12:33:50 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:05:51.571 12:33:50 -- nvmf/common.sh@291 -- # pci_devs=() 00:05:51.572 12:33:50 -- nvmf/common.sh@291 -- # local -a pci_devs 00:05:51.572 12:33:50 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:05:51.572 12:33:50 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:05:51.572 12:33:50 -- nvmf/common.sh@293 -- # pci_drivers=() 00:05:51.572 12:33:50 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:05:51.572 12:33:50 -- nvmf/common.sh@295 -- # net_devs=() 00:05:51.572 12:33:50 -- nvmf/common.sh@295 -- # local -ga net_devs 00:05:51.572 12:33:50 -- nvmf/common.sh@296 -- # e810=() 00:05:51.572 12:33:50 -- nvmf/common.sh@296 -- # local -ga e810 00:05:51.572 12:33:50 -- nvmf/common.sh@297 -- # x722=() 00:05:51.572 12:33:50 -- nvmf/common.sh@297 -- # local -ga x722 00:05:51.572 12:33:50 -- nvmf/common.sh@298 -- # mlx=() 00:05:51.572 12:33:50 -- nvmf/common.sh@298 -- # local -ga mlx 00:05:51.572 12:33:50 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:51.572 12:33:50 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:51.572 12:33:50 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:51.572 12:33:50 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:51.572 12:33:50 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:51.572 12:33:50 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:51.572 12:33:50 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:51.572 12:33:50 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:51.572 12:33:50 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:51.572 12:33:50 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:51.572 12:33:50 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:51.572 12:33:50 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:05:51.572 12:33:50 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:05:51.572 12:33:50 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:05:51.572 12:33:50 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:05:51.572 12:33:50 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:05:51.572 12:33:50 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:05:51.572 12:33:50 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:51.572 12:33:50 -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:05:51.572 Found 0000:82:00.0 (0x8086 - 0x159b) 00:05:51.572 12:33:50 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:05:51.572 12:33:50 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:05:51.572 12:33:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:51.572 12:33:50 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:51.572 12:33:50 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:05:51.572 12:33:50 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:51.572 12:33:50 -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:05:51.572 Found 0000:82:00.1 (0x8086 - 0x159b) 00:05:51.572 12:33:50 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:05:51.572 12:33:50 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:05:51.572 12:33:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:51.572 12:33:50 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
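The scan above classifies ports purely by PCI vendor:device ID (0x8086:0x159b lands both functions in the e810 array), then resolves each function to its kernel netdev through sysfs, producing the "Found net devices under ..." lines that follow. A sketch of that resolution step, using the same sysfs glob the harness uses:

    # sketch: map an E810 PCI function to its kernel netdev
    pci=0000:82:00.0
    pci_net_devs=(/sys/bus/pci/devices/$pci/net/*)
    echo "Found net devices under $pci: ${pci_net_devs[0]##*/}"   # cvl_0_0 here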
00:05:51.572 12:33:50 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:05:51.572 12:33:50 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:05:51.572 12:33:50 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:05:51.572 12:33:50 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:05:51.572 12:33:50 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:51.572 12:33:50 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:51.572 12:33:50 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:05:51.572 12:33:50 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:51.572 12:33:50 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:05:51.572 Found net devices under 0000:82:00.0: cvl_0_0 00:05:51.572 12:33:50 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:05:51.572 12:33:50 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:51.572 12:33:50 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:51.572 12:33:50 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:05:51.572 12:33:50 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:51.572 12:33:50 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:05:51.572 Found net devices under 0000:82:00.1: cvl_0_1 00:05:51.572 12:33:50 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:05:51.572 12:33:50 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:05:51.572 12:33:50 -- nvmf/common.sh@403 -- # is_hw=yes 00:05:51.572 12:33:50 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:05:51.572 12:33:50 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:05:51.572 12:33:50 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:05:51.572 12:33:50 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:51.572 12:33:50 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:51.572 12:33:50 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:51.572 12:33:50 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:05:51.572 12:33:50 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:51.572 12:33:50 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:51.572 12:33:50 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:05:51.572 12:33:50 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:51.572 12:33:50 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:51.572 12:33:50 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:05:51.572 12:33:50 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:05:51.572 12:33:50 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:05:51.572 12:33:50 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:51.572 12:33:50 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:51.572 12:33:50 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:51.572 12:33:50 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:05:51.572 12:33:50 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:51.572 12:33:50 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:51.572 12:33:50 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:51.572 12:33:50 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:05:51.572 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:05:51.572 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.190 ms 00:05:51.572 00:05:51.572 --- 10.0.0.2 ping statistics --- 00:05:51.572 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:51.572 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:05:51.572 12:33:50 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:51.572 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:51.572 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.108 ms 00:05:51.572 00:05:51.572 --- 10.0.0.1 ping statistics --- 00:05:51.572 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:51.572 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:05:51.572 12:33:50 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:51.572 12:33:50 -- nvmf/common.sh@411 -- # return 0 00:05:51.572 12:33:50 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:05:51.572 12:33:50 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:51.572 12:33:50 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:05:51.572 12:33:50 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:05:51.572 12:33:50 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:51.572 12:33:50 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:05:51.572 12:33:50 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:05:51.572 12:33:50 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:05:51.572 12:33:50 -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:05:51.572 12:33:50 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:51.572 12:33:50 -- common/autotest_common.sh@10 -- # set +x 00:05:51.572 12:33:50 -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:05:51.572 12:33:50 -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:05:51.572 12:33:50 -- target/nvmf_example.sh@34 -- # nvmfpid=1071119 00:05:51.572 12:33:50 -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:05:51.572 12:33:50 -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:05:51.572 12:33:50 -- target/nvmf_example.sh@36 -- # waitforlisten 1071119 00:05:51.572 12:33:50 -- common/autotest_common.sh@817 -- # '[' -z 1071119 ']' 00:05:51.572 12:33:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:51.572 12:33:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:51.572 12:33:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:51.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
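nvmf_tcp_init above turns the two E810 ports into a self-contained loopback bed: the target port moves into a private network namespace as 10.0.0.2, the initiator port stays in the root namespace as 10.0.0.1, and the two pings prove reachability in both directions before the target starts. Condensed from the commands in the log:

    # sketch: the namespace-based TCP test bed built by nvmf/common.sh above
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # target side
    ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # open the nvmf port
    ping -c 1 10.0.0.2                                            # initiator -> target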
00:05:51.573 12:33:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:51.573 12:33:50 -- common/autotest_common.sh@10 -- # set +x 00:05:51.830 EAL: No free 2048 kB hugepages reported on node 1 00:05:52.763 12:33:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:52.763 12:33:51 -- common/autotest_common.sh@850 -- # return 0 00:05:52.763 12:33:51 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:05:52.763 12:33:51 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:52.763 12:33:51 -- common/autotest_common.sh@10 -- # set +x 00:05:52.763 12:33:51 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:05:52.763 12:33:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:52.763 12:33:51 -- common/autotest_common.sh@10 -- # set +x 00:05:52.763 12:33:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:52.763 12:33:51 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:05:52.763 12:33:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:52.763 12:33:51 -- common/autotest_common.sh@10 -- # set +x 00:05:52.763 12:33:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:52.763 12:33:51 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:05:52.763 12:33:51 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:52.763 12:33:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:52.763 12:33:51 -- common/autotest_common.sh@10 -- # set +x 00:05:52.763 12:33:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:52.763 12:33:51 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:05:52.763 12:33:51 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:05:52.763 12:33:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:52.763 12:33:51 -- common/autotest_common.sh@10 -- # set +x 00:05:52.763 12:33:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:52.763 12:33:51 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:52.763 12:33:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:52.763 12:33:51 -- common/autotest_common.sh@10 -- # set +x 00:05:52.763 12:33:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:52.763 12:33:51 -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:05:52.763 12:33:51 -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:05:52.763 EAL: No free 2048 kB hugepages reported on node 1 00:06:04.959 Initializing NVMe Controllers 00:06:04.959 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:04.959 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:06:04.959 Initialization complete. Launching workers. 
00:06:04.959 ======================================================== 00:06:04.959 Latency(us) 00:06:04.959 Device Information : IOPS MiB/s Average min max 00:06:04.959 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14647.40 57.22 4370.44 862.21 16269.33 00:06:04.959 ======================================================== 00:06:04.959 Total : 14647.40 57.22 4370.44 862.21 16269.33 00:06:04.959 00:06:04.959 12:34:01 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:06:04.959 12:34:01 -- target/nvmf_example.sh@66 -- # nvmftestfini 00:06:04.959 12:34:01 -- nvmf/common.sh@477 -- # nvmfcleanup 00:06:04.959 12:34:01 -- nvmf/common.sh@117 -- # sync 00:06:04.959 12:34:01 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:04.959 12:34:01 -- nvmf/common.sh@120 -- # set +e 00:06:04.959 12:34:01 -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:04.959 12:34:01 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:04.959 rmmod nvme_tcp 00:06:04.959 rmmod nvme_fabrics 00:06:04.959 rmmod nvme_keyring 00:06:04.959 12:34:01 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:04.959 12:34:01 -- nvmf/common.sh@124 -- # set -e 00:06:04.959 12:34:01 -- nvmf/common.sh@125 -- # return 0 00:06:04.959 12:34:01 -- nvmf/common.sh@478 -- # '[' -n 1071119 ']' 00:06:04.959 12:34:01 -- nvmf/common.sh@479 -- # killprocess 1071119 00:06:04.959 12:34:01 -- common/autotest_common.sh@936 -- # '[' -z 1071119 ']' 00:06:04.959 12:34:01 -- common/autotest_common.sh@940 -- # kill -0 1071119 00:06:04.959 12:34:01 -- common/autotest_common.sh@941 -- # uname 00:06:04.959 12:34:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:04.959 12:34:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1071119 00:06:04.959 12:34:02 -- common/autotest_common.sh@942 -- # process_name=nvmf 00:06:04.959 12:34:02 -- common/autotest_common.sh@946 -- # '[' nvmf = sudo ']' 00:06:04.959 12:34:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1071119' 00:06:04.959 killing process with pid 1071119 00:06:04.959 12:34:02 -- common/autotest_common.sh@955 -- # kill 1071119 00:06:04.959 12:34:02 -- common/autotest_common.sh@960 -- # wait 1071119 00:06:04.959 nvmf threads initialize successfully 00:06:04.959 bdev subsystem init successfully 00:06:04.959 created a nvmf target service 00:06:04.959 create targets's poll groups done 00:06:04.959 all subsystems of target started 00:06:04.959 nvmf target is running 00:06:04.959 all subsystems of target stopped 00:06:04.959 destroy targets's poll groups done 00:06:04.959 destroyed the nvmf target service 00:06:04.959 bdev subsystem finish successfully 00:06:04.959 nvmf threads destroy successfully 00:06:04.959 12:34:02 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:06:04.959 12:34:02 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:06:04.959 12:34:02 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:06:04.959 12:34:02 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:04.959 12:34:02 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:04.959 12:34:02 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:04.959 12:34:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:04.959 12:34:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:05.546 12:34:04 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:05.546 12:34:04 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:06:05.546 12:34:04 -- 
common/autotest_common.sh@716 -- # xtrace_disable 00:06:05.546 12:34:04 -- common/autotest_common.sh@10 -- # set +x 00:06:05.546 00:06:05.546 real 0m16.421s 00:06:05.546 user 0m45.355s 00:06:05.546 sys 0m3.811s 00:06:05.546 12:34:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:05.546 12:34:04 -- common/autotest_common.sh@10 -- # set +x 00:06:05.546 ************************************ 00:06:05.546 END TEST nvmf_example 00:06:05.546 ************************************ 00:06:05.546 12:34:04 -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:06:05.546 12:34:04 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:05.546 12:34:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:05.546 12:34:04 -- common/autotest_common.sh@10 -- # set +x 00:06:05.546 ************************************ 00:06:05.546 START TEST nvmf_filesystem 00:06:05.546 ************************************ 00:06:05.546 12:34:04 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:06:05.546 * Looking for test storage... 00:06:05.546 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:05.546 12:34:04 -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:06:05.546 12:34:04 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:06:05.546 12:34:04 -- common/autotest_common.sh@34 -- # set -e 00:06:05.546 12:34:04 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:06:05.546 12:34:04 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:06:05.546 12:34:04 -- common/autotest_common.sh@38 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:06:05.546 12:34:04 -- common/autotest_common.sh@43 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:06:05.546 12:34:04 -- common/autotest_common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:06:05.546 12:34:04 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:05.546 12:34:04 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:06:05.546 12:34:04 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:05.546 12:34:04 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:05.546 12:34:04 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:06:05.546 12:34:04 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:05.546 12:34:04 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:05.546 12:34:04 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:05.546 12:34:04 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:05.546 12:34:04 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:05.546 12:34:04 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:05.546 12:34:04 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:05.546 12:34:04 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:05.546 12:34:04 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:05.546 12:34:04 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:05.546 12:34:04 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:05.546 12:34:04 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:06:05.546 12:34:04 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:05.546 12:34:04 -- 
common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:06:05.546 12:34:04 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:06:05.546 12:34:04 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:06:05.546 12:34:04 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:06:05.546 12:34:04 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:05.546 12:34:04 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:06:05.546 12:34:04 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:06:05.546 12:34:04 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:05.546 12:34:04 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:05.546 12:34:04 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:06:05.546 12:34:04 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:06:05.546 12:34:04 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:06:05.546 12:34:04 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:06:05.546 12:34:04 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:06:05.546 12:34:04 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:06:05.546 12:34:04 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:06:05.546 12:34:04 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:06:05.546 12:34:04 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:06:05.546 12:34:04 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:06:05.546 12:34:04 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:06:05.546 12:34:04 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:06:05.546 12:34:04 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:06:05.546 12:34:04 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:06:05.546 12:34:04 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:06:05.546 12:34:04 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:06:05.546 12:34:04 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:05.546 12:34:04 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:06:05.546 12:34:04 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:06:05.546 12:34:04 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:06:05.546 12:34:04 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:05.546 12:34:04 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:06:05.546 12:34:04 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:06:05.546 12:34:04 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=y 00:06:05.546 12:34:04 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:06:05.546 12:34:04 -- common/build_config.sh@53 -- # CONFIG_HAVE_EVP_MAC=y 00:06:05.546 12:34:04 -- common/build_config.sh@54 -- # CONFIG_URING_ZNS=n 00:06:05.546 12:34:04 -- common/build_config.sh@55 -- # CONFIG_WERROR=y 00:06:05.546 12:34:04 -- common/build_config.sh@56 -- # CONFIG_HAVE_LIBBSD=n 00:06:05.546 12:34:04 -- common/build_config.sh@57 -- # CONFIG_UBSAN=y 00:06:05.546 12:34:04 -- common/build_config.sh@58 -- # CONFIG_IPSEC_MB_DIR= 00:06:05.546 12:34:04 -- common/build_config.sh@59 -- # CONFIG_GOLANG=n 00:06:05.546 12:34:04 -- common/build_config.sh@60 -- # CONFIG_ISAL=y 00:06:05.546 12:34:04 -- common/build_config.sh@61 -- # CONFIG_IDXD_KERNEL=n 00:06:05.546 12:34:04 -- common/build_config.sh@62 -- # CONFIG_DPDK_LIB_DIR= 00:06:05.546 12:34:04 -- common/build_config.sh@63 -- # CONFIG_RDMA_PROV=verbs 00:06:05.546 12:34:04 -- common/build_config.sh@64 -- # CONFIG_APPS=y 00:06:05.546 
12:34:04 -- common/build_config.sh@65 -- # CONFIG_SHARED=y 00:06:05.546 12:34:04 -- common/build_config.sh@66 -- # CONFIG_HAVE_KEYUTILS=n 00:06:05.546 12:34:04 -- common/build_config.sh@67 -- # CONFIG_FC_PATH= 00:06:05.546 12:34:04 -- common/build_config.sh@68 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:05.546 12:34:04 -- common/build_config.sh@69 -- # CONFIG_FC=n 00:06:05.546 12:34:04 -- common/build_config.sh@70 -- # CONFIG_AVAHI=n 00:06:05.546 12:34:04 -- common/build_config.sh@71 -- # CONFIG_FIO_PLUGIN=y 00:06:05.546 12:34:04 -- common/build_config.sh@72 -- # CONFIG_RAID5F=n 00:06:05.546 12:34:04 -- common/build_config.sh@73 -- # CONFIG_EXAMPLES=y 00:06:05.546 12:34:04 -- common/build_config.sh@74 -- # CONFIG_TESTS=y 00:06:05.546 12:34:04 -- common/build_config.sh@75 -- # CONFIG_CRYPTO_MLX5=n 00:06:05.546 12:34:04 -- common/build_config.sh@76 -- # CONFIG_MAX_LCORES= 00:06:05.546 12:34:04 -- common/build_config.sh@77 -- # CONFIG_IPSEC_MB=n 00:06:05.546 12:34:04 -- common/build_config.sh@78 -- # CONFIG_PGO_DIR= 00:06:05.546 12:34:04 -- common/build_config.sh@79 -- # CONFIG_DEBUG=y 00:06:05.546 12:34:04 -- common/build_config.sh@80 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:05.546 12:34:04 -- common/build_config.sh@81 -- # CONFIG_CROSS_PREFIX= 00:06:05.546 12:34:04 -- common/build_config.sh@82 -- # CONFIG_URING=n 00:06:05.546 12:34:04 -- common/autotest_common.sh@53 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:06:05.546 12:34:04 -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:06:05.546 12:34:04 -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:06:05.546 12:34:04 -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:06:05.546 12:34:04 -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:05.546 12:34:04 -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:05.547 12:34:04 -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:05.547 12:34:04 -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:05.547 12:34:04 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:06:05.547 12:34:04 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:06:05.547 12:34:04 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:06:05.547 12:34:04 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:06:05.547 12:34:04 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:06:05.547 12:34:04 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:06:05.547 12:34:04 -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:06:05.547 12:34:04 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:06:05.547 #define SPDK_CONFIG_H 00:06:05.547 #define SPDK_CONFIG_APPS 1 00:06:05.547 #define SPDK_CONFIG_ARCH native 00:06:05.547 #undef SPDK_CONFIG_ASAN 00:06:05.547 #undef SPDK_CONFIG_AVAHI 00:06:05.547 #undef SPDK_CONFIG_CET 00:06:05.547 #define SPDK_CONFIG_COVERAGE 1 00:06:05.547 #define SPDK_CONFIG_CROSS_PREFIX 00:06:05.547 #undef SPDK_CONFIG_CRYPTO 00:06:05.547 #undef SPDK_CONFIG_CRYPTO_MLX5 00:06:05.547 #undef 
SPDK_CONFIG_CUSTOMOCF 00:06:05.547 #undef SPDK_CONFIG_DAOS 00:06:05.547 #define SPDK_CONFIG_DAOS_DIR 00:06:05.547 #define SPDK_CONFIG_DEBUG 1 00:06:05.547 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:06:05.547 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:06:05.547 #define SPDK_CONFIG_DPDK_INC_DIR 00:06:05.547 #define SPDK_CONFIG_DPDK_LIB_DIR 00:06:05.547 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:06:05.547 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:06:05.547 #define SPDK_CONFIG_EXAMPLES 1 00:06:05.547 #undef SPDK_CONFIG_FC 00:06:05.547 #define SPDK_CONFIG_FC_PATH 00:06:05.547 #define SPDK_CONFIG_FIO_PLUGIN 1 00:06:05.547 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:06:05.547 #undef SPDK_CONFIG_FUSE 00:06:05.547 #undef SPDK_CONFIG_FUZZER 00:06:05.547 #define SPDK_CONFIG_FUZZER_LIB 00:06:05.547 #undef SPDK_CONFIG_GOLANG 00:06:05.547 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:06:05.547 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:06:05.547 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:06:05.547 #undef SPDK_CONFIG_HAVE_KEYUTILS 00:06:05.547 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:06:05.547 #undef SPDK_CONFIG_HAVE_LIBBSD 00:06:05.547 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:06:05.547 #define SPDK_CONFIG_IDXD 1 00:06:05.547 #undef SPDK_CONFIG_IDXD_KERNEL 00:06:05.547 #undef SPDK_CONFIG_IPSEC_MB 00:06:05.547 #define SPDK_CONFIG_IPSEC_MB_DIR 00:06:05.547 #define SPDK_CONFIG_ISAL 1 00:06:05.547 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:06:05.547 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:06:05.547 #define SPDK_CONFIG_LIBDIR 00:06:05.547 #undef SPDK_CONFIG_LTO 00:06:05.547 #define SPDK_CONFIG_MAX_LCORES 00:06:05.547 #define SPDK_CONFIG_NVME_CUSE 1 00:06:05.547 #undef SPDK_CONFIG_OCF 00:06:05.547 #define SPDK_CONFIG_OCF_PATH 00:06:05.547 #define SPDK_CONFIG_OPENSSL_PATH 00:06:05.547 #undef SPDK_CONFIG_PGO_CAPTURE 00:06:05.547 #define SPDK_CONFIG_PGO_DIR 00:06:05.547 #undef SPDK_CONFIG_PGO_USE 00:06:05.547 #define SPDK_CONFIG_PREFIX /usr/local 00:06:05.547 #undef SPDK_CONFIG_RAID5F 00:06:05.547 #undef SPDK_CONFIG_RBD 00:06:05.547 #define SPDK_CONFIG_RDMA 1 00:06:05.547 #define SPDK_CONFIG_RDMA_PROV verbs 00:06:05.547 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:06:05.547 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:06:05.547 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:06:05.547 #define SPDK_CONFIG_SHARED 1 00:06:05.547 #undef SPDK_CONFIG_SMA 00:06:05.547 #define SPDK_CONFIG_TESTS 1 00:06:05.547 #undef SPDK_CONFIG_TSAN 00:06:05.547 #define SPDK_CONFIG_UBLK 1 00:06:05.547 #define SPDK_CONFIG_UBSAN 1 00:06:05.547 #undef SPDK_CONFIG_UNIT_TESTS 00:06:05.547 #undef SPDK_CONFIG_URING 00:06:05.547 #define SPDK_CONFIG_URING_PATH 00:06:05.547 #undef SPDK_CONFIG_URING_ZNS 00:06:05.547 #undef SPDK_CONFIG_USDT 00:06:05.547 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:06:05.547 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:06:05.547 #define SPDK_CONFIG_VFIO_USER 1 00:06:05.547 #define SPDK_CONFIG_VFIO_USER_DIR 00:06:05.547 #define SPDK_CONFIG_VHOST 1 00:06:05.547 #define SPDK_CONFIG_VIRTIO 1 00:06:05.547 #undef SPDK_CONFIG_VTUNE 00:06:05.547 #define SPDK_CONFIG_VTUNE_DIR 00:06:05.547 #define SPDK_CONFIG_WERROR 1 00:06:05.547 #define SPDK_CONFIG_WPDK_DIR 00:06:05.547 #undef SPDK_CONFIG_XNVME 00:06:05.547 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:06:05.547 12:34:04 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:06:05.547 12:34:04 -- common/autotest_common.sh@54 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:05.547 12:34:04 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:05.547 12:34:04 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:05.547 12:34:04 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:05.547 12:34:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:05.547 12:34:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:05.547 12:34:04 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:05.547 12:34:04 -- paths/export.sh@5 -- # export PATH 00:06:05.547 12:34:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:05.547 12:34:04 -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:06:05.547 12:34:04 -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:06:05.547 12:34:04 -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:06:05.547 12:34:04 -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:06:05.547 12:34:04 -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:06:05.547 12:34:04 -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:05.547 12:34:04 -- pm/common@67 -- # TEST_TAG=N/A 00:06:05.547 12:34:04 -- pm/common@68 -- # 
TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:06:05.547 12:34:04 -- pm/common@70 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:06:05.547 12:34:04 -- pm/common@71 -- # uname -s 00:06:05.547 12:34:04 -- pm/common@71 -- # PM_OS=Linux 00:06:05.547 12:34:04 -- pm/common@73 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:06:05.547 12:34:04 -- pm/common@74 -- # [[ Linux == FreeBSD ]] 00:06:05.547 12:34:04 -- pm/common@76 -- # [[ Linux == Linux ]] 00:06:05.547 12:34:04 -- pm/common@76 -- # [[ ............................... != QEMU ]] 00:06:05.547 12:34:04 -- pm/common@76 -- # [[ ! -e /.dockerenv ]] 00:06:05.547 12:34:04 -- pm/common@79 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:06:05.547 12:34:04 -- pm/common@80 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:06:05.547 12:34:04 -- pm/common@83 -- # MONITOR_RESOURCES_PIDS=() 00:06:05.547 12:34:04 -- pm/common@83 -- # declare -A MONITOR_RESOURCES_PIDS 00:06:05.547 12:34:04 -- pm/common@85 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:06:05.547 12:34:04 -- common/autotest_common.sh@57 -- # : 0 00:06:05.547 12:34:04 -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:06:05.547 12:34:04 -- common/autotest_common.sh@61 -- # : 0 00:06:05.547 12:34:04 -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:06:05.547 12:34:04 -- common/autotest_common.sh@63 -- # : 0 00:06:05.547 12:34:04 -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:06:05.547 12:34:04 -- common/autotest_common.sh@65 -- # : 1 00:06:05.547 12:34:04 -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:06:05.547 12:34:04 -- common/autotest_common.sh@67 -- # : 0 00:06:05.547 12:34:04 -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:06:05.547 12:34:04 -- common/autotest_common.sh@69 -- # : 00:06:05.547 12:34:04 -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:06:05.547 12:34:04 -- common/autotest_common.sh@71 -- # : 0 00:06:05.547 12:34:04 -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:06:05.547 12:34:04 -- common/autotest_common.sh@73 -- # : 0 00:06:05.547 12:34:04 -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:06:05.547 12:34:04 -- common/autotest_common.sh@75 -- # : 0 00:06:05.547 12:34:04 -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:06:05.547 12:34:04 -- common/autotest_common.sh@77 -- # : 0 00:06:05.547 12:34:04 -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:06:05.547 12:34:04 -- common/autotest_common.sh@79 -- # : 0 00:06:05.548 12:34:04 -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:06:05.548 12:34:04 -- common/autotest_common.sh@81 -- # : 0 00:06:05.548 12:34:04 -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:06:05.548 12:34:04 -- common/autotest_common.sh@83 -- # : 0 00:06:05.548 12:34:04 -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:06:05.548 12:34:04 -- common/autotest_common.sh@85 -- # : 1 00:06:05.548 12:34:04 -- common/autotest_common.sh@86 -- # export SPDK_TEST_NVME_CLI 00:06:05.548 12:34:04 -- common/autotest_common.sh@87 -- # : 0 00:06:05.548 12:34:04 -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:06:05.548 12:34:04 -- common/autotest_common.sh@89 -- # : 0 00:06:05.548 12:34:04 -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:06:05.548 12:34:04 -- common/autotest_common.sh@91 -- # : 1 
00:06:05.548 12:34:04 -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:06:05.548 12:34:04 -- common/autotest_common.sh@93 -- # : 1 00:06:05.548 12:34:04 -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:06:05.548 12:34:04 -- common/autotest_common.sh@95 -- # : 0 00:06:05.548 12:34:04 -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:06:05.548 12:34:04 -- common/autotest_common.sh@97 -- # : 0 00:06:05.548 12:34:04 -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:06:05.548 12:34:04 -- common/autotest_common.sh@99 -- # : 0 00:06:05.548 12:34:04 -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:06:05.548 12:34:04 -- common/autotest_common.sh@101 -- # : tcp 00:06:05.548 12:34:04 -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:06:05.548 12:34:04 -- common/autotest_common.sh@103 -- # : 0 00:06:05.548 12:34:04 -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:06:05.548 12:34:04 -- common/autotest_common.sh@105 -- # : 0 00:06:05.548 12:34:04 -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:06:05.548 12:34:04 -- common/autotest_common.sh@107 -- # : 0 00:06:05.548 12:34:04 -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:06:05.548 12:34:04 -- common/autotest_common.sh@109 -- # : 0 00:06:05.548 12:34:04 -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:06:05.548 12:34:04 -- common/autotest_common.sh@111 -- # : 0 00:06:05.548 12:34:04 -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:06:05.548 12:34:04 -- common/autotest_common.sh@113 -- # : 0 00:06:05.548 12:34:04 -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:06:05.548 12:34:04 -- common/autotest_common.sh@115 -- # : 0 00:06:05.548 12:34:04 -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:06:05.548 12:34:04 -- common/autotest_common.sh@117 -- # : 0 00:06:05.548 12:34:04 -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:06:05.548 12:34:04 -- common/autotest_common.sh@119 -- # : 0 00:06:05.548 12:34:04 -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:06:05.548 12:34:04 -- common/autotest_common.sh@121 -- # : 1 00:06:05.548 12:34:04 -- common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:06:05.548 12:34:04 -- common/autotest_common.sh@123 -- # : 00:06:05.548 12:34:04 -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:06:05.548 12:34:04 -- common/autotest_common.sh@125 -- # : 0 00:06:05.548 12:34:04 -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:06:05.548 12:34:04 -- common/autotest_common.sh@127 -- # : 0 00:06:05.548 12:34:04 -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:06:05.548 12:34:04 -- common/autotest_common.sh@129 -- # : 0 00:06:05.548 12:34:04 -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:06:05.548 12:34:04 -- common/autotest_common.sh@131 -- # : 0 00:06:05.548 12:34:04 -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 00:06:05.548 12:34:04 -- common/autotest_common.sh@133 -- # : 0 00:06:05.548 12:34:04 -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:06:05.548 12:34:04 -- common/autotest_common.sh@135 -- # : 0 00:06:05.548 12:34:04 -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:06:05.548 12:34:04 -- common/autotest_common.sh@137 -- # : 00:06:05.548 12:34:04 -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:06:05.548 12:34:04 -- 
common/autotest_common.sh@139 -- # : true 00:06:05.548 12:34:04 -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:06:05.548 12:34:04 -- common/autotest_common.sh@141 -- # : 0 00:06:05.548 12:34:04 -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:06:05.548 12:34:04 -- common/autotest_common.sh@143 -- # : 0 00:06:05.548 12:34:04 -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:06:05.548 12:34:04 -- common/autotest_common.sh@145 -- # : 0 00:06:05.548 12:34:04 -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:06:05.548 12:34:04 -- common/autotest_common.sh@147 -- # : 0 00:06:05.548 12:34:04 -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:06:05.548 12:34:04 -- common/autotest_common.sh@149 -- # : 0 00:06:05.548 12:34:04 -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:06:05.548 12:34:04 -- common/autotest_common.sh@151 -- # : 0 00:06:05.548 12:34:04 -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:06:05.548 12:34:04 -- common/autotest_common.sh@153 -- # : e810 00:06:05.548 12:34:04 -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:06:05.548 12:34:04 -- common/autotest_common.sh@155 -- # : 0 00:06:05.548 12:34:04 -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:06:05.548 12:34:04 -- common/autotest_common.sh@157 -- # : 0 00:06:05.548 12:34:04 -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:06:05.548 12:34:04 -- common/autotest_common.sh@159 -- # : 0 00:06:05.548 12:34:04 -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:06:05.548 12:34:04 -- common/autotest_common.sh@161 -- # : 0 00:06:05.548 12:34:04 -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:06:05.548 12:34:04 -- common/autotest_common.sh@163 -- # : 0 00:06:05.548 12:34:04 -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:06:05.548 12:34:04 -- common/autotest_common.sh@166 -- # : 00:06:05.548 12:34:04 -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:06:05.548 12:34:04 -- common/autotest_common.sh@168 -- # : 0 00:06:05.548 12:34:04 -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:06:05.548 12:34:04 -- common/autotest_common.sh@170 -- # : 0 00:06:05.548 12:34:04 -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:06:05.548 12:34:04 -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:06:05.548 12:34:04 -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:06:05.548 12:34:04 -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:06:05.548 12:34:04 -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:06:05.548 12:34:04 -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:05.548 12:34:04 -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:05.548 12:34:04 -- common/autotest_common.sh@177 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:05.548 12:34:04 -- common/autotest_common.sh@177 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:05.548 12:34:04 -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:06:05.548 12:34:04 -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:06:05.548 12:34:04 -- common/autotest_common.sh@184 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:05.548 12:34:04 -- common/autotest_common.sh@184 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:05.548 12:34:04 -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:06:05.548 12:34:04 -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:06:05.548 12:34:04 -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:05.548 12:34:04 -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:05.548 12:34:04 -- common/autotest_common.sh@193 -- # export 
UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:05.548 12:34:04 -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:05.548 12:34:04 -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:06:05.548 12:34:04 -- common/autotest_common.sh@198 -- # rm -rf /var/tmp/asan_suppression_file 00:06:05.548 12:34:04 -- common/autotest_common.sh@199 -- # cat 00:06:05.548 12:34:04 -- common/autotest_common.sh@225 -- # echo leak:libfuse3.so 00:06:05.548 12:34:04 -- common/autotest_common.sh@227 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:05.548 12:34:04 -- common/autotest_common.sh@227 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:05.548 12:34:04 -- common/autotest_common.sh@229 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:05.548 12:34:04 -- common/autotest_common.sh@229 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:05.549 12:34:04 -- common/autotest_common.sh@231 -- # '[' -z /var/spdk/dependencies ']' 00:06:05.549 12:34:04 -- common/autotest_common.sh@234 -- # export DEPENDENCY_DIR 00:06:05.549 12:34:04 -- common/autotest_common.sh@238 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:05.549 12:34:04 -- common/autotest_common.sh@238 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:05.549 12:34:04 -- common/autotest_common.sh@239 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:05.549 12:34:04 -- common/autotest_common.sh@239 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:05.549 12:34:04 -- common/autotest_common.sh@242 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:05.549 12:34:04 -- common/autotest_common.sh@242 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:05.549 12:34:04 -- common/autotest_common.sh@243 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:05.549 12:34:04 -- common/autotest_common.sh@243 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:05.549 12:34:04 -- common/autotest_common.sh@245 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:05.549 12:34:04 -- common/autotest_common.sh@245 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:05.549 12:34:04 -- common/autotest_common.sh@248 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:05.549 12:34:04 -- common/autotest_common.sh@248 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:05.549 12:34:04 -- common/autotest_common.sh@251 -- # '[' 0 -eq 0 ']' 00:06:05.549 12:34:04 -- common/autotest_common.sh@252 -- # export valgrind= 00:06:05.549 12:34:04 -- common/autotest_common.sh@252 -- # valgrind= 00:06:05.549 12:34:04 -- common/autotest_common.sh@258 -- # uname -s 00:06:05.549 12:34:04 -- common/autotest_common.sh@258 -- # '[' Linux = Linux ']' 00:06:05.549 12:34:04 -- common/autotest_common.sh@259 -- # HUGEMEM=4096 00:06:05.549 12:34:04 -- common/autotest_common.sh@260 -- # export CLEAR_HUGE=yes 00:06:05.549 12:34:04 -- common/autotest_common.sh@260 -- # CLEAR_HUGE=yes 00:06:05.549 12:34:04 -- common/autotest_common.sh@261 -- # [[ 0 -eq 1 ]] 00:06:05.549 12:34:04 -- common/autotest_common.sh@261 -- # [[ 0 -eq 1 ]] 00:06:05.549 
12:34:04 -- common/autotest_common.sh@268 -- # MAKE=make 00:06:05.549 12:34:04 -- common/autotest_common.sh@269 -- # MAKEFLAGS=-j48 00:06:05.549 12:34:04 -- common/autotest_common.sh@285 -- # export HUGEMEM=4096 00:06:05.549 12:34:04 -- common/autotest_common.sh@285 -- # HUGEMEM=4096 00:06:05.549 12:34:04 -- common/autotest_common.sh@287 -- # NO_HUGE=() 00:06:05.549 12:34:04 -- common/autotest_common.sh@288 -- # TEST_MODE= 00:06:05.549 12:34:04 -- common/autotest_common.sh@289 -- # for i in "$@" 00:06:05.549 12:34:04 -- common/autotest_common.sh@290 -- # case "$i" in 00:06:05.549 12:34:04 -- common/autotest_common.sh@295 -- # TEST_TRANSPORT=tcp 00:06:05.549 12:34:04 -- common/autotest_common.sh@307 -- # [[ -z 1072835 ]] 00:06:05.549 12:34:04 -- common/autotest_common.sh@307 -- # kill -0 1072835 00:06:05.549 12:34:04 -- common/autotest_common.sh@1666 -- # set_test_storage 2147483648 00:06:05.549 12:34:04 -- common/autotest_common.sh@317 -- # [[ -v testdir ]] 00:06:05.549 12:34:04 -- common/autotest_common.sh@319 -- # local requested_size=2147483648 00:06:05.549 12:34:04 -- common/autotest_common.sh@320 -- # local mount target_dir 00:06:05.549 12:34:04 -- common/autotest_common.sh@322 -- # local -A mounts fss sizes avails uses 00:06:05.549 12:34:04 -- common/autotest_common.sh@323 -- # local source fs size avail mount use 00:06:05.549 12:34:04 -- common/autotest_common.sh@325 -- # local storage_fallback storage_candidates 00:06:05.549 12:34:04 -- common/autotest_common.sh@327 -- # mktemp -udt spdk.XXXXXX 00:06:05.549 12:34:04 -- common/autotest_common.sh@327 -- # storage_fallback=/tmp/spdk.SRmQL4 00:06:05.549 12:34:04 -- common/autotest_common.sh@332 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:06:05.549 12:34:04 -- common/autotest_common.sh@334 -- # [[ -n '' ]] 00:06:05.549 12:34:04 -- common/autotest_common.sh@339 -- # [[ -n '' ]] 00:06:05.549 12:34:04 -- common/autotest_common.sh@344 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.SRmQL4/tests/target /tmp/spdk.SRmQL4 00:06:05.549 12:34:04 -- common/autotest_common.sh@347 -- # requested_size=2214592512 00:06:05.549 12:34:04 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:06:05.549 12:34:04 -- common/autotest_common.sh@316 -- # df -T 00:06:05.549 12:34:04 -- common/autotest_common.sh@316 -- # grep -v Filesystem 00:06:05.549 12:34:04 -- common/autotest_common.sh@350 -- # mounts["$mount"]=spdk_devtmpfs 00:06:05.549 12:34:04 -- common/autotest_common.sh@350 -- # fss["$mount"]=devtmpfs 00:06:05.549 12:34:04 -- common/autotest_common.sh@351 -- # avails["$mount"]=67108864 00:06:05.549 12:34:04 -- common/autotest_common.sh@351 -- # sizes["$mount"]=67108864 00:06:05.549 12:34:04 -- common/autotest_common.sh@352 -- # uses["$mount"]=0 00:06:05.549 12:34:04 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:06:05.549 12:34:04 -- common/autotest_common.sh@350 -- # mounts["$mount"]=/dev/pmem0 00:06:05.807 12:34:04 -- common/autotest_common.sh@350 -- # fss["$mount"]=ext2 00:06:05.807 12:34:04 -- common/autotest_common.sh@351 -- # avails["$mount"]=996499456 00:06:05.807 12:34:04 -- common/autotest_common.sh@351 -- # sizes["$mount"]=5284429824 00:06:05.808 12:34:04 -- common/autotest_common.sh@352 -- # uses["$mount"]=4287930368 00:06:05.808 12:34:04 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:06:05.808 12:34:04 -- common/autotest_common.sh@350 -- # 
mounts["$mount"]=spdk_root 00:06:05.808 12:34:04 -- common/autotest_common.sh@350 -- # fss["$mount"]=overlay 00:06:05.808 12:34:04 -- common/autotest_common.sh@351 -- # avails["$mount"]=48443219968 00:06:05.808 12:34:04 -- common/autotest_common.sh@351 -- # sizes["$mount"]=61994729472 00:06:05.808 12:34:04 -- common/autotest_common.sh@352 -- # uses["$mount"]=13551509504 00:06:05.808 12:34:04 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:06:05.808 12:34:04 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:06:05.808 12:34:04 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:06:05.808 12:34:04 -- common/autotest_common.sh@351 -- # avails["$mount"]=30994751488 00:06:05.808 12:34:04 -- common/autotest_common.sh@351 -- # sizes["$mount"]=30997364736 00:06:05.808 12:34:04 -- common/autotest_common.sh@352 -- # uses["$mount"]=2613248 00:06:05.808 12:34:04 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:06:05.808 12:34:04 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:06:05.808 12:34:04 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:06:05.808 12:34:04 -- common/autotest_common.sh@351 -- # avails["$mount"]=12389969920 00:06:05.808 12:34:04 -- common/autotest_common.sh@351 -- # sizes["$mount"]=12398948352 00:06:05.808 12:34:04 -- common/autotest_common.sh@352 -- # uses["$mount"]=8978432 00:06:05.808 12:34:04 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:06:05.808 12:34:04 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:06:05.808 12:34:04 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:06:05.808 12:34:04 -- common/autotest_common.sh@351 -- # avails["$mount"]=30996586496 00:06:05.808 12:34:04 -- common/autotest_common.sh@351 -- # sizes["$mount"]=30997364736 00:06:05.808 12:34:04 -- common/autotest_common.sh@352 -- # uses["$mount"]=778240 00:06:05.808 12:34:04 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:06:05.808 12:34:04 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:06:05.808 12:34:04 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:06:05.808 12:34:04 -- common/autotest_common.sh@351 -- # avails["$mount"]=6199468032 00:06:05.808 12:34:04 -- common/autotest_common.sh@351 -- # sizes["$mount"]=6199472128 00:06:05.808 12:34:04 -- common/autotest_common.sh@352 -- # uses["$mount"]=4096 00:06:05.808 12:34:04 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:06:05.808 12:34:04 -- common/autotest_common.sh@355 -- # printf '* Looking for test storage...\n' 00:06:05.808 * Looking for test storage... 
00:06:05.808 12:34:04 -- common/autotest_common.sh@357 -- # local target_space new_size 00:06:05.808 12:34:04 -- common/autotest_common.sh@358 -- # for target_dir in "${storage_candidates[@]}" 00:06:05.808 12:34:04 -- common/autotest_common.sh@361 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:05.808 12:34:04 -- common/autotest_common.sh@361 -- # awk '$1 !~ /Filesystem/{print $6}' 00:06:05.808 12:34:04 -- common/autotest_common.sh@361 -- # mount=/ 00:06:05.808 12:34:04 -- common/autotest_common.sh@363 -- # target_space=48443219968 00:06:05.808 12:34:04 -- common/autotest_common.sh@364 -- # (( target_space == 0 || target_space < requested_size )) 00:06:05.808 12:34:04 -- common/autotest_common.sh@367 -- # (( target_space >= requested_size )) 00:06:05.808 12:34:04 -- common/autotest_common.sh@369 -- # [[ overlay == tmpfs ]] 00:06:05.808 12:34:04 -- common/autotest_common.sh@369 -- # [[ overlay == ramfs ]] 00:06:05.808 12:34:04 -- common/autotest_common.sh@369 -- # [[ / == / ]] 00:06:05.808 12:34:04 -- common/autotest_common.sh@370 -- # new_size=15766102016 00:06:05.808 12:34:04 -- common/autotest_common.sh@371 -- # (( new_size * 100 / sizes[/] > 95 )) 00:06:05.808 12:34:04 -- common/autotest_common.sh@376 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:05.808 12:34:04 -- common/autotest_common.sh@376 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:05.808 12:34:04 -- common/autotest_common.sh@377 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:05.808 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:05.808 12:34:04 -- common/autotest_common.sh@378 -- # return 0 00:06:05.808 12:34:04 -- common/autotest_common.sh@1668 -- # set -o errtrace 00:06:05.808 12:34:04 -- common/autotest_common.sh@1669 -- # shopt -s extdebug 00:06:05.808 12:34:04 -- common/autotest_common.sh@1670 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:06:05.808 12:34:04 -- common/autotest_common.sh@1672 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:06:05.808 12:34:04 -- common/autotest_common.sh@1673 -- # true 00:06:05.808 12:34:04 -- common/autotest_common.sh@1675 -- # xtrace_fd 00:06:05.808 12:34:04 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:06:05.808 12:34:04 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:06:05.808 12:34:04 -- common/autotest_common.sh@27 -- # exec 00:06:05.808 12:34:04 -- common/autotest_common.sh@29 -- # exec 00:06:05.808 12:34:04 -- common/autotest_common.sh@31 -- # xtrace_restore 00:06:05.808 12:34:04 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:06:05.808 12:34:04 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:06:05.808 12:34:04 -- common/autotest_common.sh@18 -- # set -x 00:06:05.808 12:34:04 -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:05.808 12:34:04 -- nvmf/common.sh@7 -- # uname -s 00:06:05.808 12:34:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:05.808 12:34:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:05.808 12:34:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:05.808 12:34:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:05.808 12:34:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:05.808 12:34:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:05.808 12:34:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:05.808 12:34:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:05.808 12:34:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:05.808 12:34:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:05.808 12:34:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:06:05.808 12:34:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:06:05.808 12:34:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:05.808 12:34:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:05.808 12:34:04 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:05.808 12:34:04 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:05.808 12:34:04 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:05.808 12:34:04 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:05.808 12:34:04 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:05.808 12:34:04 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:05.808 12:34:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:05.808 12:34:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:05.808 12:34:04 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:05.808 12:34:04 -- paths/export.sh@5 -- # export PATH 00:06:05.808 12:34:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:05.808 12:34:04 -- nvmf/common.sh@47 -- # : 0 00:06:05.808 12:34:04 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:05.808 12:34:04 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:05.808 12:34:04 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:05.808 12:34:04 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:05.808 12:34:04 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:05.808 12:34:04 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:05.808 12:34:04 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:05.808 12:34:04 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:05.808 12:34:04 -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:06:05.808 12:34:04 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:06:05.808 12:34:04 -- target/filesystem.sh@15 -- # nvmftestinit 00:06:05.808 12:34:04 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:06:05.808 12:34:04 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:05.808 12:34:04 -- nvmf/common.sh@437 -- # prepare_net_devs 00:06:05.808 12:34:04 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:06:05.808 12:34:04 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:06:05.808 12:34:04 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:05.808 12:34:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:05.808 12:34:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:05.808 12:34:04 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:06:05.808 12:34:04 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:06:05.808 12:34:04 -- nvmf/common.sh@285 -- # xtrace_disable 00:06:05.808 12:34:04 -- common/autotest_common.sh@10 -- # set +x 00:06:08.336 12:34:07 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:06:08.336 12:34:07 -- nvmf/common.sh@291 -- # pci_devs=() 00:06:08.336 12:34:07 -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:08.336 12:34:07 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:08.336 12:34:07 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:08.336 12:34:07 -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:08.336 12:34:07 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:08.336 12:34:07 -- 
nvmf/common.sh@295 -- # net_devs=() 00:06:08.336 12:34:07 -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:08.336 12:34:07 -- nvmf/common.sh@296 -- # e810=() 00:06:08.336 12:34:07 -- nvmf/common.sh@296 -- # local -ga e810 00:06:08.336 12:34:07 -- nvmf/common.sh@297 -- # x722=() 00:06:08.336 12:34:07 -- nvmf/common.sh@297 -- # local -ga x722 00:06:08.336 12:34:07 -- nvmf/common.sh@298 -- # mlx=() 00:06:08.336 12:34:07 -- nvmf/common.sh@298 -- # local -ga mlx 00:06:08.336 12:34:07 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:08.336 12:34:07 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:08.336 12:34:07 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:08.336 12:34:07 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:08.336 12:34:07 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:08.336 12:34:07 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:08.336 12:34:07 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:08.336 12:34:07 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:08.336 12:34:07 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:08.336 12:34:07 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:08.337 12:34:07 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:08.337 12:34:07 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:08.337 12:34:07 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:08.337 12:34:07 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:08.337 12:34:07 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:08.337 12:34:07 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:08.337 12:34:07 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:08.337 12:34:07 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:08.337 12:34:07 -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:06:08.337 Found 0000:82:00.0 (0x8086 - 0x159b) 00:06:08.337 12:34:07 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:08.337 12:34:07 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:08.337 12:34:07 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:08.337 12:34:07 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:08.337 12:34:07 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:08.337 12:34:07 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:08.337 12:34:07 -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:06:08.337 Found 0000:82:00.1 (0x8086 - 0x159b) 00:06:08.337 12:34:07 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:08.337 12:34:07 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:08.337 12:34:07 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:08.337 12:34:07 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:08.337 12:34:07 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:08.337 12:34:07 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:08.337 12:34:07 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:08.337 12:34:07 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:08.337 12:34:07 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:08.337 12:34:07 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:08.337 12:34:07 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:06:08.337 12:34:07 -- nvmf/common.sh@388 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:08.337 12:34:07 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:06:08.337 Found net devices under 0000:82:00.0: cvl_0_0 00:06:08.337 12:34:07 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:06:08.337 12:34:07 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:08.337 12:34:07 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:08.337 12:34:07 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:06:08.337 12:34:07 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:08.337 12:34:07 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:06:08.337 Found net devices under 0000:82:00.1: cvl_0_1 00:06:08.337 12:34:07 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:06:08.337 12:34:07 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:06:08.337 12:34:07 -- nvmf/common.sh@403 -- # is_hw=yes 00:06:08.337 12:34:07 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:06:08.337 12:34:07 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:06:08.337 12:34:07 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:06:08.337 12:34:07 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:08.337 12:34:07 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:08.337 12:34:07 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:08.337 12:34:07 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:08.337 12:34:07 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:08.337 12:34:07 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:08.337 12:34:07 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:08.337 12:34:07 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:08.337 12:34:07 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:08.337 12:34:07 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:08.337 12:34:07 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:08.337 12:34:07 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:08.337 12:34:07 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:08.337 12:34:07 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:08.337 12:34:07 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:08.337 12:34:07 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:08.337 12:34:07 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:08.337 12:34:07 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:08.337 12:34:07 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:08.337 12:34:07 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:08.337 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:08.337 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.276 ms 00:06:08.337 00:06:08.337 --- 10.0.0.2 ping statistics --- 00:06:08.337 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:08.337 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:06:08.337 12:34:07 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:08.337 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:08.337 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.171 ms 00:06:08.337 00:06:08.337 --- 10.0.0.1 ping statistics --- 00:06:08.337 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:08.337 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:06:08.337 12:34:07 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:08.337 12:34:07 -- nvmf/common.sh@411 -- # return 0 00:06:08.337 12:34:07 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:06:08.337 12:34:07 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:08.337 12:34:07 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:06:08.337 12:34:07 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:06:08.337 12:34:07 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:08.337 12:34:07 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:06:08.337 12:34:07 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:06:08.337 12:34:07 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:06:08.337 12:34:07 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:08.337 12:34:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:08.337 12:34:07 -- common/autotest_common.sh@10 -- # set +x 00:06:08.596 ************************************ 00:06:08.596 START TEST nvmf_filesystem_no_in_capsule 00:06:08.596 ************************************ 00:06:08.596 12:34:07 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_part 0 00:06:08.596 12:34:07 -- target/filesystem.sh@47 -- # in_capsule=0 00:06:08.596 12:34:07 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:06:08.596 12:34:07 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:06:08.596 12:34:07 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:08.596 12:34:07 -- common/autotest_common.sh@10 -- # set +x 00:06:08.596 12:34:07 -- nvmf/common.sh@470 -- # nvmfpid=1074878 00:06:08.596 12:34:07 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:08.596 12:34:07 -- nvmf/common.sh@471 -- # waitforlisten 1074878 00:06:08.596 12:34:07 -- common/autotest_common.sh@817 -- # '[' -z 1074878 ']' 00:06:08.596 12:34:07 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:08.596 12:34:07 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:08.596 12:34:07 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:08.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:08.596 12:34:07 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:08.596 12:34:07 -- common/autotest_common.sh@10 -- # set +x 00:06:08.596 [2024-04-16 12:34:07.457557] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 00:06:08.596 [2024-04-16 12:34:07.457675] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:08.596 EAL: No free 2048 kB hugepages reported on node 1 00:06:08.596 [2024-04-16 12:34:07.533720] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:08.596 [2024-04-16 12:34:07.643808] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
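Annotation (not captured output): the nvmf_tcp_init sequence traced earlier in this block turns the two E810 ports into a point-to-point NVMe/TCP link: cvl_0_0 moves into a fresh network namespace as the target side at 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, TCP port 4420 is opened in iptables, and a ping in each direction proves reachability. Condensed from the trace commands themselves:

ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1   # start from clean addresses
ip netns add cvl_0_0_ns_spdk                         # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side (root namespace)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator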
00:06:08.596 [2024-04-16 12:34:07.643867] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:08.596 [2024-04-16 12:34:07.643900] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:08.596 [2024-04-16 12:34:07.643912] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:08.596 [2024-04-16 12:34:07.643922] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:08.596 [2024-04-16 12:34:07.643971] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:08.596 [2024-04-16 12:34:07.644034] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:08.596 [2024-04-16 12:34:07.644098] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:08.596 [2024-04-16 12:34:07.644101] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.528 12:34:08 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:09.528 12:34:08 -- common/autotest_common.sh@850 -- # return 0 00:06:09.528 12:34:08 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:06:09.528 12:34:08 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:09.528 12:34:08 -- common/autotest_common.sh@10 -- # set +x 00:06:09.528 12:34:08 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:09.528 12:34:08 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:06:09.528 12:34:08 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:06:09.528 12:34:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:09.528 12:34:08 -- common/autotest_common.sh@10 -- # set +x 00:06:09.528 [2024-04-16 12:34:08.481386] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:09.528 12:34:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:09.528 12:34:08 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:06:09.528 12:34:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:09.528 12:34:08 -- common/autotest_common.sh@10 -- # set +x 00:06:09.785 Malloc1 00:06:09.785 12:34:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:09.785 12:34:08 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:09.785 12:34:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:09.785 12:34:08 -- common/autotest_common.sh@10 -- # set +x 00:06:09.785 12:34:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:09.785 12:34:08 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:06:09.785 12:34:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:09.785 12:34:08 -- common/autotest_common.sh@10 -- # set +x 00:06:09.785 12:34:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:09.785 12:34:08 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:09.785 12:34:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:09.785 12:34:08 -- common/autotest_common.sh@10 -- # set +x 00:06:09.785 [2024-04-16 12:34:08.663249] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:09.785 12:34:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:09.785 12:34:08 -- target/filesystem.sh@58 -- # get_bdev_size 
Malloc1 00:06:09.785 12:34:08 -- common/autotest_common.sh@1364 -- # local bdev_name=Malloc1 00:06:09.785 12:34:08 -- common/autotest_common.sh@1365 -- # local bdev_info 00:06:09.785 12:34:08 -- common/autotest_common.sh@1366 -- # local bs 00:06:09.785 12:34:08 -- common/autotest_common.sh@1367 -- # local nb 00:06:09.785 12:34:08 -- common/autotest_common.sh@1368 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:06:09.785 12:34:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:09.785 12:34:08 -- common/autotest_common.sh@10 -- # set +x 00:06:09.785 12:34:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:09.785 12:34:08 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:06:09.785 { 00:06:09.785 "name": "Malloc1", 00:06:09.785 "aliases": [ 00:06:09.785 "5db28b01-ac37-4499-b38a-493cf9bb67fa" 00:06:09.785 ], 00:06:09.785 "product_name": "Malloc disk", 00:06:09.785 "block_size": 512, 00:06:09.785 "num_blocks": 1048576, 00:06:09.785 "uuid": "5db28b01-ac37-4499-b38a-493cf9bb67fa", 00:06:09.785 "assigned_rate_limits": { 00:06:09.785 "rw_ios_per_sec": 0, 00:06:09.785 "rw_mbytes_per_sec": 0, 00:06:09.785 "r_mbytes_per_sec": 0, 00:06:09.785 "w_mbytes_per_sec": 0 00:06:09.785 }, 00:06:09.785 "claimed": true, 00:06:09.785 "claim_type": "exclusive_write", 00:06:09.785 "zoned": false, 00:06:09.785 "supported_io_types": { 00:06:09.785 "read": true, 00:06:09.785 "write": true, 00:06:09.785 "unmap": true, 00:06:09.785 "write_zeroes": true, 00:06:09.785 "flush": true, 00:06:09.785 "reset": true, 00:06:09.785 "compare": false, 00:06:09.785 "compare_and_write": false, 00:06:09.785 "abort": true, 00:06:09.785 "nvme_admin": false, 00:06:09.785 "nvme_io": false 00:06:09.785 }, 00:06:09.785 "memory_domains": [ 00:06:09.785 { 00:06:09.785 "dma_device_id": "system", 00:06:09.785 "dma_device_type": 1 00:06:09.785 }, 00:06:09.785 { 00:06:09.785 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:09.785 "dma_device_type": 2 00:06:09.785 } 00:06:09.785 ], 00:06:09.785 "driver_specific": {} 00:06:09.785 } 00:06:09.786 ]' 00:06:09.786 12:34:08 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:06:09.786 12:34:08 -- common/autotest_common.sh@1369 -- # bs=512 00:06:09.786 12:34:08 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:06:09.786 12:34:08 -- common/autotest_common.sh@1370 -- # nb=1048576 00:06:09.786 12:34:08 -- common/autotest_common.sh@1373 -- # bdev_size=512 00:06:09.786 12:34:08 -- common/autotest_common.sh@1374 -- # echo 512 00:06:09.786 12:34:08 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:06:09.786 12:34:08 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:10.716 12:34:09 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:06:10.716 12:34:09 -- common/autotest_common.sh@1184 -- # local i=0 00:06:10.716 12:34:09 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:06:10.716 12:34:09 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:06:10.716 12:34:09 -- common/autotest_common.sh@1191 -- # sleep 2 00:06:12.614 12:34:11 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:06:12.614 12:34:11 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:06:12.614 12:34:11 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:06:12.614 12:34:11 -- common/autotest_common.sh@1193 -- # nvme_devices=1 
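Annotation (not captured output): the exchange above is the whole attach path for this suite. The target side is provisioned over RPC (a 512 MiB Malloc bdev with 512-byte blocks, exported through nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420), jq pulls block_size and num_blocks out of bdev_get_bdevs to compute the expected 536870912-byte size, and the initiator connects and polls until the namespace surfaces. A condensed sketch; the rpc.py path and the failure branch are assumptions, everything else is read from the trace:

# target side (the trace goes through the rpc_cmd wrapper; plain rpc.py assumed here)
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
./scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# initiator side: connect, then poll for the serial (the waitforserial loop above)
nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
i=0
until lsblk -l -o NAME,SERIAL | grep -qw SPDKISFASTANDAWESOME; do
    (( i++ <= 15 )) || { echo 'serial never appeared' >&2; exit 1; }   # assumed failure branch
    sleep 2
done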
00:06:12.614 12:34:11 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:06:12.614 12:34:11 -- common/autotest_common.sh@1194 -- # return 0 00:06:12.614 12:34:11 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:06:12.614 12:34:11 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:06:12.614 12:34:11 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:06:12.614 12:34:11 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:06:12.614 12:34:11 -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:12.614 12:34:11 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:12.615 12:34:11 -- setup/common.sh@80 -- # echo 536870912 00:06:12.615 12:34:11 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:06:12.615 12:34:11 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:06:12.615 12:34:11 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:06:12.615 12:34:11 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:06:12.871 12:34:11 -- target/filesystem.sh@69 -- # partprobe 00:06:13.434 12:34:12 -- target/filesystem.sh@70 -- # sleep 1 00:06:14.365 12:34:13 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:06:14.365 12:34:13 -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:06:14.365 12:34:13 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:14.365 12:34:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:14.365 12:34:13 -- common/autotest_common.sh@10 -- # set +x 00:06:14.365 ************************************ 00:06:14.365 START TEST filesystem_ext4 00:06:14.365 ************************************ 00:06:14.365 12:34:13 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create ext4 nvme0n1 00:06:14.365 12:34:13 -- target/filesystem.sh@18 -- # fstype=ext4 00:06:14.365 12:34:13 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:14.365 12:34:13 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:06:14.365 12:34:13 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:06:14.365 12:34:13 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:06:14.365 12:34:13 -- common/autotest_common.sh@914 -- # local i=0 00:06:14.365 12:34:13 -- common/autotest_common.sh@915 -- # local force 00:06:14.365 12:34:13 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:06:14.365 12:34:13 -- common/autotest_common.sh@918 -- # force=-F 00:06:14.365 12:34:13 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:06:14.365 mke2fs 1.46.5 (30-Dec-2021) 00:06:14.622 Discarding device blocks: 0/522240 done 00:06:14.622 Creating filesystem with 522240 1k blocks and 130560 inodes 00:06:14.622 Filesystem UUID: 9c9feea4-4098-4b0b-8403-ce4d93ed5f59 00:06:14.622 Superblock backups stored on blocks: 00:06:14.622 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:06:14.622 00:06:14.622 Allocating group tables: 0/64 done 00:06:14.622 Writing inode tables: 0/64 done 00:06:14.879 Creating journal (8192 blocks): done 00:06:15.703 Writing superblocks and filesystem accounting information: 0/64 2/64 done 00:06:15.703 00:06:15.703 12:34:14 -- common/autotest_common.sh@931 -- # return 0 00:06:15.703 12:34:14 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:15.960 12:34:14 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:15.960 12:34:14 -- target/filesystem.sh@25 -- # sync 00:06:15.960 12:34:14 -- target/filesystem.sh@26 -- 
# rm /mnt/device/aaa 00:06:15.960 12:34:14 -- target/filesystem.sh@27 -- # sync 00:06:15.960 12:34:14 -- target/filesystem.sh@29 -- # i=0 00:06:15.960 12:34:14 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:15.960 12:34:14 -- target/filesystem.sh@37 -- # kill -0 1074878 00:06:15.960 12:34:14 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:15.960 12:34:14 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:15.960 12:34:14 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:15.960 12:34:14 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:15.960 00:06:15.960 real 0m1.501s 00:06:15.960 user 0m0.015s 00:06:15.960 sys 0m0.058s 00:06:15.960 12:34:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:15.960 12:34:14 -- common/autotest_common.sh@10 -- # set +x 00:06:15.960 ************************************ 00:06:15.960 END TEST filesystem_ext4 00:06:15.960 ************************************ 00:06:15.960 12:34:14 -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:06:15.960 12:34:14 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:15.960 12:34:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:15.960 12:34:14 -- common/autotest_common.sh@10 -- # set +x 00:06:15.960 ************************************ 00:06:15.960 START TEST filesystem_btrfs 00:06:15.960 ************************************ 00:06:15.960 12:34:15 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create btrfs nvme0n1 00:06:15.960 12:34:15 -- target/filesystem.sh@18 -- # fstype=btrfs 00:06:15.960 12:34:15 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:15.960 12:34:15 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:06:15.960 12:34:15 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:06:15.960 12:34:15 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:06:15.960 12:34:15 -- common/autotest_common.sh@914 -- # local i=0 00:06:15.960 12:34:15 -- common/autotest_common.sh@915 -- # local force 00:06:15.960 12:34:15 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:06:15.960 12:34:15 -- common/autotest_common.sh@920 -- # force=-f 00:06:15.960 12:34:15 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:06:16.217 btrfs-progs v6.6.2 00:06:16.217 See https://btrfs.readthedocs.io for more information. 00:06:16.217 00:06:16.217 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:06:16.217 NOTE: several default settings have changed in version 5.15, please make sure 00:06:16.217 this does not affect your deployments: 00:06:16.217 - DUP for metadata (-m dup) 00:06:16.217 - enabled no-holes (-O no-holes) 00:06:16.217 - enabled free-space-tree (-R free-space-tree) 00:06:16.217 00:06:16.217 Label: (null) 00:06:16.217 UUID: 2c48af15-8ffa-4fdc-9c91-20843fc5b32f 00:06:16.217 Node size: 16384 00:06:16.217 Sector size: 4096 00:06:16.217 Filesystem size: 510.00MiB 00:06:16.217 Block group profiles: 00:06:16.217 Data: single 8.00MiB 00:06:16.217 Metadata: DUP 32.00MiB 00:06:16.217 System: DUP 8.00MiB 00:06:16.217 SSD detected: yes 00:06:16.217 Zoned device: no 00:06:16.217 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:06:16.217 Runtime features: free-space-tree 00:06:16.217 Checksum: crc32c 00:06:16.217 Number of devices: 1 00:06:16.217 Devices: 00:06:16.217 ID SIZE PATH 00:06:16.217 1 510.00MiB /dev/nvme0n1p1 00:06:16.217 00:06:16.217 12:34:15 -- common/autotest_common.sh@931 -- # return 0 00:06:16.217 12:34:15 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:16.474 12:34:15 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:16.474 12:34:15 -- target/filesystem.sh@25 -- # sync 00:06:16.474 12:34:15 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:16.474 12:34:15 -- target/filesystem.sh@27 -- # sync 00:06:16.474 12:34:15 -- target/filesystem.sh@29 -- # i=0 00:06:16.474 12:34:15 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:16.474 12:34:15 -- target/filesystem.sh@37 -- # kill -0 1074878 00:06:16.474 12:34:15 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:16.474 12:34:15 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:16.474 12:34:15 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:16.474 12:34:15 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:16.474 00:06:16.474 real 0m0.487s 00:06:16.474 user 0m0.015s 00:06:16.474 sys 0m0.115s 00:06:16.474 12:34:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:16.474 12:34:15 -- common/autotest_common.sh@10 -- # set +x 00:06:16.474 ************************************ 00:06:16.474 END TEST filesystem_btrfs 00:06:16.474 ************************************ 00:06:16.474 12:34:15 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:06:16.474 12:34:15 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:16.474 12:34:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:16.474 12:34:15 -- common/autotest_common.sh@10 -- # set +x 00:06:16.731 ************************************ 00:06:16.731 START TEST filesystem_xfs 00:06:16.731 ************************************ 00:06:16.731 12:34:15 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create xfs nvme0n1 00:06:16.731 12:34:15 -- target/filesystem.sh@18 -- # fstype=xfs 00:06:16.731 12:34:15 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:16.731 12:34:15 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:06:16.731 12:34:15 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:06:16.731 12:34:15 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:06:16.731 12:34:15 -- common/autotest_common.sh@914 -- # local i=0 00:06:16.731 12:34:15 -- common/autotest_common.sh@915 -- # local force 00:06:16.731 12:34:15 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:06:16.731 12:34:15 -- common/autotest_common.sh@920 -- # force=-f 00:06:16.731 12:34:15 -- 
common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1 00:06:16.731 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:06:16.731 = sectsz=512 attr=2, projid32bit=1 00:06:16.731 = crc=1 finobt=1, sparse=1, rmapbt=0 00:06:16.731 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:06:16.731 data = bsize=4096 blocks=130560, imaxpct=25 00:06:16.731 = sunit=0 swidth=0 blks 00:06:16.731 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:06:16.731 log =internal log bsize=4096 blocks=16384, version=2 00:06:16.731 = sectsz=512 sunit=0 blks, lazy-count=1 00:06:16.731 realtime =none extsz=4096 blocks=0, rtextents=0 00:06:17.660 Discarding blocks...Done. 00:06:17.660 12:34:16 -- common/autotest_common.sh@931 -- # return 0 00:06:17.660 12:34:16 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:20.187 12:34:18 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:20.187 12:34:18 -- target/filesystem.sh@25 -- # sync 00:06:20.187 12:34:18 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:20.187 12:34:18 -- target/filesystem.sh@27 -- # sync 00:06:20.187 12:34:18 -- target/filesystem.sh@29 -- # i=0 00:06:20.187 12:34:18 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:20.187 12:34:19 -- target/filesystem.sh@37 -- # kill -0 1074878 00:06:20.187 12:34:19 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:20.187 12:34:19 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:20.187 12:34:19 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:20.187 12:34:19 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:20.187 00:06:20.187 real 0m3.417s 00:06:20.187 user 0m0.015s 00:06:20.187 sys 0m0.061s 00:06:20.187 12:34:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:20.187 12:34:19 -- common/autotest_common.sh@10 -- # set +x 00:06:20.187 ************************************ 00:06:20.187 END TEST filesystem_xfs 00:06:20.187 ************************************ 00:06:20.187 12:34:19 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:06:20.187 12:34:19 -- target/filesystem.sh@93 -- # sync 00:06:20.187 12:34:19 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:06:20.187 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:20.187 12:34:19 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:06:20.187 12:34:19 -- common/autotest_common.sh@1205 -- # local i=0 00:06:20.187 12:34:19 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:06:20.187 12:34:19 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:20.187 12:34:19 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:06:20.187 12:34:19 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:20.187 12:34:19 -- common/autotest_common.sh@1217 -- # return 0 00:06:20.187 12:34:19 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:20.187 12:34:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:20.187 12:34:19 -- common/autotest_common.sh@10 -- # set +x 00:06:20.187 12:34:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:20.187 12:34:19 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:06:20.187 12:34:19 -- target/filesystem.sh@101 -- # killprocess 1074878 00:06:20.187 12:34:19 -- common/autotest_common.sh@936 -- # '[' -z 1074878 ']' 00:06:20.187 12:34:19 -- common/autotest_common.sh@940 -- # kill -0 1074878 00:06:20.187 12:34:19 -- 
common/autotest_common.sh@941 -- # uname 00:06:20.187 12:34:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:20.187 12:34:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1074878 00:06:20.187 12:34:19 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:20.187 12:34:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:20.187 12:34:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1074878' 00:06:20.187 killing process with pid 1074878 00:06:20.187 12:34:19 -- common/autotest_common.sh@955 -- # kill 1074878 00:06:20.187 12:34:19 -- common/autotest_common.sh@960 -- # wait 1074878 00:06:20.752 12:34:19 -- target/filesystem.sh@102 -- # nvmfpid= 00:06:20.752 00:06:20.752 real 0m12.280s 00:06:20.752 user 0m47.338s 00:06:20.752 sys 0m1.887s 00:06:20.752 12:34:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:20.752 12:34:19 -- common/autotest_common.sh@10 -- # set +x 00:06:20.752 ************************************ 00:06:20.752 END TEST nvmf_filesystem_no_in_capsule 00:06:20.752 ************************************ 00:06:20.752 12:34:19 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:06:20.752 12:34:19 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:20.752 12:34:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:20.752 12:34:19 -- common/autotest_common.sh@10 -- # set +x 00:06:20.752 ************************************ 00:06:20.752 START TEST nvmf_filesystem_in_capsule 00:06:20.752 ************************************ 00:06:20.752 12:34:19 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_part 4096 00:06:20.752 12:34:19 -- target/filesystem.sh@47 -- # in_capsule=4096 00:06:20.752 12:34:19 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:06:20.752 12:34:19 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:06:20.752 12:34:19 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:20.752 12:34:19 -- common/autotest_common.sh@10 -- # set +x 00:06:21.010 12:34:19 -- nvmf/common.sh@470 -- # nvmfpid=1076598 00:06:21.010 12:34:19 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:21.010 12:34:19 -- nvmf/common.sh@471 -- # waitforlisten 1076598 00:06:21.010 12:34:19 -- common/autotest_common.sh@817 -- # '[' -z 1076598 ']' 00:06:21.010 12:34:19 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:21.010 12:34:19 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:21.010 12:34:19 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:21.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:21.010 12:34:19 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:21.010 12:34:19 -- common/autotest_common.sh@10 -- # set +x 00:06:21.010 [2024-04-16 12:34:19.869459] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 
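Annotation (not captured output): the suite starting here, nvmf_filesystem_in_capsule, is the same filesystem battery rerun via nvmf_filesystem_part 4096. The only functional difference is the transport's in-capsule data size, which lets writes of up to 4096 bytes ride inside the NVMe/TCP command capsule instead of being fetched in a separate data transfer. Both invocations appear verbatim in this log:

rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0      # first suite: no in-capsule data
rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096   # this suite: 4 KiB in-capsule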
00:06:21.010 [2024-04-16 12:34:19.869535] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:21.010 EAL: No free 2048 kB hugepages reported on node 1 00:06:21.010 [2024-04-16 12:34:19.948098] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:21.010 [2024-04-16 12:34:20.069435] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:21.010 [2024-04-16 12:34:20.069510] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:21.010 [2024-04-16 12:34:20.069527] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:21.010 [2024-04-16 12:34:20.069542] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:21.010 [2024-04-16 12:34:20.069553] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:21.010 [2024-04-16 12:34:20.069645] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:21.010 [2024-04-16 12:34:20.069708] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:21.010 [2024-04-16 12:34:20.069757] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:21.010 [2024-04-16 12:34:20.069761] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.943 12:34:20 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:21.943 12:34:20 -- common/autotest_common.sh@850 -- # return 0 00:06:21.943 12:34:20 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:06:21.943 12:34:20 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:21.943 12:34:20 -- common/autotest_common.sh@10 -- # set +x 00:06:21.943 12:34:20 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:21.943 12:34:20 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:06:21.943 12:34:20 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:06:21.943 12:34:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:21.943 12:34:20 -- common/autotest_common.sh@10 -- # set +x 00:06:21.943 [2024-04-16 12:34:20.873623] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:21.943 12:34:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:21.943 12:34:20 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:06:21.943 12:34:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:21.943 12:34:20 -- common/autotest_common.sh@10 -- # set +x 00:06:22.200 Malloc1 00:06:22.200 12:34:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:22.201 12:34:21 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:22.201 12:34:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:22.201 12:34:21 -- common/autotest_common.sh@10 -- # set +x 00:06:22.201 12:34:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:22.201 12:34:21 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:06:22.201 12:34:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:22.201 12:34:21 -- common/autotest_common.sh@10 -- # set +x 00:06:22.201 12:34:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:22.201 12:34:21 
-- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:22.201 12:34:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:22.201 12:34:21 -- common/autotest_common.sh@10 -- # set +x 00:06:22.201 [2024-04-16 12:34:21.054090] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:22.201 12:34:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:22.201 12:34:21 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:06:22.201 12:34:21 -- common/autotest_common.sh@1364 -- # local bdev_name=Malloc1 00:06:22.201 12:34:21 -- common/autotest_common.sh@1365 -- # local bdev_info 00:06:22.201 12:34:21 -- common/autotest_common.sh@1366 -- # local bs 00:06:22.201 12:34:21 -- common/autotest_common.sh@1367 -- # local nb 00:06:22.201 12:34:21 -- common/autotest_common.sh@1368 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:06:22.201 12:34:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:22.201 12:34:21 -- common/autotest_common.sh@10 -- # set +x 00:06:22.201 12:34:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:22.201 12:34:21 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:06:22.201 { 00:06:22.201 "name": "Malloc1", 00:06:22.201 "aliases": [ 00:06:22.201 "c6b41e8c-bda8-4ef6-841d-f587dee0644f" 00:06:22.201 ], 00:06:22.201 "product_name": "Malloc disk", 00:06:22.201 "block_size": 512, 00:06:22.201 "num_blocks": 1048576, 00:06:22.201 "uuid": "c6b41e8c-bda8-4ef6-841d-f587dee0644f", 00:06:22.201 "assigned_rate_limits": { 00:06:22.201 "rw_ios_per_sec": 0, 00:06:22.201 "rw_mbytes_per_sec": 0, 00:06:22.201 "r_mbytes_per_sec": 0, 00:06:22.201 "w_mbytes_per_sec": 0 00:06:22.201 }, 00:06:22.201 "claimed": true, 00:06:22.201 "claim_type": "exclusive_write", 00:06:22.201 "zoned": false, 00:06:22.201 "supported_io_types": { 00:06:22.201 "read": true, 00:06:22.201 "write": true, 00:06:22.201 "unmap": true, 00:06:22.201 "write_zeroes": true, 00:06:22.201 "flush": true, 00:06:22.201 "reset": true, 00:06:22.201 "compare": false, 00:06:22.201 "compare_and_write": false, 00:06:22.201 "abort": true, 00:06:22.201 "nvme_admin": false, 00:06:22.201 "nvme_io": false 00:06:22.201 }, 00:06:22.201 "memory_domains": [ 00:06:22.201 { 00:06:22.201 "dma_device_id": "system", 00:06:22.201 "dma_device_type": 1 00:06:22.201 }, 00:06:22.201 { 00:06:22.201 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:22.201 "dma_device_type": 2 00:06:22.201 } 00:06:22.201 ], 00:06:22.201 "driver_specific": {} 00:06:22.201 } 00:06:22.201 ]' 00:06:22.201 12:34:21 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:06:22.201 12:34:21 -- common/autotest_common.sh@1369 -- # bs=512 00:06:22.201 12:34:21 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:06:22.201 12:34:21 -- common/autotest_common.sh@1370 -- # nb=1048576 00:06:22.201 12:34:21 -- common/autotest_common.sh@1373 -- # bdev_size=512 00:06:22.201 12:34:21 -- common/autotest_common.sh@1374 -- # echo 512 00:06:22.201 12:34:21 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:06:22.201 12:34:21 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:22.776 12:34:21 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:06:22.776 12:34:21 -- common/autotest_common.sh@1184 -- # local i=0 00:06:22.776 12:34:21 -- 
common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:06:22.776 12:34:21 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:06:22.776 12:34:21 -- common/autotest_common.sh@1191 -- # sleep 2 00:06:25.307 12:34:23 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:06:25.307 12:34:23 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:06:25.307 12:34:23 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:06:25.307 12:34:23 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:06:25.307 12:34:23 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:06:25.307 12:34:23 -- common/autotest_common.sh@1194 -- # return 0 00:06:25.307 12:34:23 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:06:25.307 12:34:23 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:06:25.307 12:34:23 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:06:25.307 12:34:23 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:06:25.307 12:34:23 -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:25.307 12:34:23 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:25.307 12:34:23 -- setup/common.sh@80 -- # echo 536870912 00:06:25.307 12:34:23 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:06:25.307 12:34:23 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:06:25.307 12:34:23 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:06:25.307 12:34:23 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:06:25.307 12:34:24 -- target/filesystem.sh@69 -- # partprobe 00:06:26.240 12:34:25 -- target/filesystem.sh@70 -- # sleep 1 00:06:27.611 12:34:26 -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:06:27.611 12:34:26 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:06:27.611 12:34:26 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:27.611 12:34:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:27.611 12:34:26 -- common/autotest_common.sh@10 -- # set +x 00:06:27.611 ************************************ 00:06:27.611 START TEST filesystem_in_capsule_ext4 00:06:27.611 ************************************ 00:06:27.611 12:34:26 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create ext4 nvme0n1 00:06:27.611 12:34:26 -- target/filesystem.sh@18 -- # fstype=ext4 00:06:27.611 12:34:26 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:27.611 12:34:26 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:06:27.611 12:34:26 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:06:27.611 12:34:26 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:06:27.611 12:34:26 -- common/autotest_common.sh@914 -- # local i=0 00:06:27.611 12:34:26 -- common/autotest_common.sh@915 -- # local force 00:06:27.611 12:34:26 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:06:27.611 12:34:26 -- common/autotest_common.sh@918 -- # force=-F 00:06:27.611 12:34:26 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:06:27.611 mke2fs 1.46.5 (30-Dec-2021) 00:06:27.612 Discarding device blocks: 0/522240 done 00:06:27.612 Creating filesystem with 522240 1k blocks and 130560 inodes 00:06:27.612 Filesystem UUID: 2df05c5e-3195-453e-9574-10389843e67f 00:06:27.612 Superblock backups stored on blocks: 00:06:27.612 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:06:27.612 00:06:27.612 
Allocating group tables: 0/64 done 00:06:27.612 Writing inode tables: 0/64 done 00:06:27.612 Creating journal (8192 blocks): done 00:06:27.612 Writing superblocks and filesystem accounting information: 0/64 done 00:06:27.612 00:06:27.612 12:34:26 -- common/autotest_common.sh@931 -- # return 0 00:06:27.612 12:34:26 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:28.175 12:34:27 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:28.175 12:34:27 -- target/filesystem.sh@25 -- # sync 00:06:28.175 12:34:27 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:28.175 12:34:27 -- target/filesystem.sh@27 -- # sync 00:06:28.175 12:34:27 -- target/filesystem.sh@29 -- # i=0 00:06:28.175 12:34:27 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:28.175 12:34:27 -- target/filesystem.sh@37 -- # kill -0 1076598 00:06:28.175 12:34:27 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:28.175 12:34:27 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:28.433 12:34:27 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:28.433 12:34:27 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:28.433 00:06:28.433 real 0m0.897s 00:06:28.433 user 0m0.010s 00:06:28.433 sys 0m0.062s 00:06:28.433 12:34:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:28.433 12:34:27 -- common/autotest_common.sh@10 -- # set +x 00:06:28.433 ************************************ 00:06:28.433 END TEST filesystem_in_capsule_ext4 00:06:28.433 ************************************ 00:06:28.433 12:34:27 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:06:28.433 12:34:27 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:28.433 12:34:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:28.433 12:34:27 -- common/autotest_common.sh@10 -- # set +x 00:06:28.433 ************************************ 00:06:28.433 START TEST filesystem_in_capsule_btrfs 00:06:28.433 ************************************ 00:06:28.433 12:34:27 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create btrfs nvme0n1 00:06:28.433 12:34:27 -- target/filesystem.sh@18 -- # fstype=btrfs 00:06:28.433 12:34:27 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:28.433 12:34:27 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:06:28.433 12:34:27 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:06:28.433 12:34:27 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:06:28.433 12:34:27 -- common/autotest_common.sh@914 -- # local i=0 00:06:28.433 12:34:27 -- common/autotest_common.sh@915 -- # local force 00:06:28.433 12:34:27 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:06:28.433 12:34:27 -- common/autotest_common.sh@920 -- # force=-f 00:06:28.433 12:34:27 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:06:28.690 btrfs-progs v6.6.2 00:06:28.690 See https://btrfs.readthedocs.io for more information. 00:06:28.690 00:06:28.690 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:06:28.690 NOTE: several default settings have changed in version 5.15, please make sure 00:06:28.690 this does not affect your deployments: 00:06:28.690 - DUP for metadata (-m dup) 00:06:28.690 - enabled no-holes (-O no-holes) 00:06:28.690 - enabled free-space-tree (-R free-space-tree) 00:06:28.690 00:06:28.690 Label: (null) 00:06:28.690 UUID: 122ef347-77f1-43f2-a271-8301e72695e6 00:06:28.690 Node size: 16384 00:06:28.690 Sector size: 4096 00:06:28.690 Filesystem size: 510.00MiB 00:06:28.690 Block group profiles: 00:06:28.690 Data: single 8.00MiB 00:06:28.690 Metadata: DUP 32.00MiB 00:06:28.690 System: DUP 8.00MiB 00:06:28.690 SSD detected: yes 00:06:28.690 Zoned device: no 00:06:28.690 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:06:28.690 Runtime features: free-space-tree 00:06:28.690 Checksum: crc32c 00:06:28.690 Number of devices: 1 00:06:28.690 Devices: 00:06:28.690 ID SIZE PATH 00:06:28.690 1 510.00MiB /dev/nvme0n1p1 00:06:28.690 00:06:28.690 12:34:27 -- common/autotest_common.sh@931 -- # return 0 00:06:28.690 12:34:27 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:29.621 12:34:28 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:29.621 12:34:28 -- target/filesystem.sh@25 -- # sync 00:06:29.621 12:34:28 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:29.621 12:34:28 -- target/filesystem.sh@27 -- # sync 00:06:29.621 12:34:28 -- target/filesystem.sh@29 -- # i=0 00:06:29.621 12:34:28 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:29.621 12:34:28 -- target/filesystem.sh@37 -- # kill -0 1076598 00:06:29.621 12:34:28 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:29.621 12:34:28 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:29.621 12:34:28 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:29.621 12:34:28 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:29.621 00:06:29.621 real 0m1.033s 00:06:29.621 user 0m0.007s 00:06:29.621 sys 0m0.120s 00:06:29.621 12:34:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:29.621 12:34:28 -- common/autotest_common.sh@10 -- # set +x 00:06:29.621 ************************************ 00:06:29.621 END TEST filesystem_in_capsule_btrfs 00:06:29.621 ************************************ 00:06:29.621 12:34:28 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:06:29.621 12:34:28 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:29.621 12:34:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:29.621 12:34:28 -- common/autotest_common.sh@10 -- # set +x 00:06:29.621 ************************************ 00:06:29.621 START TEST filesystem_in_capsule_xfs 00:06:29.621 ************************************ 00:06:29.621 12:34:28 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create xfs nvme0n1 00:06:29.621 12:34:28 -- target/filesystem.sh@18 -- # fstype=xfs 00:06:29.621 12:34:28 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:29.621 12:34:28 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:06:29.621 12:34:28 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:06:29.621 12:34:28 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:06:29.621 12:34:28 -- common/autotest_common.sh@914 -- # local i=0 00:06:29.621 12:34:28 -- common/autotest_common.sh@915 -- # local force 00:06:29.621 12:34:28 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:06:29.621 12:34:28 -- common/autotest_common.sh@920 -- # force=-f 
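Annotation (not captured output): the '[' xfs = ext4 ']' test that just selected force=-f is make_filesystem picking the right overwrite flag, since mkfs.ext4 spells it -F while mkfs.btrfs and mkfs.xfs spell it -f. A minimal sketch of that helper; the real one in autotest_common.sh also keeps a retry counter (the local i=0 traced above), which this version omits:

make_filesystem() {
    local fstype=$1 dev_name=$2 force
    if [[ $fstype == ext4 ]]; then force=-F; else force=-f; fi
    mkfs."$fstype" "$force" "$dev_name"    # e.g. mkfs.xfs -f /dev/nvme0n1p1
}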
00:06:29.621 12:34:28 -- common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1 00:06:29.621 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:06:29.621 = sectsz=512 attr=2, projid32bit=1 00:06:29.621 = crc=1 finobt=1, sparse=1, rmapbt=0 00:06:29.621 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:06:29.621 data = bsize=4096 blocks=130560, imaxpct=25 00:06:29.621 = sunit=0 swidth=0 blks 00:06:29.621 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:06:29.621 log =internal log bsize=4096 blocks=16384, version=2 00:06:29.621 = sectsz=512 sunit=0 blks, lazy-count=1 00:06:29.621 realtime =none extsz=4096 blocks=0, rtextents=0 00:06:30.553 Discarding blocks...Done. 00:06:30.553 12:34:29 -- common/autotest_common.sh@931 -- # return 0 00:06:30.553 12:34:29 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:32.450 12:34:31 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:32.450 12:34:31 -- target/filesystem.sh@25 -- # sync 00:06:32.450 12:34:31 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:32.450 12:34:31 -- target/filesystem.sh@27 -- # sync 00:06:32.450 12:34:31 -- target/filesystem.sh@29 -- # i=0 00:06:32.450 12:34:31 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:32.450 12:34:31 -- target/filesystem.sh@37 -- # kill -0 1076598 00:06:32.450 12:34:31 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:32.450 12:34:31 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:32.450 12:34:31 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:32.450 12:34:31 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:32.450 00:06:32.450 real 0m2.819s 00:06:32.450 user 0m0.013s 00:06:32.450 sys 0m0.064s 00:06:32.450 12:34:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:32.450 12:34:31 -- common/autotest_common.sh@10 -- # set +x 00:06:32.450 ************************************ 00:06:32.450 END TEST filesystem_in_capsule_xfs 00:06:32.450 ************************************ 00:06:32.450 12:34:31 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:06:32.707 12:34:31 -- target/filesystem.sh@93 -- # sync 00:06:32.707 12:34:31 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:06:32.707 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:32.707 12:34:31 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:06:32.707 12:34:31 -- common/autotest_common.sh@1205 -- # local i=0 00:06:32.707 12:34:31 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:06:32.707 12:34:31 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:32.707 12:34:31 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:06:32.707 12:34:31 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:33.046 12:34:31 -- common/autotest_common.sh@1217 -- # return 0 00:06:33.046 12:34:31 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:33.046 12:34:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:33.046 12:34:31 -- common/autotest_common.sh@10 -- # set +x 00:06:33.046 12:34:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:33.046 12:34:31 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:06:33.046 12:34:31 -- target/filesystem.sh@101 -- # killprocess 1076598 00:06:33.046 12:34:31 -- common/autotest_common.sh@936 -- # '[' -z 1076598 ']' 00:06:33.046 12:34:31 -- common/autotest_common.sh@940 -- # kill -0 1076598 
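Annotation (not captured output): every filesystem case in both suites runs the cycle that just finished for xfs. It mounts the GPT partition, creates and syncs a file, removes it, syncs again, unmounts, and then proves nothing died under I/O: kill -0 checks the target process and lsblk checks that the namespace is still exported. Condensed from the trace, with $nvmfpid standing in for the literal pid:

mount /dev/nvme0n1p1 /mnt/device
touch /mnt/device/aaa; sync
rm /mnt/device/aaa;    sync
umount /mnt/device
kill -0 "$nvmfpid"                        # SPDK target still running?
lsblk -l -o NAME | grep -q -w nvme0n1     # controller still visible?
lsblk -l -o NAME | grep -q -w nvme0n1p1   # partition still visible?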
00:06:33.046 12:34:31 -- common/autotest_common.sh@941 -- # uname 00:06:33.046 12:34:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:33.046 12:34:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1076598 00:06:33.046 12:34:31 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:33.046 12:34:31 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:33.046 12:34:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1076598' 00:06:33.046 killing process with pid 1076598 00:06:33.046 12:34:31 -- common/autotest_common.sh@955 -- # kill 1076598 00:06:33.046 12:34:31 -- common/autotest_common.sh@960 -- # wait 1076598 00:06:33.305 12:34:32 -- target/filesystem.sh@102 -- # nvmfpid= 00:06:33.305 00:06:33.305 real 0m12.469s 00:06:33.305 user 0m47.834s 00:06:33.305 sys 0m2.067s 00:06:33.305 12:34:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:33.305 12:34:32 -- common/autotest_common.sh@10 -- # set +x 00:06:33.305 ************************************ 00:06:33.305 END TEST nvmf_filesystem_in_capsule 00:06:33.305 ************************************ 00:06:33.305 12:34:32 -- target/filesystem.sh@108 -- # nvmftestfini 00:06:33.305 12:34:32 -- nvmf/common.sh@477 -- # nvmfcleanup 00:06:33.305 12:34:32 -- nvmf/common.sh@117 -- # sync 00:06:33.305 12:34:32 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:33.305 12:34:32 -- nvmf/common.sh@120 -- # set +e 00:06:33.305 12:34:32 -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:33.305 12:34:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:33.305 rmmod nvme_tcp 00:06:33.305 rmmod nvme_fabrics 00:06:33.305 rmmod nvme_keyring 00:06:33.564 12:34:32 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:33.564 12:34:32 -- nvmf/common.sh@124 -- # set -e 00:06:33.564 12:34:32 -- nvmf/common.sh@125 -- # return 0 00:06:33.564 12:34:32 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:06:33.564 12:34:32 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:06:33.564 12:34:32 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:06:33.564 12:34:32 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:06:33.564 12:34:32 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:33.564 12:34:32 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:33.564 12:34:32 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:33.564 12:34:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:33.564 12:34:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:35.469 12:34:34 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:35.469 00:06:35.469 real 0m29.952s 00:06:35.469 user 1m36.349s 00:06:35.469 sys 0m5.963s 00:06:35.469 12:34:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:35.469 12:34:34 -- common/autotest_common.sh@10 -- # set +x 00:06:35.469 ************************************ 00:06:35.469 END TEST nvmf_filesystem 00:06:35.469 ************************************ 00:06:35.469 12:34:34 -- nvmf/nvmf.sh@25 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:06:35.469 12:34:34 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:35.469 12:34:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:35.469 12:34:34 -- common/autotest_common.sh@10 -- # set +x 00:06:35.758 ************************************ 00:06:35.758 START TEST nvmf_discovery 00:06:35.758 ************************************ 00:06:35.758 
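Annotation (not captured output): nvmftestfini, traced just above, is the mirror image of the setup. It syncs, unloads the kernel initiator stack (the three rmmod lines for nvme_tcp, nvme_fabrics and nvme_keyring all come from the single modprobe -v -r nvme-tcp), drops the spdk namespace, and flushes the leftover initiator address so the nvmf_discovery run beginning here can rebuild the topology from a clean slate. A hedged sketch; the netns delete is an assumption about what _remove_spdk_ns does:

sync
modprobe -v -r nvme-tcp                      # also removes nvme_fabrics, nvme_keyring
ip netns delete cvl_0_0_ns_spdk 2>/dev/null  # assumed body of _remove_spdk_ns
ip -4 addr flush cvl_0_1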
12:34:34 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:06:35.758 * Looking for test storage... 00:06:35.758 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:35.758 12:34:34 -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:35.758 12:34:34 -- nvmf/common.sh@7 -- # uname -s 00:06:35.758 12:34:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:35.758 12:34:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:35.758 12:34:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:35.758 12:34:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:35.758 12:34:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:35.758 12:34:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:35.758 12:34:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:35.758 12:34:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:35.758 12:34:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:35.758 12:34:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:35.758 12:34:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:06:35.758 12:34:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:06:35.758 12:34:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:35.758 12:34:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:35.758 12:34:34 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:35.758 12:34:34 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:35.758 12:34:34 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:35.758 12:34:34 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:35.758 12:34:34 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:35.758 12:34:34 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:35.758 12:34:34 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:35.758 12:34:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:35.758 12:34:34 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:35.758 12:34:34 -- paths/export.sh@5 -- # export PATH 00:06:35.758 12:34:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:35.758 12:34:34 -- nvmf/common.sh@47 -- # : 0 00:06:35.758 12:34:34 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:35.758 12:34:34 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:35.758 12:34:34 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:35.758 12:34:34 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:35.758 12:34:34 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:35.758 12:34:34 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:35.758 12:34:34 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:35.758 12:34:34 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:35.758 12:34:34 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:06:35.758 12:34:34 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:06:35.758 12:34:34 -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:06:35.758 12:34:34 -- target/discovery.sh@15 -- # hash nvme 00:06:35.758 12:34:34 -- target/discovery.sh@20 -- # nvmftestinit 00:06:35.758 12:34:34 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:06:35.758 12:34:34 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:35.758 12:34:34 -- nvmf/common.sh@437 -- # prepare_net_devs 00:06:35.758 12:34:34 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:06:35.758 12:34:34 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:06:35.758 12:34:34 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:35.758 12:34:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:35.758 12:34:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:35.758 12:34:34 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:06:35.758 12:34:34 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:06:35.758 12:34:34 -- nvmf/common.sh@285 -- # xtrace_disable 00:06:35.758 12:34:34 -- common/autotest_common.sh@10 -- # set +x 00:06:38.287 12:34:37 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:06:38.287 12:34:37 -- nvmf/common.sh@291 -- # pci_devs=() 00:06:38.287 12:34:37 -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:38.287 12:34:37 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:38.287 12:34:37 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:38.287 12:34:37 -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:38.287 12:34:37 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:38.287 12:34:37 -- 
nvmf/common.sh@295 -- # net_devs=() 00:06:38.287 12:34:37 -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:38.287 12:34:37 -- nvmf/common.sh@296 -- # e810=() 00:06:38.287 12:34:37 -- nvmf/common.sh@296 -- # local -ga e810 00:06:38.287 12:34:37 -- nvmf/common.sh@297 -- # x722=() 00:06:38.287 12:34:37 -- nvmf/common.sh@297 -- # local -ga x722 00:06:38.287 12:34:37 -- nvmf/common.sh@298 -- # mlx=() 00:06:38.287 12:34:37 -- nvmf/common.sh@298 -- # local -ga mlx 00:06:38.287 12:34:37 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:38.287 12:34:37 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:38.287 12:34:37 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:38.287 12:34:37 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:38.287 12:34:37 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:38.287 12:34:37 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:38.287 12:34:37 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:38.287 12:34:37 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:38.287 12:34:37 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:38.287 12:34:37 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:38.287 12:34:37 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:38.287 12:34:37 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:38.287 12:34:37 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:38.287 12:34:37 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:38.287 12:34:37 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:38.287 12:34:37 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:38.287 12:34:37 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:38.287 12:34:37 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:38.287 12:34:37 -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:06:38.287 Found 0000:82:00.0 (0x8086 - 0x159b) 00:06:38.287 12:34:37 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:38.287 12:34:37 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:38.287 12:34:37 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:38.287 12:34:37 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:38.287 12:34:37 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:38.287 12:34:37 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:38.287 12:34:37 -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:06:38.287 Found 0000:82:00.1 (0x8086 - 0x159b) 00:06:38.287 12:34:37 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:38.287 12:34:37 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:38.287 12:34:37 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:38.287 12:34:37 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:38.287 12:34:37 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:38.287 12:34:37 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:38.287 12:34:37 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:38.287 12:34:37 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:38.287 12:34:37 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:38.287 12:34:37 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:38.287 12:34:37 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:06:38.287 12:34:37 -- nvmf/common.sh@388 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:38.287 12:34:37 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:06:38.287 Found net devices under 0000:82:00.0: cvl_0_0 00:06:38.287 12:34:37 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:06:38.287 12:34:37 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:38.287 12:34:37 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:38.287 12:34:37 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:06:38.287 12:34:37 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:38.287 12:34:37 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:06:38.287 Found net devices under 0000:82:00.1: cvl_0_1 00:06:38.287 12:34:37 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:06:38.287 12:34:37 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:06:38.287 12:34:37 -- nvmf/common.sh@403 -- # is_hw=yes 00:06:38.287 12:34:37 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:06:38.287 12:34:37 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:06:38.287 12:34:37 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:06:38.287 12:34:37 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:38.287 12:34:37 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:38.287 12:34:37 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:38.287 12:34:37 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:38.287 12:34:37 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:38.287 12:34:37 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:38.287 12:34:37 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:38.287 12:34:37 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:38.287 12:34:37 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:38.287 12:34:37 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:38.287 12:34:37 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:38.287 12:34:37 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:38.287 12:34:37 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:38.287 12:34:37 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:38.287 12:34:37 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:38.287 12:34:37 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:38.287 12:34:37 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:38.287 12:34:37 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:38.287 12:34:37 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:38.287 12:34:37 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:38.287 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:38.287 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.261 ms 00:06:38.287 00:06:38.287 --- 10.0.0.2 ping statistics --- 00:06:38.287 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:38.287 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:06:38.287 12:34:37 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:38.287 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:38.287 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:06:38.287 00:06:38.287 --- 10.0.0.1 ping statistics --- 00:06:38.287 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:38.287 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:06:38.287 12:34:37 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:38.287 12:34:37 -- nvmf/common.sh@411 -- # return 0 00:06:38.287 12:34:37 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:06:38.287 12:34:37 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:38.287 12:34:37 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:06:38.287 12:34:37 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:06:38.287 12:34:37 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:38.287 12:34:37 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:06:38.287 12:34:37 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:06:38.287 12:34:37 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:06:38.287 12:34:37 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:06:38.287 12:34:37 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:38.287 12:34:37 -- common/autotest_common.sh@10 -- # set +x 00:06:38.287 12:34:37 -- nvmf/common.sh@470 -- # nvmfpid=1080529 00:06:38.287 12:34:37 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:38.287 12:34:37 -- nvmf/common.sh@471 -- # waitforlisten 1080529 00:06:38.287 12:34:37 -- common/autotest_common.sh@817 -- # '[' -z 1080529 ']' 00:06:38.287 12:34:37 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:38.287 12:34:37 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:38.287 12:34:37 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:38.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:38.287 12:34:37 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:38.287 12:34:37 -- common/autotest_common.sh@10 -- # set +x 00:06:38.287 [2024-04-16 12:34:37.320770] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 00:06:38.287 [2024-04-16 12:34:37.320840] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:38.546 EAL: No free 2048 kB hugepages reported on node 1 00:06:38.546 [2024-04-16 12:34:37.400903] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:38.546 [2024-04-16 12:34:37.520123] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:38.546 [2024-04-16 12:34:37.520198] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:38.546 [2024-04-16 12:34:37.520215] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:38.546 [2024-04-16 12:34:37.520229] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:38.546 [2024-04-16 12:34:37.520241] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
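[editor's note] The network plumbing interleaved through the nvmf_tcp_init trace above is easier to follow in isolation. A condensed sketch using the names and addresses this run happened to pick (cvl_0_0/cvl_0_1 are the two E810 ports; one is moved into a private namespace so target and initiator can share one host over real hardware):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
  ping -c 1 10.0.0.2                                   # verify reachability both ways
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The target app itself is then launched as `ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt ...` (NVMF_TARGET_NS_CMD), which is why it listens on 10.0.0.2 while nvme-cli connects from the default namespace.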
00:06:38.546 [2024-04-16 12:34:37.520326] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:38.546 [2024-04-16 12:34:37.520377] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:38.546 [2024-04-16 12:34:37.520431] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:38.546 [2024-04-16 12:34:37.520434] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.478 12:34:38 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:39.478 12:34:38 -- common/autotest_common.sh@850 -- # return 0 00:06:39.478 12:34:38 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:06:39.478 12:34:38 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:39.478 12:34:38 -- common/autotest_common.sh@10 -- # set +x 00:06:39.478 12:34:38 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:39.478 12:34:38 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:39.478 12:34:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:39.478 12:34:38 -- common/autotest_common.sh@10 -- # set +x 00:06:39.478 [2024-04-16 12:34:38.334746] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:39.478 12:34:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:39.478 12:34:38 -- target/discovery.sh@26 -- # seq 1 4 00:06:39.478 12:34:38 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:39.478 12:34:38 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:06:39.478 12:34:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:39.478 12:34:38 -- common/autotest_common.sh@10 -- # set +x 00:06:39.478 Null1 00:06:39.478 12:34:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:39.478 12:34:38 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:39.478 12:34:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:39.478 12:34:38 -- common/autotest_common.sh@10 -- # set +x 00:06:39.478 12:34:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:39.478 12:34:38 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:06:39.478 12:34:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:39.478 12:34:38 -- common/autotest_common.sh@10 -- # set +x 00:06:39.478 12:34:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:39.478 12:34:38 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:39.478 12:34:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:39.478 12:34:38 -- common/autotest_common.sh@10 -- # set +x 00:06:39.478 [2024-04-16 12:34:38.374993] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:39.478 12:34:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:39.478 12:34:38 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:39.478 12:34:38 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:06:39.478 12:34:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:39.478 12:34:38 -- common/autotest_common.sh@10 -- # set +x 00:06:39.478 Null2 00:06:39.478 12:34:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:39.478 12:34:38 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:06:39.478 12:34:38 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:06:39.478 12:34:38 -- common/autotest_common.sh@10 -- # set +x 00:06:39.478 12:34:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:39.478 12:34:38 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:06:39.478 12:34:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:39.478 12:34:38 -- common/autotest_common.sh@10 -- # set +x 00:06:39.478 12:34:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:39.478 12:34:38 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:06:39.478 12:34:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:39.478 12:34:38 -- common/autotest_common.sh@10 -- # set +x 00:06:39.478 12:34:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:39.478 12:34:38 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:39.478 12:34:38 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:06:39.478 12:34:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:39.478 12:34:38 -- common/autotest_common.sh@10 -- # set +x 00:06:39.478 Null3 00:06:39.478 12:34:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:39.478 12:34:38 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:06:39.478 12:34:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:39.478 12:34:38 -- common/autotest_common.sh@10 -- # set +x 00:06:39.478 12:34:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:39.478 12:34:38 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:06:39.478 12:34:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:39.478 12:34:38 -- common/autotest_common.sh@10 -- # set +x 00:06:39.478 12:34:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:39.478 12:34:38 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:06:39.478 12:34:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:39.478 12:34:38 -- common/autotest_common.sh@10 -- # set +x 00:06:39.478 12:34:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:39.478 12:34:38 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:39.478 12:34:38 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:06:39.478 12:34:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:39.478 12:34:38 -- common/autotest_common.sh@10 -- # set +x 00:06:39.478 Null4 00:06:39.478 12:34:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:39.478 12:34:38 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:06:39.478 12:34:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:39.478 12:34:38 -- common/autotest_common.sh@10 -- # set +x 00:06:39.478 12:34:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:39.478 12:34:38 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:06:39.478 12:34:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:39.478 12:34:38 -- common/autotest_common.sh@10 -- # set +x 00:06:39.478 12:34:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:39.478 12:34:38 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:06:39.478 
12:34:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:39.478 12:34:38 -- common/autotest_common.sh@10 -- # set +x 00:06:39.478 12:34:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:39.478 12:34:38 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:39.478 12:34:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:39.478 12:34:38 -- common/autotest_common.sh@10 -- # set +x 00:06:39.478 12:34:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:39.478 12:34:38 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:06:39.478 12:34:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:39.478 12:34:38 -- common/autotest_common.sh@10 -- # set +x 00:06:39.478 12:34:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:39.479 12:34:38 -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -a 10.0.0.2 -s 4420 00:06:39.736 00:06:39.736 Discovery Log Number of Records 6, Generation counter 6 00:06:39.736 =====Discovery Log Entry 0====== 00:06:39.736 trtype: tcp 00:06:39.736 adrfam: ipv4 00:06:39.736 subtype: current discovery subsystem 00:06:39.736 treq: not required 00:06:39.736 portid: 0 00:06:39.736 trsvcid: 4420 00:06:39.736 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:06:39.736 traddr: 10.0.0.2 00:06:39.736 eflags: explicit discovery connections, duplicate discovery information 00:06:39.736 sectype: none 00:06:39.736 =====Discovery Log Entry 1====== 00:06:39.736 trtype: tcp 00:06:39.736 adrfam: ipv4 00:06:39.736 subtype: nvme subsystem 00:06:39.736 treq: not required 00:06:39.736 portid: 0 00:06:39.736 trsvcid: 4420 00:06:39.736 subnqn: nqn.2016-06.io.spdk:cnode1 00:06:39.736 traddr: 10.0.0.2 00:06:39.736 eflags: none 00:06:39.736 sectype: none 00:06:39.736 =====Discovery Log Entry 2====== 00:06:39.736 trtype: tcp 00:06:39.736 adrfam: ipv4 00:06:39.736 subtype: nvme subsystem 00:06:39.736 treq: not required 00:06:39.736 portid: 0 00:06:39.736 trsvcid: 4420 00:06:39.736 subnqn: nqn.2016-06.io.spdk:cnode2 00:06:39.736 traddr: 10.0.0.2 00:06:39.736 eflags: none 00:06:39.736 sectype: none 00:06:39.736 =====Discovery Log Entry 3====== 00:06:39.736 trtype: tcp 00:06:39.736 adrfam: ipv4 00:06:39.736 subtype: nvme subsystem 00:06:39.736 treq: not required 00:06:39.736 portid: 0 00:06:39.736 trsvcid: 4420 00:06:39.736 subnqn: nqn.2016-06.io.spdk:cnode3 00:06:39.736 traddr: 10.0.0.2 00:06:39.736 eflags: none 00:06:39.736 sectype: none 00:06:39.736 =====Discovery Log Entry 4====== 00:06:39.736 trtype: tcp 00:06:39.736 adrfam: ipv4 00:06:39.736 subtype: nvme subsystem 00:06:39.736 treq: not required 00:06:39.736 portid: 0 00:06:39.736 trsvcid: 4420 00:06:39.736 subnqn: nqn.2016-06.io.spdk:cnode4 00:06:39.736 traddr: 10.0.0.2 00:06:39.736 eflags: none 00:06:39.736 sectype: none 00:06:39.736 =====Discovery Log Entry 5====== 00:06:39.736 trtype: tcp 00:06:39.736 adrfam: ipv4 00:06:39.736 subtype: discovery subsystem referral 00:06:39.736 treq: not required 00:06:39.736 portid: 0 00:06:39.736 trsvcid: 4430 00:06:39.736 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:06:39.736 traddr: 10.0.0.2 00:06:39.736 eflags: none 00:06:39.736 sectype: none 00:06:39.736 12:34:38 -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:06:39.736 Perform nvmf subsystem discovery via RPC 00:06:39.736 12:34:38 -- 
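[editor's note] The provisioning that produced the six-record discovery log above is a short RPC sequence. A sketch, assuming rpc_cmd resolves to SPDK's scripts/rpc.py against the default /var/tmp/spdk.sock of the nvmf_tgt started earlier (the --hostnqn/--hostid flags from the trace are omitted from the discover call for brevity):

  rpc.py nvmf_create_transport -t tcp -o -u 8192       # flags as used by nvmftestinit in this run
  for i in 1 2 3 4; do
    rpc.py bdev_null_create "Null$i" 102400 512        # NULL_BDEV_SIZE / NULL_BLOCK_SIZE from the script
    rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
           -a -s "SPDK0000000000000$i"                 # -a: allow any host
    rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
    rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
           -t tcp -a 10.0.0.2 -s 4420
  done
  rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
  nvme discover -t tcp -a 10.0.0.2 -s 4420             # -> 6 records: discovery, cnode1-4, referral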
target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:06:39.736 12:34:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:39.736 12:34:38 -- common/autotest_common.sh@10 -- # set +x 00:06:39.736 [2024-04-16 12:34:38.579435] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:06:39.736 [ 00:06:39.736 { 00:06:39.736 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:06:39.736 "subtype": "Discovery", 00:06:39.736 "listen_addresses": [ 00:06:39.736 { 00:06:39.736 "transport": "TCP", 00:06:39.736 "trtype": "TCP", 00:06:39.736 "adrfam": "IPv4", 00:06:39.736 "traddr": "10.0.0.2", 00:06:39.736 "trsvcid": "4420" 00:06:39.736 } 00:06:39.736 ], 00:06:39.736 "allow_any_host": true, 00:06:39.736 "hosts": [] 00:06:39.736 }, 00:06:39.736 { 00:06:39.737 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:06:39.737 "subtype": "NVMe", 00:06:39.737 "listen_addresses": [ 00:06:39.737 { 00:06:39.737 "transport": "TCP", 00:06:39.737 "trtype": "TCP", 00:06:39.737 "adrfam": "IPv4", 00:06:39.737 "traddr": "10.0.0.2", 00:06:39.737 "trsvcid": "4420" 00:06:39.737 } 00:06:39.737 ], 00:06:39.737 "allow_any_host": true, 00:06:39.737 "hosts": [], 00:06:39.737 "serial_number": "SPDK00000000000001", 00:06:39.737 "model_number": "SPDK bdev Controller", 00:06:39.737 "max_namespaces": 32, 00:06:39.737 "min_cntlid": 1, 00:06:39.737 "max_cntlid": 65519, 00:06:39.737 "namespaces": [ 00:06:39.737 { 00:06:39.737 "nsid": 1, 00:06:39.737 "bdev_name": "Null1", 00:06:39.737 "name": "Null1", 00:06:39.737 "nguid": "8EE37F52F9E543ABA3B7CD3008CABF49", 00:06:39.737 "uuid": "8ee37f52-f9e5-43ab-a3b7-cd3008cabf49" 00:06:39.737 } 00:06:39.737 ] 00:06:39.737 }, 00:06:39.737 { 00:06:39.737 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:06:39.737 "subtype": "NVMe", 00:06:39.737 "listen_addresses": [ 00:06:39.737 { 00:06:39.737 "transport": "TCP", 00:06:39.737 "trtype": "TCP", 00:06:39.737 "adrfam": "IPv4", 00:06:39.737 "traddr": "10.0.0.2", 00:06:39.737 "trsvcid": "4420" 00:06:39.737 } 00:06:39.737 ], 00:06:39.737 "allow_any_host": true, 00:06:39.737 "hosts": [], 00:06:39.737 "serial_number": "SPDK00000000000002", 00:06:39.737 "model_number": "SPDK bdev Controller", 00:06:39.737 "max_namespaces": 32, 00:06:39.737 "min_cntlid": 1, 00:06:39.737 "max_cntlid": 65519, 00:06:39.737 "namespaces": [ 00:06:39.737 { 00:06:39.737 "nsid": 1, 00:06:39.737 "bdev_name": "Null2", 00:06:39.737 "name": "Null2", 00:06:39.737 "nguid": "32FC4E2AC9F340DCA1B923B05B54CF9B", 00:06:39.737 "uuid": "32fc4e2a-c9f3-40dc-a1b9-23b05b54cf9b" 00:06:39.737 } 00:06:39.737 ] 00:06:39.737 }, 00:06:39.737 { 00:06:39.737 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:06:39.737 "subtype": "NVMe", 00:06:39.737 "listen_addresses": [ 00:06:39.737 { 00:06:39.737 "transport": "TCP", 00:06:39.737 "trtype": "TCP", 00:06:39.737 "adrfam": "IPv4", 00:06:39.737 "traddr": "10.0.0.2", 00:06:39.737 "trsvcid": "4420" 00:06:39.737 } 00:06:39.737 ], 00:06:39.737 "allow_any_host": true, 00:06:39.737 "hosts": [], 00:06:39.737 "serial_number": "SPDK00000000000003", 00:06:39.737 "model_number": "SPDK bdev Controller", 00:06:39.737 "max_namespaces": 32, 00:06:39.737 "min_cntlid": 1, 00:06:39.737 "max_cntlid": 65519, 00:06:39.737 "namespaces": [ 00:06:39.737 { 00:06:39.737 "nsid": 1, 00:06:39.737 "bdev_name": "Null3", 00:06:39.737 "name": "Null3", 00:06:39.737 "nguid": "2692127174B84DFA81313FC15AAD8CDA", 00:06:39.737 "uuid": "26921271-74b8-4dfa-8131-3fc15aad8cda" 00:06:39.737 } 00:06:39.737 ] 
00:06:39.737 }, 00:06:39.737 { 00:06:39.737 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:06:39.737 "subtype": "NVMe", 00:06:39.737 "listen_addresses": [ 00:06:39.737 { 00:06:39.737 "transport": "TCP", 00:06:39.737 "trtype": "TCP", 00:06:39.737 "adrfam": "IPv4", 00:06:39.737 "traddr": "10.0.0.2", 00:06:39.737 "trsvcid": "4420" 00:06:39.737 } 00:06:39.737 ], 00:06:39.737 "allow_any_host": true, 00:06:39.737 "hosts": [], 00:06:39.737 "serial_number": "SPDK00000000000004", 00:06:39.737 "model_number": "SPDK bdev Controller", 00:06:39.737 "max_namespaces": 32, 00:06:39.737 "min_cntlid": 1, 00:06:39.737 "max_cntlid": 65519, 00:06:39.737 "namespaces": [ 00:06:39.737 { 00:06:39.737 "nsid": 1, 00:06:39.737 "bdev_name": "Null4", 00:06:39.737 "name": "Null4", 00:06:39.737 "nguid": "34A6542E70704148A839ED5128A921CA", 00:06:39.737 "uuid": "34a6542e-7070-4148-a839-ed5128a921ca" 00:06:39.737 } 00:06:39.737 ] 00:06:39.737 } 00:06:39.737 ] 00:06:39.737 12:34:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:39.737 12:34:38 -- target/discovery.sh@42 -- # seq 1 4 00:06:39.737 12:34:38 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:39.737 12:34:38 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:39.737 12:34:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:39.737 12:34:38 -- common/autotest_common.sh@10 -- # set +x 00:06:39.737 12:34:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:39.737 12:34:38 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:06:39.737 12:34:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:39.737 12:34:38 -- common/autotest_common.sh@10 -- # set +x 00:06:39.737 12:34:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:39.737 12:34:38 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:39.737 12:34:38 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:06:39.737 12:34:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:39.737 12:34:38 -- common/autotest_common.sh@10 -- # set +x 00:06:39.737 12:34:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:39.737 12:34:38 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:06:39.737 12:34:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:39.737 12:34:38 -- common/autotest_common.sh@10 -- # set +x 00:06:39.737 12:34:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:39.737 12:34:38 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:39.737 12:34:38 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:06:39.737 12:34:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:39.737 12:34:38 -- common/autotest_common.sh@10 -- # set +x 00:06:39.737 12:34:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:39.737 12:34:38 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:06:39.737 12:34:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:39.737 12:34:38 -- common/autotest_common.sh@10 -- # set +x 00:06:39.737 12:34:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:39.737 12:34:38 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:39.737 12:34:38 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:06:39.737 12:34:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:39.737 12:34:38 -- common/autotest_common.sh@10 -- # set +x 00:06:39.737 12:34:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
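[editor's note] Since nvmf_get_subsystems returns plain JSON, the assertions in these tests are mostly jq one-liners. Illustrative examples against the payload printed above, with field names taken from that output (same rpc.py assumption as before):

  rpc.py nvmf_get_subsystems | jq -r '.[].nqn'
  # -> nqn.2014-08.org.nvmexpress.discovery, then cnode1..cnode4
  rpc.py nvmf_get_subsystems \
      | jq -r '.[] | select(.subtype == "NVMe") | .namespaces[].bdev_name'
  # -> Null1 Null2 Null3 Null4

The teardown that follows is the exact mirror: nvmf_delete_subsystem per cnode, bdev_null_delete per Null bdev, and nvmf_discovery_remove_referral for the 4430 referral, after which `bdev_get_bdevs | jq -r '.[].name'` must come back empty.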
00:06:39.737 12:34:38 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:06:39.737 12:34:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:39.737 12:34:38 -- common/autotest_common.sh@10 -- # set +x 00:06:39.737 12:34:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:39.737 12:34:38 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:06:39.737 12:34:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:39.737 12:34:38 -- common/autotest_common.sh@10 -- # set +x 00:06:39.737 12:34:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:39.737 12:34:38 -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:06:39.737 12:34:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:39.737 12:34:38 -- common/autotest_common.sh@10 -- # set +x 00:06:39.737 12:34:38 -- target/discovery.sh@49 -- # jq -r '.[].name' 00:06:39.737 12:34:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:39.737 12:34:38 -- target/discovery.sh@49 -- # check_bdevs= 00:06:39.737 12:34:38 -- target/discovery.sh@50 -- # '[' -n '' ']' 00:06:39.737 12:34:38 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:06:39.737 12:34:38 -- target/discovery.sh@57 -- # nvmftestfini 00:06:39.737 12:34:38 -- nvmf/common.sh@477 -- # nvmfcleanup 00:06:39.737 12:34:38 -- nvmf/common.sh@117 -- # sync 00:06:39.737 12:34:38 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:39.737 12:34:38 -- nvmf/common.sh@120 -- # set +e 00:06:39.737 12:34:38 -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:39.737 12:34:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:39.737 rmmod nvme_tcp 00:06:39.737 rmmod nvme_fabrics 00:06:39.737 rmmod nvme_keyring 00:06:39.737 12:34:38 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:39.737 12:34:38 -- nvmf/common.sh@124 -- # set -e 00:06:39.737 12:34:38 -- nvmf/common.sh@125 -- # return 0 00:06:39.737 12:34:38 -- nvmf/common.sh@478 -- # '[' -n 1080529 ']' 00:06:39.737 12:34:38 -- nvmf/common.sh@479 -- # killprocess 1080529 00:06:39.737 12:34:38 -- common/autotest_common.sh@936 -- # '[' -z 1080529 ']' 00:06:39.737 12:34:38 -- common/autotest_common.sh@940 -- # kill -0 1080529 00:06:39.737 12:34:38 -- common/autotest_common.sh@941 -- # uname 00:06:39.737 12:34:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:39.737 12:34:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1080529 00:06:39.737 12:34:38 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:39.737 12:34:38 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:39.737 12:34:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1080529' 00:06:39.737 killing process with pid 1080529 00:06:39.737 12:34:38 -- common/autotest_common.sh@955 -- # kill 1080529 00:06:39.737 [2024-04-16 12:34:38.796306] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:06:39.737 12:34:38 -- common/autotest_common.sh@960 -- # wait 1080529 00:06:40.017 12:34:39 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:06:40.017 12:34:39 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:06:40.017 12:34:39 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:06:40.017 12:34:39 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:40.017 12:34:39 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:40.017 12:34:39 -- nvmf/common.sh@617 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:06:40.017 12:34:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:40.017 12:34:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:42.546 12:34:41 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:42.546 00:06:42.546 real 0m6.565s 00:06:42.546 user 0m7.235s 00:06:42.546 sys 0m2.223s 00:06:42.546 12:34:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:42.546 12:34:41 -- common/autotest_common.sh@10 -- # set +x 00:06:42.546 ************************************ 00:06:42.546 END TEST nvmf_discovery 00:06:42.546 ************************************ 00:06:42.546 12:34:41 -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:06:42.546 12:34:41 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:42.546 12:34:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:42.546 12:34:41 -- common/autotest_common.sh@10 -- # set +x 00:06:42.546 ************************************ 00:06:42.546 START TEST nvmf_referrals 00:06:42.546 ************************************ 00:06:42.546 12:34:41 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:06:42.546 * Looking for test storage... 00:06:42.546 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:42.546 12:34:41 -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:42.546 12:34:41 -- nvmf/common.sh@7 -- # uname -s 00:06:42.546 12:34:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:42.546 12:34:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:42.546 12:34:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:42.546 12:34:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:42.546 12:34:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:42.546 12:34:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:42.546 12:34:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:42.546 12:34:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:42.546 12:34:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:42.546 12:34:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:42.546 12:34:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:06:42.546 12:34:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:06:42.546 12:34:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:42.547 12:34:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:42.547 12:34:41 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:42.547 12:34:41 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:42.547 12:34:41 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:42.547 12:34:41 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:42.547 12:34:41 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:42.547 12:34:41 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:42.547 12:34:41 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.547 12:34:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.547 12:34:41 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.547 12:34:41 -- paths/export.sh@5 -- # export PATH 00:06:42.547 12:34:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.547 12:34:41 -- nvmf/common.sh@47 -- # : 0 00:06:42.547 12:34:41 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:42.547 12:34:41 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:42.547 12:34:41 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:42.547 12:34:41 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:42.547 12:34:41 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:42.547 12:34:41 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:42.547 12:34:41 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:42.547 12:34:41 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:42.547 12:34:41 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:06:42.547 12:34:41 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:06:42.547 12:34:41 -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:06:42.547 12:34:41 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:06:42.547 12:34:41 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:06:42.547 12:34:41 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:06:42.547 12:34:41 -- target/referrals.sh@37 -- # nvmftestinit 00:06:42.547 12:34:41 -- nvmf/common.sh@430 -- # '[' 
-z tcp ']' 00:06:42.547 12:34:41 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:42.547 12:34:41 -- nvmf/common.sh@437 -- # prepare_net_devs 00:06:42.547 12:34:41 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:06:42.547 12:34:41 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:06:42.547 12:34:41 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:42.547 12:34:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:42.547 12:34:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:42.547 12:34:41 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:06:42.547 12:34:41 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:06:42.547 12:34:41 -- nvmf/common.sh@285 -- # xtrace_disable 00:06:42.547 12:34:41 -- common/autotest_common.sh@10 -- # set +x 00:06:45.076 12:34:43 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:06:45.076 12:34:43 -- nvmf/common.sh@291 -- # pci_devs=() 00:06:45.076 12:34:43 -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:45.076 12:34:43 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:45.076 12:34:43 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:45.076 12:34:43 -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:45.076 12:34:43 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:45.076 12:34:43 -- nvmf/common.sh@295 -- # net_devs=() 00:06:45.076 12:34:43 -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:45.076 12:34:43 -- nvmf/common.sh@296 -- # e810=() 00:06:45.077 12:34:43 -- nvmf/common.sh@296 -- # local -ga e810 00:06:45.077 12:34:43 -- nvmf/common.sh@297 -- # x722=() 00:06:45.077 12:34:43 -- nvmf/common.sh@297 -- # local -ga x722 00:06:45.077 12:34:43 -- nvmf/common.sh@298 -- # mlx=() 00:06:45.077 12:34:43 -- nvmf/common.sh@298 -- # local -ga mlx 00:06:45.077 12:34:43 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:45.077 12:34:43 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:45.077 12:34:43 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:45.077 12:34:43 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:45.077 12:34:43 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:45.077 12:34:43 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:45.077 12:34:43 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:45.077 12:34:43 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:45.077 12:34:43 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:45.077 12:34:43 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:45.077 12:34:43 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:45.077 12:34:43 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:45.077 12:34:43 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:45.077 12:34:43 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:45.077 12:34:43 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:45.077 12:34:43 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:45.077 12:34:43 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:45.077 12:34:43 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:45.077 12:34:43 -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:06:45.077 Found 0000:82:00.0 (0x8086 - 0x159b) 00:06:45.077 12:34:43 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:45.077 12:34:43 -- 
nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:45.077 12:34:43 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:45.077 12:34:43 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:45.077 12:34:43 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:45.077 12:34:43 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:45.077 12:34:43 -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:06:45.077 Found 0000:82:00.1 (0x8086 - 0x159b) 00:06:45.077 12:34:43 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:45.077 12:34:43 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:45.077 12:34:43 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:45.077 12:34:43 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:45.077 12:34:43 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:45.077 12:34:43 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:45.077 12:34:43 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:45.077 12:34:43 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:45.077 12:34:43 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:45.077 12:34:43 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:45.077 12:34:43 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:06:45.077 12:34:43 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:45.077 12:34:43 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:06:45.077 Found net devices under 0000:82:00.0: cvl_0_0 00:06:45.077 12:34:43 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:06:45.077 12:34:43 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:45.077 12:34:43 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:45.077 12:34:43 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:06:45.077 12:34:43 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:45.077 12:34:43 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:06:45.077 Found net devices under 0000:82:00.1: cvl_0_1 00:06:45.077 12:34:43 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:06:45.077 12:34:43 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:06:45.077 12:34:43 -- nvmf/common.sh@403 -- # is_hw=yes 00:06:45.077 12:34:43 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:06:45.077 12:34:43 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:06:45.077 12:34:43 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:06:45.077 12:34:43 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:45.077 12:34:43 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:45.077 12:34:43 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:45.077 12:34:43 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:45.077 12:34:43 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:45.077 12:34:43 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:45.077 12:34:43 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:45.077 12:34:43 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:45.077 12:34:43 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:45.077 12:34:43 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:45.077 12:34:43 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:45.077 12:34:43 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:45.077 12:34:43 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
00:06:45.077 12:34:43 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:45.077 12:34:43 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:45.077 12:34:43 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:45.077 12:34:43 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:45.077 12:34:43 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:45.077 12:34:43 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:45.077 12:34:43 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:45.077 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:45.077 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.246 ms 00:06:45.077 00:06:45.077 --- 10.0.0.2 ping statistics --- 00:06:45.077 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:45.077 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:06:45.077 12:34:43 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:45.077 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:45.077 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.187 ms 00:06:45.077 00:06:45.077 --- 10.0.0.1 ping statistics --- 00:06:45.077 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:45.077 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:06:45.077 12:34:43 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:45.077 12:34:43 -- nvmf/common.sh@411 -- # return 0 00:06:45.077 12:34:43 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:06:45.077 12:34:43 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:45.077 12:34:43 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:06:45.077 12:34:43 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:06:45.077 12:34:43 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:45.077 12:34:43 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:06:45.077 12:34:43 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:06:45.077 12:34:43 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:06:45.077 12:34:43 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:06:45.077 12:34:43 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:45.077 12:34:43 -- common/autotest_common.sh@10 -- # set +x 00:06:45.077 12:34:43 -- nvmf/common.sh@470 -- # nvmfpid=1083042 00:06:45.077 12:34:43 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:45.077 12:34:43 -- nvmf/common.sh@471 -- # waitforlisten 1083042 00:06:45.077 12:34:43 -- common/autotest_common.sh@817 -- # '[' -z 1083042 ']' 00:06:45.077 12:34:43 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.077 12:34:43 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:45.077 12:34:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:45.077 12:34:43 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:45.077 12:34:43 -- common/autotest_common.sh@10 -- # set +x 00:06:45.077 [2024-04-16 12:34:43.910452] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 
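[editor's note] nvmfappstart backgrounds nvmf_tgt inside the namespace and then blocks in waitforlisten until the RPC socket answers. A minimal sketch of that wait loop, using the knobs visible in the trace (max_retries=100, rpc_addr=/var/tmp/spdk.sock) and the generic rpc_get_methods call as the probe; the real helper in autotest_common.sh may differ in detail:

  waitforlisten_sketch() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
    for ((i = 100; i > 0; i--)); do
      kill -0 "$pid" 2>/dev/null || return 1           # app died during startup
      scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
      sleep 0.5
    done
    return 1                                           # never started listening
  }
  waitforlisten_sketch "$nvmfpid"                      # e.g. pid 1083042 in this run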
00:06:45.077 [2024-04-16 12:34:43.910531] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:45.077 EAL: No free 2048 kB hugepages reported on node 1 00:06:45.077 [2024-04-16 12:34:43.990744] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:45.077 [2024-04-16 12:34:44.103938] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:45.077 [2024-04-16 12:34:44.104003] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:45.077 [2024-04-16 12:34:44.104017] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:45.077 [2024-04-16 12:34:44.104030] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:45.077 [2024-04-16 12:34:44.104041] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:45.077 [2024-04-16 12:34:44.104198] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:45.077 [2024-04-16 12:34:44.104255] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:45.077 [2024-04-16 12:34:44.104332] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:45.077 [2024-04-16 12:34:44.104335] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.009 12:34:44 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:46.009 12:34:44 -- common/autotest_common.sh@850 -- # return 0 00:06:46.009 12:34:44 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:06:46.009 12:34:44 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:46.009 12:34:44 -- common/autotest_common.sh@10 -- # set +x 00:06:46.009 12:34:44 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:46.009 12:34:44 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:46.009 12:34:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:46.009 12:34:44 -- common/autotest_common.sh@10 -- # set +x 00:06:46.010 [2024-04-16 12:34:44.917501] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:46.010 12:34:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:46.010 12:34:44 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:06:46.010 12:34:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:46.010 12:34:44 -- common/autotest_common.sh@10 -- # set +x 00:06:46.010 [2024-04-16 12:34:44.929737] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:06:46.010 12:34:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:46.010 12:34:44 -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:06:46.010 12:34:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:46.010 12:34:44 -- common/autotest_common.sh@10 -- # set +x 00:06:46.010 12:34:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:46.010 12:34:44 -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:06:46.010 12:34:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:46.010 12:34:44 -- common/autotest_common.sh@10 -- # set +x 00:06:46.010 12:34:44 -- common/autotest_common.sh@577 -- # 
[[ 0 == 0 ]] 00:06:46.010 12:34:44 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:06:46.010 12:34:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:46.010 12:34:44 -- common/autotest_common.sh@10 -- # set +x 00:06:46.010 12:34:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:46.010 12:34:44 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:46.010 12:34:44 -- target/referrals.sh@48 -- # jq length 00:06:46.010 12:34:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:46.010 12:34:44 -- common/autotest_common.sh@10 -- # set +x 00:06:46.010 12:34:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:46.010 12:34:44 -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:06:46.010 12:34:44 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:06:46.010 12:34:44 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:06:46.010 12:34:45 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:46.010 12:34:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:46.010 12:34:45 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:06:46.010 12:34:45 -- common/autotest_common.sh@10 -- # set +x 00:06:46.010 12:34:45 -- target/referrals.sh@21 -- # sort 00:06:46.010 12:34:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:46.010 12:34:45 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:06:46.010 12:34:45 -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:06:46.010 12:34:45 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:06:46.010 12:34:45 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:46.010 12:34:45 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:46.010 12:34:45 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:46.010 12:34:45 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:46.010 12:34:45 -- target/referrals.sh@26 -- # sort 00:06:46.281 12:34:45 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:06:46.281 12:34:45 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:06:46.281 12:34:45 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:06:46.281 12:34:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:46.281 12:34:45 -- common/autotest_common.sh@10 -- # set +x 00:06:46.281 12:34:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:46.281 12:34:45 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:06:46.281 12:34:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:46.281 12:34:45 -- common/autotest_common.sh@10 -- # set +x 00:06:46.281 12:34:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:46.281 12:34:45 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:06:46.281 12:34:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:46.281 12:34:45 -- common/autotest_common.sh@10 -- # set +x 00:06:46.281 12:34:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:46.281 12:34:45 -- target/referrals.sh@56 -- # rpc_cmd 
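[editor's note] The referral round-trip being exercised here is compact when pulled out of the trace (same rpc.py assumption). Note the referrals point at 127.0.0.x:4430 where nothing listens; a referral is just an advertised discovery-log entry, so the target never connects to it:

  rpc.py nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
    rpc.py nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
  done
  rpc.py nvmf_discovery_get_referrals | jq length                    # expect 3
  rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr'   # the three IPs, sorted by the test
  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
    rpc.py nvmf_discovery_remove_referral -t tcp -a "$ip" -s 4430
  done
  rpc.py nvmf_discovery_get_referrals | jq length                    # back to 0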
nvmf_discovery_get_referrals 00:06:46.281 12:34:45 -- target/referrals.sh@56 -- # jq length 00:06:46.281 12:34:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:46.281 12:34:45 -- common/autotest_common.sh@10 -- # set +x 00:06:46.281 12:34:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:46.281 12:34:45 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:06:46.281 12:34:45 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:06:46.281 12:34:45 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:46.281 12:34:45 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:46.281 12:34:45 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:46.281 12:34:45 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:46.281 12:34:45 -- target/referrals.sh@26 -- # sort 00:06:46.542 12:34:45 -- target/referrals.sh@26 -- # echo 00:06:46.542 12:34:45 -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:06:46.542 12:34:45 -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:06:46.542 12:34:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:46.542 12:34:45 -- common/autotest_common.sh@10 -- # set +x 00:06:46.542 12:34:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:46.542 12:34:45 -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:06:46.542 12:34:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:46.542 12:34:45 -- common/autotest_common.sh@10 -- # set +x 00:06:46.542 12:34:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:46.542 12:34:45 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:06:46.542 12:34:45 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:06:46.542 12:34:45 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:46.542 12:34:45 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:06:46.542 12:34:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:46.542 12:34:45 -- common/autotest_common.sh@10 -- # set +x 00:06:46.542 12:34:45 -- target/referrals.sh@21 -- # sort 00:06:46.542 12:34:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:46.542 12:34:45 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:06:46.542 12:34:45 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:06:46.542 12:34:45 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:06:46.542 12:34:45 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:46.542 12:34:45 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:46.542 12:34:45 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:46.543 12:34:45 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:46.543 12:34:45 -- target/referrals.sh@26 -- # sort 00:06:46.543 12:34:45 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:06:46.543 12:34:45 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:06:46.543 12:34:45 -- target/referrals.sh@67 -- # get_discovery_entries 'nvme 
subsystem' 00:06:46.543 12:34:45 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:06:46.543 12:34:45 -- target/referrals.sh@67 -- # jq -r .subnqn 00:06:46.543 12:34:45 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:46.543 12:34:45 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:06:46.801 12:34:45 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:06:46.801 12:34:45 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:06:46.801 12:34:45 -- target/referrals.sh@68 -- # jq -r .subnqn 00:06:46.801 12:34:45 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:06:46.801 12:34:45 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:46.801 12:34:45 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:06:47.063 12:34:45 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:06:47.063 12:34:45 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:06:47.063 12:34:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:47.063 12:34:45 -- common/autotest_common.sh@10 -- # set +x 00:06:47.063 12:34:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:47.063 12:34:45 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:06:47.063 12:34:45 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:06:47.063 12:34:45 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:47.064 12:34:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:47.064 12:34:45 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:06:47.064 12:34:45 -- target/referrals.sh@21 -- # sort 00:06:47.064 12:34:45 -- common/autotest_common.sh@10 -- # set +x 00:06:47.064 12:34:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:47.064 12:34:45 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:06:47.064 12:34:45 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:06:47.064 12:34:45 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:06:47.064 12:34:45 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:47.064 12:34:45 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:47.064 12:34:45 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:47.064 12:34:45 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:47.064 12:34:45 -- target/referrals.sh@26 -- # sort 00:06:47.064 12:34:46 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:06:47.064 12:34:46 -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:06:47.064 12:34:46 -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:06:47.064 12:34:46 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:06:47.064 12:34:46 -- 
target/referrals.sh@75 -- # jq -r .subnqn 00:06:47.064 12:34:46 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:47.064 12:34:46 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:06:47.321 12:34:46 -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:06:47.321 12:34:46 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:06:47.321 12:34:46 -- target/referrals.sh@76 -- # jq -r .subnqn 00:06:47.321 12:34:46 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:06:47.321 12:34:46 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:47.321 12:34:46 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:06:47.321 12:34:46 -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:06:47.321 12:34:46 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:06:47.321 12:34:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:47.321 12:34:46 -- common/autotest_common.sh@10 -- # set +x 00:06:47.321 12:34:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:47.322 12:34:46 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:47.322 12:34:46 -- target/referrals.sh@82 -- # jq length 00:06:47.322 12:34:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:47.322 12:34:46 -- common/autotest_common.sh@10 -- # set +x 00:06:47.322 12:34:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:47.579 12:34:46 -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:06:47.579 12:34:46 -- target/referrals.sh@83 -- # get_referral_ips nvme 00:06:47.579 12:34:46 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:47.579 12:34:46 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:47.579 12:34:46 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:47.579 12:34:46 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:47.579 12:34:46 -- target/referrals.sh@26 -- # sort 00:06:47.579 12:34:46 -- target/referrals.sh@26 -- # echo 00:06:47.579 12:34:46 -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:06:47.579 12:34:46 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:06:47.579 12:34:46 -- target/referrals.sh@86 -- # nvmftestfini 00:06:47.579 12:34:46 -- nvmf/common.sh@477 -- # nvmfcleanup 00:06:47.579 12:34:46 -- nvmf/common.sh@117 -- # sync 00:06:47.579 12:34:46 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:47.579 12:34:46 -- nvmf/common.sh@120 -- # set +e 00:06:47.579 12:34:46 -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:47.579 12:34:46 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:47.579 rmmod nvme_tcp 00:06:47.579 rmmod nvme_fabrics 00:06:47.579 rmmod nvme_keyring 00:06:47.579 12:34:46 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:47.579 12:34:46 -- nvmf/common.sh@124 -- # 
set -e 00:06:47.579 12:34:46 -- nvmf/common.sh@125 -- # return 0 00:06:47.579 12:34:46 -- nvmf/common.sh@478 -- # '[' -n 1083042 ']' 00:06:47.579 12:34:46 -- nvmf/common.sh@479 -- # killprocess 1083042 00:06:47.579 12:34:46 -- common/autotest_common.sh@936 -- # '[' -z 1083042 ']' 00:06:47.579 12:34:46 -- common/autotest_common.sh@940 -- # kill -0 1083042 00:06:47.579 12:34:46 -- common/autotest_common.sh@941 -- # uname 00:06:47.579 12:34:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:47.579 12:34:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1083042 00:06:47.579 12:34:46 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:47.579 12:34:46 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:47.579 12:34:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1083042' 00:06:47.579 killing process with pid 1083042 00:06:47.579 12:34:46 -- common/autotest_common.sh@955 -- # kill 1083042 00:06:47.579 12:34:46 -- common/autotest_common.sh@960 -- # wait 1083042 00:06:47.838 12:34:46 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:06:47.838 12:34:46 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:06:47.838 12:34:46 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:06:47.838 12:34:46 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:47.838 12:34:46 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:47.838 12:34:46 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:47.838 12:34:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:47.838 12:34:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:50.391 12:34:48 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:50.391 00:06:50.391 real 0m7.642s 00:06:50.391 user 0m12.524s 00:06:50.391 sys 0m2.427s 00:06:50.391 12:34:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:50.391 12:34:48 -- common/autotest_common.sh@10 -- # set +x 00:06:50.391 ************************************ 00:06:50.391 END TEST nvmf_referrals 00:06:50.391 ************************************ 00:06:50.391 12:34:48 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:06:50.391 12:34:48 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:50.391 12:34:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:50.391 12:34:48 -- common/autotest_common.sh@10 -- # set +x 00:06:50.391 ************************************ 00:06:50.391 START TEST nvmf_connect_disconnect 00:06:50.391 ************************************ 00:06:50.391 12:34:48 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:06:50.391 * Looking for test storage... 
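(Recap of the nvmf_referrals run that just finished: once the xtrace noise is stripped, the test is a plain add/verify/remove cycle against the discovery service, checked from both the target and host sides. A minimal sketch, with the test's rpc_cmd wrapper written out as scripts/rpc.py and the --hostnqn/--hostid flags from the log omitted for brevity:)

    # target side: register referrals and count them
    scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430
    scripts/rpc.py nvmf_discovery_get_referrals | jq length
    # host side: referrals surface as extra discovery log entries
    nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json | \
        jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
    # a referral may also name a specific subsystem rather than a discovery controller
    scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
    # teardown mirrors the add
    scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430

(Both views have to agree at every step; the [[ ... == ... ]] comparisons in the trace above are exactly that cross-check.)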
00:06:50.391 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:50.392 12:34:49 -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:50.392 12:34:49 -- nvmf/common.sh@7 -- # uname -s 00:06:50.392 12:34:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:50.392 12:34:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:50.392 12:34:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:50.392 12:34:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:50.392 12:34:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:50.392 12:34:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:50.392 12:34:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:50.392 12:34:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:50.392 12:34:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:50.392 12:34:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:50.392 12:34:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:06:50.392 12:34:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:06:50.392 12:34:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:50.392 12:34:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:50.392 12:34:49 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:50.392 12:34:49 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:50.392 12:34:49 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:50.392 12:34:49 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:50.392 12:34:49 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:50.392 12:34:49 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:50.392 12:34:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:50.392 12:34:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:50.392 12:34:49 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:50.392 12:34:49 -- paths/export.sh@5 -- # export PATH 00:06:50.392 12:34:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:50.392 12:34:49 -- nvmf/common.sh@47 -- # : 0 00:06:50.392 12:34:49 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:50.392 12:34:49 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:50.392 12:34:49 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:50.392 12:34:49 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:50.392 12:34:49 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:50.392 12:34:49 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:50.392 12:34:49 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:50.392 12:34:49 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:50.392 12:34:49 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:50.392 12:34:49 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:50.392 12:34:49 -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:06:50.392 12:34:49 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:06:50.392 12:34:49 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:50.392 12:34:49 -- nvmf/common.sh@437 -- # prepare_net_devs 00:06:50.392 12:34:49 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:06:50.392 12:34:49 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:06:50.392 12:34:49 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:50.392 12:34:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:50.392 12:34:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:50.392 12:34:49 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:06:50.392 12:34:49 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:06:50.392 12:34:49 -- nvmf/common.sh@285 -- # xtrace_disable 00:06:50.392 12:34:49 -- common/autotest_common.sh@10 -- # set +x 00:06:52.925 12:34:51 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:06:52.925 12:34:51 -- nvmf/common.sh@291 -- # pci_devs=() 00:06:52.925 12:34:51 -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:52.925 12:34:51 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:52.925 12:34:51 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:52.925 12:34:51 -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:52.925 12:34:51 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:52.925 12:34:51 -- nvmf/common.sh@295 -- # net_devs=() 00:06:52.925 12:34:51 -- nvmf/common.sh@295 -- # local -ga net_devs 
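(The records that follow first classify the PCI NICs, then nvmf_tcp_init wires the two E810 ports into the usual two-endpoint rig: one port is moved into a network namespace and plays the target, the other stays in the root namespace as the initiator. In outline, using the interface names and 10.0.0.0/24 addressing from this run:)

    ip netns add cvl_0_0_ns_spdk                          # target lives in its own netns
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                    # sanity check, repeated in reverse from the netns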
00:06:52.925 12:34:51 -- nvmf/common.sh@296 -- # e810=() 00:06:52.925 12:34:51 -- nvmf/common.sh@296 -- # local -ga e810 00:06:52.925 12:34:51 -- nvmf/common.sh@297 -- # x722=() 00:06:52.925 12:34:51 -- nvmf/common.sh@297 -- # local -ga x722 00:06:52.925 12:34:51 -- nvmf/common.sh@298 -- # mlx=() 00:06:52.925 12:34:51 -- nvmf/common.sh@298 -- # local -ga mlx 00:06:52.925 12:34:51 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:52.925 12:34:51 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:52.925 12:34:51 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:52.925 12:34:51 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:52.925 12:34:51 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:52.925 12:34:51 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:52.925 12:34:51 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:52.925 12:34:51 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:52.925 12:34:51 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:52.925 12:34:51 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:52.925 12:34:51 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:52.925 12:34:51 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:52.925 12:34:51 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:52.925 12:34:51 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:52.925 12:34:51 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:52.925 12:34:51 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:52.925 12:34:51 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:52.925 12:34:51 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:52.925 12:34:51 -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:06:52.925 Found 0000:82:00.0 (0x8086 - 0x159b) 00:06:52.925 12:34:51 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:52.925 12:34:51 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:52.925 12:34:51 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:52.925 12:34:51 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:52.925 12:34:51 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:52.925 12:34:51 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:52.925 12:34:51 -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:06:52.925 Found 0000:82:00.1 (0x8086 - 0x159b) 00:06:52.925 12:34:51 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:52.925 12:34:51 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:52.925 12:34:51 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:52.925 12:34:51 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:52.925 12:34:51 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:52.925 12:34:51 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:52.925 12:34:51 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:52.925 12:34:51 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:52.925 12:34:51 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:52.925 12:34:51 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:52.925 12:34:51 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:06:52.925 12:34:51 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:52.925 12:34:51 -- nvmf/common.sh@389 -- # echo 'Found net devices 
under 0000:82:00.0: cvl_0_0' 00:06:52.925 Found net devices under 0000:82:00.0: cvl_0_0 00:06:52.925 12:34:51 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:06:52.925 12:34:51 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:52.925 12:34:51 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:52.925 12:34:51 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:06:52.925 12:34:51 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:52.925 12:34:51 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:06:52.925 Found net devices under 0000:82:00.1: cvl_0_1 00:06:52.925 12:34:51 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:06:52.925 12:34:51 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:06:52.925 12:34:51 -- nvmf/common.sh@403 -- # is_hw=yes 00:06:52.925 12:34:51 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:06:52.925 12:34:51 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:06:52.925 12:34:51 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:06:52.925 12:34:51 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:52.925 12:34:51 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:52.925 12:34:51 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:52.925 12:34:51 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:52.925 12:34:51 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:52.925 12:34:51 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:52.925 12:34:51 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:52.925 12:34:51 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:52.925 12:34:51 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:52.925 12:34:51 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:52.925 12:34:51 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:52.925 12:34:51 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:52.925 12:34:51 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:52.925 12:34:51 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:52.925 12:34:51 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:52.925 12:34:51 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:52.925 12:34:51 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:52.925 12:34:51 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:52.925 12:34:51 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:52.925 12:34:51 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:52.925 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:52.925 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.258 ms 00:06:52.925 00:06:52.925 --- 10.0.0.2 ping statistics --- 00:06:52.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:52.925 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:06:52.925 12:34:51 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:52.925 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:52.925 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:06:52.925 00:06:52.925 --- 10.0.0.1 ping statistics --- 00:06:52.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:52.925 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:06:52.925 12:34:51 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:52.925 12:34:51 -- nvmf/common.sh@411 -- # return 0 00:06:52.925 12:34:51 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:06:52.925 12:34:51 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:52.925 12:34:51 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:06:52.925 12:34:51 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:06:52.925 12:34:51 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:52.925 12:34:51 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:06:52.925 12:34:51 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:06:52.925 12:34:51 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:06:52.925 12:34:51 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:06:52.925 12:34:51 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:52.925 12:34:51 -- common/autotest_common.sh@10 -- # set +x 00:06:52.925 12:34:51 -- nvmf/common.sh@470 -- # nvmfpid=1085769 00:06:52.925 12:34:51 -- nvmf/common.sh@471 -- # waitforlisten 1085769 00:06:52.925 12:34:51 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:52.925 12:34:51 -- common/autotest_common.sh@817 -- # '[' -z 1085769 ']' 00:06:52.925 12:34:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:52.925 12:34:51 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:52.925 12:34:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:52.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:52.925 12:34:51 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:52.925 12:34:51 -- common/autotest_common.sh@10 -- # set +x 00:06:52.925 [2024-04-16 12:34:51.669288] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 00:06:52.925 [2024-04-16 12:34:51.669367] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:52.925 EAL: No free 2048 kB hugepages reported on node 1 00:06:52.925 [2024-04-16 12:34:51.750102] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:52.925 [2024-04-16 12:34:51.869036] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:52.925 [2024-04-16 12:34:51.869103] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:52.926 [2024-04-16 12:34:51.869120] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:52.926 [2024-04-16 12:34:51.869134] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:52.926 [2024-04-16 12:34:51.869146] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
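(With the app up, the bring-up that follows is the stock five-RPC sequence, after which the test simply connects and disconnects the kernel initiator num_iterations=5 times; only the disconnect summaries make it into the log. A condensed sketch; the nvme connect flags are an assumption inferred from the listener below, since the trace never prints the connect command itself:)

    rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
    rpc.py bdev_malloc_create 64 512                      # -> Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    for i in $(seq 1 5); do
        nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1   # assumed flags
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1     # prints 'disconnected 1 controller(s)'
    done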
00:06:52.926 [2024-04-16 12:34:51.869235] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:52.926 [2024-04-16 12:34:51.869288] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:52.926 [2024-04-16 12:34:51.869336] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:52.926 [2024-04-16 12:34:51.869340] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.859 12:34:52 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:53.859 12:34:52 -- common/autotest_common.sh@850 -- # return 0 00:06:53.859 12:34:52 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:06:53.859 12:34:52 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:53.859 12:34:52 -- common/autotest_common.sh@10 -- # set +x 00:06:53.859 12:34:52 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:53.859 12:34:52 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:06:53.859 12:34:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:53.859 12:34:52 -- common/autotest_common.sh@10 -- # set +x 00:06:53.859 [2024-04-16 12:34:52.641632] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:53.859 12:34:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:53.859 12:34:52 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:06:53.859 12:34:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:53.859 12:34:52 -- common/autotest_common.sh@10 -- # set +x 00:06:53.859 12:34:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:53.859 12:34:52 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:06:53.859 12:34:52 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:53.859 12:34:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:53.859 12:34:52 -- common/autotest_common.sh@10 -- # set +x 00:06:53.859 12:34:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:53.859 12:34:52 -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:06:53.859 12:34:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:53.859 12:34:52 -- common/autotest_common.sh@10 -- # set +x 00:06:53.859 12:34:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:53.859 12:34:52 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:53.859 12:34:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:53.859 12:34:52 -- common/autotest_common.sh@10 -- # set +x 00:06:53.859 [2024-04-16 12:34:52.693663] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:53.859 12:34:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:53.859 12:34:52 -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:06:53.859 12:34:52 -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:06:53.859 12:34:52 -- target/connect_disconnect.sh@34 -- # set +x 00:06:56.383 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:59.692 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:02.218 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:04.743 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:07.264 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:07.520 12:35:06 -- 
target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:07:07.520 12:35:06 -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:07:07.520 12:35:06 -- nvmf/common.sh@477 -- # nvmfcleanup 00:07:07.520 12:35:06 -- nvmf/common.sh@117 -- # sync 00:07:07.520 12:35:06 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:07.520 12:35:06 -- nvmf/common.sh@120 -- # set +e 00:07:07.520 12:35:06 -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:07.520 12:35:06 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:07.520 rmmod nvme_tcp 00:07:07.520 rmmod nvme_fabrics 00:07:07.520 rmmod nvme_keyring 00:07:07.520 12:35:06 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:07.520 12:35:06 -- nvmf/common.sh@124 -- # set -e 00:07:07.520 12:35:06 -- nvmf/common.sh@125 -- # return 0 00:07:07.520 12:35:06 -- nvmf/common.sh@478 -- # '[' -n 1085769 ']' 00:07:07.520 12:35:06 -- nvmf/common.sh@479 -- # killprocess 1085769 00:07:07.520 12:35:06 -- common/autotest_common.sh@936 -- # '[' -z 1085769 ']' 00:07:07.520 12:35:06 -- common/autotest_common.sh@940 -- # kill -0 1085769 00:07:07.520 12:35:06 -- common/autotest_common.sh@941 -- # uname 00:07:07.520 12:35:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:07.520 12:35:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1085769 00:07:07.520 12:35:06 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:07.520 12:35:06 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:07.520 12:35:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1085769' 00:07:07.520 killing process with pid 1085769 00:07:07.520 12:35:06 -- common/autotest_common.sh@955 -- # kill 1085769 00:07:07.521 12:35:06 -- common/autotest_common.sh@960 -- # wait 1085769 00:07:07.778 12:35:06 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:07:07.778 12:35:06 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:07:07.778 12:35:06 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:07:07.778 12:35:06 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:07.778 12:35:06 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:07.778 12:35:06 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:07.778 12:35:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:07.778 12:35:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:10.310 12:35:08 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:10.310 00:07:10.310 real 0m19.794s 00:07:10.310 user 0m59.030s 00:07:10.310 sys 0m3.722s 00:07:10.310 12:35:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:10.310 12:35:08 -- common/autotest_common.sh@10 -- # set +x 00:07:10.310 ************************************ 00:07:10.310 END TEST nvmf_connect_disconnect 00:07:10.310 ************************************ 00:07:10.310 12:35:08 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:07:10.310 12:35:08 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:10.310 12:35:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:10.310 12:35:08 -- common/autotest_common.sh@10 -- # set +x 00:07:10.310 ************************************ 00:07:10.310 START TEST nvmf_multitarget 00:07:10.310 ************************************ 00:07:10.310 12:35:08 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh 
--transport=tcp 00:07:10.310 * Looking for test storage... 00:07:10.310 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:10.310 12:35:08 -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:10.310 12:35:08 -- nvmf/common.sh@7 -- # uname -s 00:07:10.310 12:35:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:10.310 12:35:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:10.310 12:35:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:10.310 12:35:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:10.310 12:35:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:10.310 12:35:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:10.310 12:35:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:10.310 12:35:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:10.310 12:35:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:10.310 12:35:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:10.310 12:35:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:07:10.310 12:35:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:07:10.310 12:35:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:10.310 12:35:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:10.310 12:35:08 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:10.310 12:35:08 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:10.310 12:35:08 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:10.310 12:35:08 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:10.310 12:35:08 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:10.310 12:35:08 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:10.310 12:35:08 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.311 12:35:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.311 12:35:08 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.311 12:35:08 -- paths/export.sh@5 -- # export PATH 00:07:10.311 12:35:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.311 12:35:08 -- nvmf/common.sh@47 -- # : 0 00:07:10.311 12:35:08 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:10.311 12:35:08 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:10.311 12:35:08 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:10.311 12:35:08 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:10.311 12:35:08 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:10.311 12:35:08 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:10.311 12:35:08 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:10.311 12:35:08 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:10.311 12:35:08 -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:07:10.311 12:35:08 -- target/multitarget.sh@15 -- # nvmftestinit 00:07:10.311 12:35:08 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:07:10.311 12:35:08 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:10.311 12:35:08 -- nvmf/common.sh@437 -- # prepare_net_devs 00:07:10.311 12:35:08 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:07:10.311 12:35:08 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:07:10.311 12:35:08 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:10.311 12:35:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:10.311 12:35:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:10.311 12:35:08 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:07:10.311 12:35:08 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:07:10.311 12:35:08 -- nvmf/common.sh@285 -- # xtrace_disable 00:07:10.311 12:35:08 -- common/autotest_common.sh@10 -- # set +x 00:07:12.847 12:35:11 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:12.847 12:35:11 -- nvmf/common.sh@291 -- # pci_devs=() 00:07:12.847 12:35:11 -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:12.847 12:35:11 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:12.847 12:35:11 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:12.847 12:35:11 -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:12.847 12:35:11 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:12.847 12:35:11 -- nvmf/common.sh@295 -- # net_devs=() 00:07:12.847 12:35:11 -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:12.847 12:35:11 -- 
nvmf/common.sh@296 -- # e810=() 00:07:12.847 12:35:11 -- nvmf/common.sh@296 -- # local -ga e810 00:07:12.847 12:35:11 -- nvmf/common.sh@297 -- # x722=() 00:07:12.847 12:35:11 -- nvmf/common.sh@297 -- # local -ga x722 00:07:12.847 12:35:11 -- nvmf/common.sh@298 -- # mlx=() 00:07:12.847 12:35:11 -- nvmf/common.sh@298 -- # local -ga mlx 00:07:12.847 12:35:11 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:12.847 12:35:11 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:12.847 12:35:11 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:12.847 12:35:11 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:12.847 12:35:11 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:12.847 12:35:11 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:12.847 12:35:11 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:12.847 12:35:11 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:12.847 12:35:11 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:12.847 12:35:11 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:12.847 12:35:11 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:12.847 12:35:11 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:12.847 12:35:11 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:12.847 12:35:11 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:12.847 12:35:11 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:12.847 12:35:11 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:12.847 12:35:11 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:12.847 12:35:11 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:12.847 12:35:11 -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:07:12.847 Found 0000:82:00.0 (0x8086 - 0x159b) 00:07:12.847 12:35:11 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:12.847 12:35:11 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:12.847 12:35:11 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:12.847 12:35:11 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:12.847 12:35:11 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:12.847 12:35:11 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:12.847 12:35:11 -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:07:12.847 Found 0000:82:00.1 (0x8086 - 0x159b) 00:07:12.847 12:35:11 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:12.847 12:35:11 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:12.847 12:35:11 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:12.847 12:35:11 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:12.847 12:35:11 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:12.847 12:35:11 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:12.847 12:35:11 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:12.847 12:35:11 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:12.847 12:35:11 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:12.847 12:35:11 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:12.847 12:35:11 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:12.847 12:35:11 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:12.847 12:35:11 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 
00:07:12.847 Found net devices under 0000:82:00.0: cvl_0_0 00:07:12.847 12:35:11 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:12.847 12:35:11 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:12.847 12:35:11 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:12.847 12:35:11 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:12.847 12:35:11 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:12.847 12:35:11 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:07:12.847 Found net devices under 0000:82:00.1: cvl_0_1 00:07:12.847 12:35:11 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:12.847 12:35:11 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:07:12.847 12:35:11 -- nvmf/common.sh@403 -- # is_hw=yes 00:07:12.847 12:35:11 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:07:12.847 12:35:11 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:07:12.847 12:35:11 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:07:12.847 12:35:11 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:12.847 12:35:11 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:12.847 12:35:11 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:12.847 12:35:11 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:12.847 12:35:11 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:12.847 12:35:11 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:12.847 12:35:11 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:12.847 12:35:11 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:12.847 12:35:11 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:12.847 12:35:11 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:12.847 12:35:11 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:12.847 12:35:11 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:12.847 12:35:11 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:12.847 12:35:11 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:12.847 12:35:11 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:12.847 12:35:11 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:12.847 12:35:11 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:12.847 12:35:11 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:12.847 12:35:11 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:12.847 12:35:11 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:12.847 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:12.847 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.262 ms 00:07:12.847 00:07:12.847 --- 10.0.0.2 ping statistics --- 00:07:12.847 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:12.847 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:07:12.847 12:35:11 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:12.847 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:12.847 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:07:12.847 00:07:12.847 --- 10.0.0.1 ping statistics --- 00:07:12.847 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:12.847 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:07:12.847 12:35:11 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:12.847 12:35:11 -- nvmf/common.sh@411 -- # return 0 00:07:12.847 12:35:11 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:07:12.847 12:35:11 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:12.847 12:35:11 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:07:12.847 12:35:11 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:07:12.847 12:35:11 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:12.847 12:35:11 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:07:12.847 12:35:11 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:07:12.847 12:35:11 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:07:12.847 12:35:11 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:07:12.847 12:35:11 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:12.847 12:35:11 -- common/autotest_common.sh@10 -- # set +x 00:07:12.847 12:35:11 -- nvmf/common.sh@470 -- # nvmfpid=1089844 00:07:12.847 12:35:11 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:12.847 12:35:11 -- nvmf/common.sh@471 -- # waitforlisten 1089844 00:07:12.847 12:35:11 -- common/autotest_common.sh@817 -- # '[' -z 1089844 ']' 00:07:12.847 12:35:11 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:12.847 12:35:11 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:12.847 12:35:11 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:12.847 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:12.847 12:35:11 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:12.847 12:35:11 -- common/autotest_common.sh@10 -- # set +x 00:07:12.847 [2024-04-16 12:35:11.757606] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 00:07:12.847 [2024-04-16 12:35:11.757692] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:12.847 EAL: No free 2048 kB hugepages reported on node 1 00:07:12.847 [2024-04-16 12:35:11.837116] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:13.104 [2024-04-16 12:35:11.955958] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:13.105 [2024-04-16 12:35:11.956013] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:13.105 [2024-04-16 12:35:11.956028] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:13.105 [2024-04-16 12:35:11.956041] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:13.105 [2024-04-16 12:35:11.956059] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:13.105 [2024-04-16 12:35:11.956142] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:13.105 [2024-04-16 12:35:11.956191] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:13.105 [2024-04-16 12:35:11.956306] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:13.105 [2024-04-16 12:35:11.956308] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.667 12:35:12 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:13.667 12:35:12 -- common/autotest_common.sh@850 -- # return 0 00:07:13.667 12:35:12 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:07:13.667 12:35:12 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:13.667 12:35:12 -- common/autotest_common.sh@10 -- # set +x 00:07:13.924 12:35:12 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:13.924 12:35:12 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:07:13.924 12:35:12 -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:07:13.924 12:35:12 -- target/multitarget.sh@21 -- # jq length 00:07:13.924 12:35:12 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:07:13.925 12:35:12 -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:07:13.925 "nvmf_tgt_1" 00:07:14.182 12:35:12 -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:07:14.182 "nvmf_tgt_2" 00:07:14.182 12:35:13 -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:07:14.182 12:35:13 -- target/multitarget.sh@28 -- # jq length 00:07:14.182 12:35:13 -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:07:14.182 12:35:13 -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:07:14.439 true 00:07:14.439 12:35:13 -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:07:14.439 true 00:07:14.439 12:35:13 -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:07:14.439 12:35:13 -- target/multitarget.sh@35 -- # jq length 00:07:14.697 12:35:13 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:07:14.697 12:35:13 -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:07:14.697 12:35:13 -- target/multitarget.sh@41 -- # nvmftestfini 00:07:14.697 12:35:13 -- nvmf/common.sh@477 -- # nvmfcleanup 00:07:14.697 12:35:13 -- nvmf/common.sh@117 -- # sync 00:07:14.697 12:35:13 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:14.697 12:35:13 -- nvmf/common.sh@120 -- # set +e 00:07:14.697 12:35:13 -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:14.697 12:35:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:14.697 rmmod nvme_tcp 00:07:14.697 rmmod nvme_fabrics 00:07:14.697 rmmod nvme_keyring 00:07:14.697 12:35:13 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:14.697 12:35:13 -- nvmf/common.sh@124 -- # set -e 00:07:14.697 12:35:13 -- nvmf/common.sh@125 -- # return 0 
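(Stripped to its RPC traffic, the multitarget test above is short: start from the implicit default target, add two named targets, confirm the count, delete them, confirm again. A reference recap using the helper the test drives, with the workspace path shortened:)

    RPC=test/nvmf/target/multitarget_rpc.py
    $RPC nvmf_get_targets | jq length            # 1, the default target
    $RPC nvmf_create_target -n nvmf_tgt_1 -s 32
    $RPC nvmf_create_target -n nvmf_tgt_2 -s 32
    $RPC nvmf_get_targets | jq length            # 3
    $RPC nvmf_delete_target -n nvmf_tgt_1
    $RPC nvmf_delete_target -n nvmf_tgt_2
    $RPC nvmf_get_targets | jq length            # back to 1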
00:07:14.697 12:35:13 -- nvmf/common.sh@478 -- # '[' -n 1089844 ']' 00:07:14.697 12:35:13 -- nvmf/common.sh@479 -- # killprocess 1089844 00:07:14.697 12:35:13 -- common/autotest_common.sh@936 -- # '[' -z 1089844 ']' 00:07:14.697 12:35:13 -- common/autotest_common.sh@940 -- # kill -0 1089844 00:07:14.697 12:35:13 -- common/autotest_common.sh@941 -- # uname 00:07:14.697 12:35:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:14.697 12:35:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1089844 00:07:14.697 12:35:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:14.697 12:35:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:14.697 12:35:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1089844' 00:07:14.697 killing process with pid 1089844 00:07:14.697 12:35:13 -- common/autotest_common.sh@955 -- # kill 1089844 00:07:14.697 12:35:13 -- common/autotest_common.sh@960 -- # wait 1089844 00:07:14.956 12:35:13 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:07:14.956 12:35:13 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:07:14.956 12:35:13 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:07:14.956 12:35:13 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:14.956 12:35:13 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:14.956 12:35:13 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:14.956 12:35:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:14.956 12:35:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:17.490 12:35:15 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:17.490 00:07:17.490 real 0m7.047s 00:07:17.490 user 0m9.516s 00:07:17.490 sys 0m2.409s 00:07:17.490 12:35:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:17.490 12:35:15 -- common/autotest_common.sh@10 -- # set +x 00:07:17.490 ************************************ 00:07:17.490 END TEST nvmf_multitarget 00:07:17.490 ************************************ 00:07:17.490 12:35:15 -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:07:17.490 12:35:15 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:17.490 12:35:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:17.490 12:35:15 -- common/autotest_common.sh@10 -- # set +x 00:07:17.490 ************************************ 00:07:17.490 START TEST nvmf_rpc 00:07:17.490 ************************************ 00:07:17.490 12:35:16 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:07:17.490 * Looking for test storage... 
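(The teardown just above repeats after every test in this run. Reconstructed loosely from the xtrace, the killprocess helper probes the pid, refuses to signal a sudo wrapper, then kills and reaps. This is a sketch of the visible reactor_0 path only, not the helper's full logic in autotest_common.sh:)

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1                        # the '[' -z ... ']' guard in the trace
        kill -0 "$pid"                                   # probe: still alive?
        [ "$(uname)" = Linux ] && \
            process_name=$(ps --no-headers -o comm= "$pid")
        [ "$process_name" = sudo ] && return             # sudo-wrapped case elided here
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"
    }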
00:07:17.490 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:17.490 12:35:16 -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:17.490 12:35:16 -- nvmf/common.sh@7 -- # uname -s 00:07:17.490 12:35:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:17.490 12:35:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:17.490 12:35:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:17.490 12:35:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:17.490 12:35:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:17.490 12:35:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:17.490 12:35:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:17.490 12:35:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:17.490 12:35:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:17.490 12:35:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:17.490 12:35:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:07:17.490 12:35:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:07:17.490 12:35:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:17.490 12:35:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:17.490 12:35:16 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:17.490 12:35:16 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:17.490 12:35:16 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:17.490 12:35:16 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:17.490 12:35:16 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:17.490 12:35:16 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:17.490 12:35:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.490 12:35:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.490 12:35:16 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.490 12:35:16 -- paths/export.sh@5 -- # export PATH 00:07:17.490 12:35:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.490 12:35:16 -- nvmf/common.sh@47 -- # : 0 00:07:17.490 12:35:16 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:17.490 12:35:16 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:17.490 12:35:16 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:17.490 12:35:16 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:17.490 12:35:16 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:17.490 12:35:16 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:17.490 12:35:16 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:17.490 12:35:16 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:17.490 12:35:16 -- target/rpc.sh@11 -- # loops=5 00:07:17.490 12:35:16 -- target/rpc.sh@23 -- # nvmftestinit 00:07:17.490 12:35:16 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:07:17.490 12:35:16 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:17.490 12:35:16 -- nvmf/common.sh@437 -- # prepare_net_devs 00:07:17.490 12:35:16 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:07:17.490 12:35:16 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:07:17.490 12:35:16 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:17.490 12:35:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:17.490 12:35:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:17.490 12:35:16 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:07:17.490 12:35:16 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:07:17.490 12:35:16 -- nvmf/common.sh@285 -- # xtrace_disable 00:07:17.490 12:35:16 -- common/autotest_common.sh@10 -- # set +x 00:07:20.020 12:35:18 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:20.020 12:35:18 -- nvmf/common.sh@291 -- # pci_devs=() 00:07:20.020 12:35:18 -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:20.020 12:35:18 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:20.020 12:35:18 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:20.020 12:35:18 -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:20.020 12:35:18 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:20.020 12:35:18 -- nvmf/common.sh@295 -- # net_devs=() 00:07:20.020 12:35:18 -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:20.020 12:35:18 -- nvmf/common.sh@296 -- # e810=() 00:07:20.020 12:35:18 -- nvmf/common.sh@296 -- # local -ga e810 00:07:20.020 
12:35:18 -- nvmf/common.sh@297 -- # x722=() 00:07:20.020 12:35:18 -- nvmf/common.sh@297 -- # local -ga x722 00:07:20.020 12:35:18 -- nvmf/common.sh@298 -- # mlx=() 00:07:20.020 12:35:18 -- nvmf/common.sh@298 -- # local -ga mlx 00:07:20.020 12:35:18 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:20.020 12:35:18 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:20.020 12:35:18 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:20.020 12:35:18 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:20.020 12:35:18 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:20.020 12:35:18 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:20.020 12:35:18 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:20.020 12:35:18 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:20.020 12:35:18 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:20.020 12:35:18 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:20.020 12:35:18 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:20.020 12:35:18 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:20.020 12:35:18 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:20.020 12:35:18 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:20.020 12:35:18 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:20.020 12:35:18 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:20.020 12:35:18 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:20.021 12:35:18 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:20.021 12:35:18 -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:07:20.021 Found 0000:82:00.0 (0x8086 - 0x159b) 00:07:20.021 12:35:18 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:20.021 12:35:18 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:20.021 12:35:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:20.021 12:35:18 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:20.021 12:35:18 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:20.021 12:35:18 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:20.021 12:35:18 -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:07:20.021 Found 0000:82:00.1 (0x8086 - 0x159b) 00:07:20.021 12:35:18 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:20.021 12:35:18 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:20.021 12:35:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:20.021 12:35:18 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:20.021 12:35:18 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:20.021 12:35:18 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:20.021 12:35:18 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:20.021 12:35:18 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:20.021 12:35:18 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:20.021 12:35:18 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:20.021 12:35:18 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:20.021 12:35:18 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:20.021 12:35:18 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:07:20.021 Found net devices under 0000:82:00.0: cvl_0_0 00:07:20.021 12:35:18 -- nvmf/common.sh@390 -- # 
net_devs+=("${pci_net_devs[@]}") 00:07:20.021 12:35:18 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:20.021 12:35:18 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:20.021 12:35:18 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:20.021 12:35:18 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:20.021 12:35:18 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:07:20.021 Found net devices under 0000:82:00.1: cvl_0_1 00:07:20.021 12:35:18 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:20.021 12:35:18 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:07:20.021 12:35:18 -- nvmf/common.sh@403 -- # is_hw=yes 00:07:20.021 12:35:18 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:07:20.021 12:35:18 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:07:20.021 12:35:18 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:07:20.021 12:35:18 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:20.021 12:35:18 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:20.021 12:35:18 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:20.021 12:35:18 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:20.021 12:35:18 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:20.021 12:35:18 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:20.021 12:35:18 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:20.021 12:35:18 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:20.021 12:35:18 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:20.021 12:35:18 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:20.021 12:35:18 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:20.021 12:35:18 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:20.021 12:35:18 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:20.021 12:35:18 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:20.021 12:35:18 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:20.021 12:35:18 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:20.021 12:35:18 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:20.021 12:35:18 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:20.021 12:35:18 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:20.021 12:35:18 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:20.021 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:20.021 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.192 ms 00:07:20.021 00:07:20.021 --- 10.0.0.2 ping statistics --- 00:07:20.021 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:20.021 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:07:20.021 12:35:18 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:20.021 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:20.021 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:07:20.021 00:07:20.021 --- 10.0.0.1 ping statistics --- 00:07:20.021 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:20.021 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:07:20.021 12:35:18 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:20.021 12:35:18 -- nvmf/common.sh@411 -- # return 0 00:07:20.021 12:35:18 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:07:20.021 12:35:18 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:20.021 12:35:18 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:07:20.021 12:35:18 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:07:20.021 12:35:18 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:20.021 12:35:18 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:07:20.021 12:35:18 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:07:20.021 12:35:18 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:07:20.021 12:35:18 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:07:20.021 12:35:18 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:20.021 12:35:18 -- common/autotest_common.sh@10 -- # set +x 00:07:20.021 12:35:18 -- nvmf/common.sh@470 -- # nvmfpid=1092494 00:07:20.021 12:35:18 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:20.021 12:35:18 -- nvmf/common.sh@471 -- # waitforlisten 1092494 00:07:20.021 12:35:18 -- common/autotest_common.sh@817 -- # '[' -z 1092494 ']' 00:07:20.021 12:35:18 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:20.021 12:35:18 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:20.021 12:35:18 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:20.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:20.021 12:35:18 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:20.021 12:35:18 -- common/autotest_common.sh@10 -- # set +x 00:07:20.021 [2024-04-16 12:35:18.976076] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 00:07:20.021 [2024-04-16 12:35:18.976173] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:20.021 EAL: No free 2048 kB hugepages reported on node 1 00:07:20.021 [2024-04-16 12:35:19.065092] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:20.279 [2024-04-16 12:35:19.184935] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:20.279 [2024-04-16 12:35:19.184996] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:20.279 [2024-04-16 12:35:19.185014] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:20.279 [2024-04-16 12:35:19.185029] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:20.279 [2024-04-16 12:35:19.185041] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
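Before the target starts, nvmf_tcp_init splits the two detected E810 ports into a loopback topology: cvl_0_0 moves into a fresh network namespace and takes the target address, while cvl_0_1 stays in the root namespace as the initiator side, with an iptables rule opening TCP port 4420. The same plumbing as a standalone sketch, using the interface names detected above:

# Loopback topology used above: target NIC inside a netns, initiator outside it.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port into the netns
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator address (root ns)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                     # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target ns -> root ns

The nvmf_tgt process is then launched through ip netns exec (the NVMF_TARGET_NS_CMD prefix above), so every listener it opens binds inside the namespace.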
00:07:20.279 [2024-04-16 12:35:19.185122] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:20.279 [2024-04-16 12:35:19.185173] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:20.279 [2024-04-16 12:35:19.185222] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:20.279 [2024-04-16 12:35:19.185225] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.279 12:35:19 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:20.279 12:35:19 -- common/autotest_common.sh@850 -- # return 0 00:07:20.279 12:35:19 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:07:20.279 12:35:19 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:20.279 12:35:19 -- common/autotest_common.sh@10 -- # set +x 00:07:20.279 12:35:19 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:20.279 12:35:19 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:07:20.279 12:35:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:20.279 12:35:19 -- common/autotest_common.sh@10 -- # set +x 00:07:20.537 12:35:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:20.537 12:35:19 -- target/rpc.sh@26 -- # stats='{ 00:07:20.537 "tick_rate": 2700000000, 00:07:20.537 "poll_groups": [ 00:07:20.537 { 00:07:20.537 "name": "nvmf_tgt_poll_group_0", 00:07:20.537 "admin_qpairs": 0, 00:07:20.537 "io_qpairs": 0, 00:07:20.537 "current_admin_qpairs": 0, 00:07:20.537 "current_io_qpairs": 0, 00:07:20.537 "pending_bdev_io": 0, 00:07:20.537 "completed_nvme_io": 0, 00:07:20.537 "transports": [] 00:07:20.537 }, 00:07:20.537 { 00:07:20.537 "name": "nvmf_tgt_poll_group_1", 00:07:20.537 "admin_qpairs": 0, 00:07:20.537 "io_qpairs": 0, 00:07:20.537 "current_admin_qpairs": 0, 00:07:20.537 "current_io_qpairs": 0, 00:07:20.537 "pending_bdev_io": 0, 00:07:20.537 "completed_nvme_io": 0, 00:07:20.537 "transports": [] 00:07:20.537 }, 00:07:20.537 { 00:07:20.537 "name": "nvmf_tgt_poll_group_2", 00:07:20.537 "admin_qpairs": 0, 00:07:20.537 "io_qpairs": 0, 00:07:20.537 "current_admin_qpairs": 0, 00:07:20.537 "current_io_qpairs": 0, 00:07:20.537 "pending_bdev_io": 0, 00:07:20.537 "completed_nvme_io": 0, 00:07:20.537 "transports": [] 00:07:20.537 }, 00:07:20.537 { 00:07:20.537 "name": "nvmf_tgt_poll_group_3", 00:07:20.537 "admin_qpairs": 0, 00:07:20.537 "io_qpairs": 0, 00:07:20.537 "current_admin_qpairs": 0, 00:07:20.537 "current_io_qpairs": 0, 00:07:20.537 "pending_bdev_io": 0, 00:07:20.537 "completed_nvme_io": 0, 00:07:20.537 "transports": [] 00:07:20.537 } 00:07:20.537 ] 00:07:20.537 }' 00:07:20.537 12:35:19 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:07:20.537 12:35:19 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:07:20.537 12:35:19 -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:07:20.537 12:35:19 -- target/rpc.sh@15 -- # wc -l 00:07:20.537 12:35:19 -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:07:20.537 12:35:19 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:07:20.537 12:35:19 -- target/rpc.sh@29 -- # [[ null == null ]] 00:07:20.537 12:35:19 -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:20.537 12:35:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:20.537 12:35:19 -- common/autotest_common.sh@10 -- # set +x 00:07:20.537 [2024-04-16 12:35:19.444899] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:20.537 12:35:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:20.537 12:35:19 -- 
target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:07:20.537 12:35:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:20.537 12:35:19 -- common/autotest_common.sh@10 -- # set +x 00:07:20.537 12:35:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:20.537 12:35:19 -- target/rpc.sh@33 -- # stats='{ 00:07:20.537 "tick_rate": 2700000000, 00:07:20.537 "poll_groups": [ 00:07:20.537 { 00:07:20.537 "name": "nvmf_tgt_poll_group_0", 00:07:20.537 "admin_qpairs": 0, 00:07:20.537 "io_qpairs": 0, 00:07:20.537 "current_admin_qpairs": 0, 00:07:20.537 "current_io_qpairs": 0, 00:07:20.537 "pending_bdev_io": 0, 00:07:20.537 "completed_nvme_io": 0, 00:07:20.537 "transports": [ 00:07:20.537 { 00:07:20.537 "trtype": "TCP" 00:07:20.537 } 00:07:20.537 ] 00:07:20.537 }, 00:07:20.537 { 00:07:20.537 "name": "nvmf_tgt_poll_group_1", 00:07:20.537 "admin_qpairs": 0, 00:07:20.537 "io_qpairs": 0, 00:07:20.537 "current_admin_qpairs": 0, 00:07:20.537 "current_io_qpairs": 0, 00:07:20.537 "pending_bdev_io": 0, 00:07:20.537 "completed_nvme_io": 0, 00:07:20.537 "transports": [ 00:07:20.537 { 00:07:20.537 "trtype": "TCP" 00:07:20.537 } 00:07:20.537 ] 00:07:20.537 }, 00:07:20.537 { 00:07:20.537 "name": "nvmf_tgt_poll_group_2", 00:07:20.537 "admin_qpairs": 0, 00:07:20.537 "io_qpairs": 0, 00:07:20.537 "current_admin_qpairs": 0, 00:07:20.537 "current_io_qpairs": 0, 00:07:20.537 "pending_bdev_io": 0, 00:07:20.537 "completed_nvme_io": 0, 00:07:20.537 "transports": [ 00:07:20.537 { 00:07:20.537 "trtype": "TCP" 00:07:20.537 } 00:07:20.537 ] 00:07:20.537 }, 00:07:20.537 { 00:07:20.537 "name": "nvmf_tgt_poll_group_3", 00:07:20.537 "admin_qpairs": 0, 00:07:20.537 "io_qpairs": 0, 00:07:20.537 "current_admin_qpairs": 0, 00:07:20.537 "current_io_qpairs": 0, 00:07:20.537 "pending_bdev_io": 0, 00:07:20.537 "completed_nvme_io": 0, 00:07:20.537 "transports": [ 00:07:20.537 { 00:07:20.537 "trtype": "TCP" 00:07:20.537 } 00:07:20.537 ] 00:07:20.537 } 00:07:20.537 ] 00:07:20.537 }' 00:07:20.537 12:35:19 -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:07:20.537 12:35:19 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:07:20.537 12:35:19 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:07:20.537 12:35:19 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:20.537 12:35:19 -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:07:20.537 12:35:19 -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:07:20.537 12:35:19 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:07:20.537 12:35:19 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:07:20.537 12:35:19 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:20.537 12:35:19 -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:07:20.538 12:35:19 -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:07:20.538 12:35:19 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:07:20.538 12:35:19 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:07:20.538 12:35:19 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:07:20.538 12:35:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:20.538 12:35:19 -- common/autotest_common.sh@10 -- # set +x 00:07:20.538 Malloc1 00:07:20.538 12:35:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:20.538 12:35:19 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:20.538 12:35:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:20.538 12:35:19 -- common/autotest_common.sh@10 -- # set +x 00:07:20.538 
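The stats checks above lean on two small jq helpers from rpc.sh: jcount counts how many poll groups report a given field, and jsum adds a numeric field across them. A standalone approximation follows; the suite itself filters a captured $stats JSON rather than re-querying per call, so the rpc.py invocation here is an assumption, not the literal script:

# jq-based reductions over nvmf_get_stats output (sketch).
jcount() { rpc.py nvmf_get_stats | jq "$1" | wc -l; }                        # entries matching filter
jsum()   { rpc.py nvmf_get_stats | jq "$1" | awk '{s+=$1} END {print s}'; }  # numeric sum

(( $(jcount '.poll_groups[].name') == 4 ))          # one poll group per core (-m 0xF)
(( $(jsum '.poll_groups[].admin_qpairs') == 0 ))    # idle target: no admin qpairs
(( $(jsum '.poll_groups[].io_qpairs') == 0 ))       # ... and no I/O qpairs yet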
12:35:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:20.538 12:35:19 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:20.538 12:35:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:20.538 12:35:19 -- common/autotest_common.sh@10 -- # set +x 00:07:20.538 12:35:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:20.538 12:35:19 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:07:20.538 12:35:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:20.538 12:35:19 -- common/autotest_common.sh@10 -- # set +x 00:07:20.538 12:35:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:20.538 12:35:19 -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:20.538 12:35:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:20.538 12:35:19 -- common/autotest_common.sh@10 -- # set +x 00:07:20.538 [2024-04-16 12:35:19.606534] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:20.795 12:35:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:20.795 12:35:19 -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -a 10.0.0.2 -s 4420 00:07:20.795 12:35:19 -- common/autotest_common.sh@638 -- # local es=0 00:07:20.795 12:35:19 -- common/autotest_common.sh@640 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -a 10.0.0.2 -s 4420 00:07:20.795 12:35:19 -- common/autotest_common.sh@626 -- # local arg=nvme 00:07:20.795 12:35:19 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:20.795 12:35:19 -- common/autotest_common.sh@630 -- # type -t nvme 00:07:20.795 12:35:19 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:20.795 12:35:19 -- common/autotest_common.sh@632 -- # type -P nvme 00:07:20.795 12:35:19 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:20.795 12:35:19 -- common/autotest_common.sh@632 -- # arg=/usr/sbin/nvme 00:07:20.795 12:35:19 -- common/autotest_common.sh@632 -- # [[ -x /usr/sbin/nvme ]] 00:07:20.795 12:35:19 -- common/autotest_common.sh@641 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -a 10.0.0.2 -s 4420 00:07:20.795 [2024-04-16 12:35:19.629015] ctrlr.c: 766:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd' 00:07:20.795 Failed to write to /dev/nvme-fabrics: Input/output error 00:07:20.795 could not add new controller: failed to write to nvme-fabrics device 00:07:20.795 12:35:19 -- common/autotest_common.sh@641 -- # es=1 00:07:20.795 12:35:19 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:07:20.795 12:35:19 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:07:20.795 12:35:19 -- common/autotest_common.sh@665 -- # 
(( !es == 0 )) 00:07:20.795 12:35:19 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:07:20.795 12:35:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:20.795 12:35:19 -- common/autotest_common.sh@10 -- # set +x 00:07:20.795 12:35:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:20.795 12:35:19 -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:21.358 12:35:20 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:07:21.358 12:35:20 -- common/autotest_common.sh@1184 -- # local i=0 00:07:21.358 12:35:20 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:07:21.358 12:35:20 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:07:21.358 12:35:20 -- common/autotest_common.sh@1191 -- # sleep 2 00:07:23.913 12:35:22 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:07:23.913 12:35:22 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:07:23.913 12:35:22 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:07:23.913 12:35:22 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:07:23.913 12:35:22 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:07:23.913 12:35:22 -- common/autotest_common.sh@1194 -- # return 0 00:07:23.913 12:35:22 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:23.913 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:23.913 12:35:22 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:23.913 12:35:22 -- common/autotest_common.sh@1205 -- # local i=0 00:07:23.913 12:35:22 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:07:23.913 12:35:22 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:23.913 12:35:22 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:07:23.913 12:35:22 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:23.913 12:35:22 -- common/autotest_common.sh@1217 -- # return 0 00:07:23.913 12:35:22 -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:07:23.913 12:35:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:23.913 12:35:22 -- common/autotest_common.sh@10 -- # set +x 00:07:23.913 12:35:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:23.913 12:35:22 -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:23.913 12:35:22 -- common/autotest_common.sh@638 -- # local es=0 00:07:23.913 12:35:22 -- common/autotest_common.sh@640 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:23.913 12:35:22 -- common/autotest_common.sh@626 -- # local arg=nvme 00:07:23.913 12:35:22 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:23.913 12:35:22 -- common/autotest_common.sh@630 -- # type -t nvme 00:07:23.913 12:35:22 -- 
common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:23.913 12:35:22 -- common/autotest_common.sh@632 -- # type -P nvme 00:07:23.913 12:35:22 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:23.913 12:35:22 -- common/autotest_common.sh@632 -- # arg=/usr/sbin/nvme 00:07:23.913 12:35:22 -- common/autotest_common.sh@632 -- # [[ -x /usr/sbin/nvme ]] 00:07:23.913 12:35:22 -- common/autotest_common.sh@641 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:23.913 [2024-04-16 12:35:22.492144] ctrlr.c: 766:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd' 00:07:23.913 Failed to write to /dev/nvme-fabrics: Input/output error 00:07:23.913 could not add new controller: failed to write to nvme-fabrics device 00:07:23.913 12:35:22 -- common/autotest_common.sh@641 -- # es=1 00:07:23.913 12:35:22 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:07:23.913 12:35:22 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:07:23.913 12:35:22 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:07:23.913 12:35:22 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:07:23.913 12:35:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:23.913 12:35:22 -- common/autotest_common.sh@10 -- # set +x 00:07:23.913 12:35:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:23.913 12:35:22 -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:24.171 12:35:23 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:07:24.171 12:35:23 -- common/autotest_common.sh@1184 -- # local i=0 00:07:24.171 12:35:23 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:07:24.171 12:35:23 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:07:24.171 12:35:23 -- common/autotest_common.sh@1191 -- # sleep 2 00:07:26.065 12:35:25 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:07:26.065 12:35:25 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:07:26.065 12:35:25 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:07:26.323 12:35:25 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:07:26.323 12:35:25 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:07:26.323 12:35:25 -- common/autotest_common.sh@1194 -- # return 0 00:07:26.323 12:35:25 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:26.324 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:26.324 12:35:25 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:26.324 12:35:25 -- common/autotest_common.sh@1205 -- # local i=0 00:07:26.324 12:35:25 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:07:26.324 12:35:25 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:26.324 12:35:25 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:07:26.324 12:35:25 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:26.324 12:35:25 -- common/autotest_common.sh@1217 -- # return 0 00:07:26.324 12:35:25 -- 
target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:26.324 12:35:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:26.324 12:35:25 -- common/autotest_common.sh@10 -- # set +x 00:07:26.324 12:35:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:26.324 12:35:25 -- target/rpc.sh@81 -- # seq 1 5 00:07:26.324 12:35:25 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:26.324 12:35:25 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:26.324 12:35:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:26.324 12:35:25 -- common/autotest_common.sh@10 -- # set +x 00:07:26.324 12:35:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:26.324 12:35:25 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:26.324 12:35:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:26.324 12:35:25 -- common/autotest_common.sh@10 -- # set +x 00:07:26.324 [2024-04-16 12:35:25.278924] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:26.324 12:35:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:26.324 12:35:25 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:26.324 12:35:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:26.324 12:35:25 -- common/autotest_common.sh@10 -- # set +x 00:07:26.324 12:35:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:26.324 12:35:25 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:26.324 12:35:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:26.324 12:35:25 -- common/autotest_common.sh@10 -- # set +x 00:07:26.324 12:35:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:26.324 12:35:25 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:26.888 12:35:25 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:26.888 12:35:25 -- common/autotest_common.sh@1184 -- # local i=0 00:07:26.888 12:35:25 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:07:26.888 12:35:25 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:07:26.888 12:35:25 -- common/autotest_common.sh@1191 -- # sleep 2 00:07:29.413 12:35:27 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:07:29.413 12:35:27 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:07:29.413 12:35:27 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:07:29.413 12:35:27 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:07:29.414 12:35:27 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:07:29.414 12:35:27 -- common/autotest_common.sh@1194 -- # return 0 00:07:29.414 12:35:27 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:29.414 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:29.414 12:35:27 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:29.414 12:35:27 -- common/autotest_common.sh@1205 -- # local i=0 00:07:29.414 12:35:27 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:07:29.414 12:35:27 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 
00:07:29.414 12:35:27 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:07:29.414 12:35:27 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:29.414 12:35:27 -- common/autotest_common.sh@1217 -- # return 0 00:07:29.414 12:35:27 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:29.414 12:35:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:29.414 12:35:27 -- common/autotest_common.sh@10 -- # set +x 00:07:29.414 12:35:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:29.414 12:35:27 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:29.414 12:35:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:29.414 12:35:27 -- common/autotest_common.sh@10 -- # set +x 00:07:29.414 12:35:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:29.414 12:35:27 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:29.414 12:35:27 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:29.414 12:35:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:29.414 12:35:27 -- common/autotest_common.sh@10 -- # set +x 00:07:29.414 12:35:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:29.414 12:35:28 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:29.414 12:35:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:29.414 12:35:28 -- common/autotest_common.sh@10 -- # set +x 00:07:29.414 [2024-04-16 12:35:28.005482] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:29.414 12:35:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:29.414 12:35:28 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:29.414 12:35:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:29.414 12:35:28 -- common/autotest_common.sh@10 -- # set +x 00:07:29.414 12:35:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:29.414 12:35:28 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:29.414 12:35:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:29.414 12:35:28 -- common/autotest_common.sh@10 -- # set +x 00:07:29.414 12:35:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:29.414 12:35:28 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:29.670 12:35:28 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:29.670 12:35:28 -- common/autotest_common.sh@1184 -- # local i=0 00:07:29.670 12:35:28 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:07:29.670 12:35:28 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:07:29.671 12:35:28 -- common/autotest_common.sh@1191 -- # sleep 2 00:07:32.196 12:35:30 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:07:32.196 12:35:30 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:07:32.196 12:35:30 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:07:32.196 12:35:30 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:07:32.196 12:35:30 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:07:32.196 12:35:30 -- 
common/autotest_common.sh@1194 -- # return 0 00:07:32.196 12:35:30 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:32.196 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:32.196 12:35:30 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:32.196 12:35:30 -- common/autotest_common.sh@1205 -- # local i=0 00:07:32.196 12:35:30 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:07:32.196 12:35:30 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:32.196 12:35:30 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:07:32.196 12:35:30 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:32.196 12:35:30 -- common/autotest_common.sh@1217 -- # return 0 00:07:32.196 12:35:30 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:32.196 12:35:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:32.196 12:35:30 -- common/autotest_common.sh@10 -- # set +x 00:07:32.196 12:35:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:32.196 12:35:30 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:32.196 12:35:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:32.196 12:35:30 -- common/autotest_common.sh@10 -- # set +x 00:07:32.196 12:35:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:32.196 12:35:30 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:32.196 12:35:30 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:32.196 12:35:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:32.196 12:35:30 -- common/autotest_common.sh@10 -- # set +x 00:07:32.196 12:35:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:32.196 12:35:30 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:32.196 12:35:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:32.196 12:35:30 -- common/autotest_common.sh@10 -- # set +x 00:07:32.196 [2024-04-16 12:35:30.818022] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:32.196 12:35:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:32.196 12:35:30 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:32.196 12:35:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:32.196 12:35:30 -- common/autotest_common.sh@10 -- # set +x 00:07:32.196 12:35:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:32.196 12:35:30 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:32.196 12:35:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:32.196 12:35:30 -- common/autotest_common.sh@10 -- # set +x 00:07:32.196 12:35:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:32.196 12:35:30 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:32.453 12:35:31 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:32.453 12:35:31 -- common/autotest_common.sh@1184 -- # local i=0 00:07:32.453 12:35:31 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:07:32.453 12:35:31 -- common/autotest_common.sh@1186 -- 
# [[ -n '' ]] 00:07:32.453 12:35:31 -- common/autotest_common.sh@1191 -- # sleep 2 00:07:34.976 12:35:33 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:07:34.976 12:35:33 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:07:34.976 12:35:33 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:07:34.976 12:35:33 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:07:34.976 12:35:33 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:07:34.976 12:35:33 -- common/autotest_common.sh@1194 -- # return 0 00:07:34.977 12:35:33 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:34.977 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:34.977 12:35:33 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:34.977 12:35:33 -- common/autotest_common.sh@1205 -- # local i=0 00:07:34.977 12:35:33 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:07:34.977 12:35:33 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:34.977 12:35:33 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:07:34.977 12:35:33 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:34.977 12:35:33 -- common/autotest_common.sh@1217 -- # return 0 00:07:34.977 12:35:33 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:34.977 12:35:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:34.977 12:35:33 -- common/autotest_common.sh@10 -- # set +x 00:07:34.977 12:35:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:34.977 12:35:33 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:34.977 12:35:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:34.977 12:35:33 -- common/autotest_common.sh@10 -- # set +x 00:07:34.977 12:35:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:34.977 12:35:33 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:34.977 12:35:33 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:34.977 12:35:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:34.977 12:35:33 -- common/autotest_common.sh@10 -- # set +x 00:07:34.977 12:35:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:34.977 12:35:33 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:34.977 12:35:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:34.977 12:35:33 -- common/autotest_common.sh@10 -- # set +x 00:07:34.977 [2024-04-16 12:35:33.586726] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:34.977 12:35:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:34.977 12:35:33 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:34.977 12:35:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:34.977 12:35:33 -- common/autotest_common.sh@10 -- # set +x 00:07:34.977 12:35:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:34.977 12:35:33 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:34.977 12:35:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:34.977 12:35:33 -- common/autotest_common.sh@10 -- # set +x 00:07:34.977 12:35:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:34.977 
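Each connect in the loop above is gated by waitforserial, which polls lsblk until a block device advertising the subsystem serial appears; waitforserial_disconnect does the inverse after nvme disconnect. A simplified sketch of that polling pattern (the suite's helper also tracks an expected device count and passes a hostnqn/hostid pair, both omitted here):

# Poll until the namespace surfaces as a block device (sketch).
waitforserial() {
    local serial=$1 i=0
    while (( i++ <= 15 )); do
        (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") >= 1 )) && return 0
        sleep 2
    done
    return 1
}

nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
waitforserial SPDKISFASTANDAWESOME || exit 1
nvme disconnect -n nqn.2016-06.io.spdk:cnode1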
12:35:33 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:35.234 12:35:34 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:35.235 12:35:34 -- common/autotest_common.sh@1184 -- # local i=0 00:07:35.235 12:35:34 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:07:35.235 12:35:34 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:07:35.235 12:35:34 -- common/autotest_common.sh@1191 -- # sleep 2 00:07:37.762 12:35:36 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:07:37.762 12:35:36 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:07:37.762 12:35:36 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:07:37.762 12:35:36 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:07:37.762 12:35:36 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:07:37.762 12:35:36 -- common/autotest_common.sh@1194 -- # return 0 00:07:37.762 12:35:36 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:37.762 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:37.762 12:35:36 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:37.762 12:35:36 -- common/autotest_common.sh@1205 -- # local i=0 00:07:37.762 12:35:36 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:07:37.762 12:35:36 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:37.762 12:35:36 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:07:37.762 12:35:36 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:37.762 12:35:36 -- common/autotest_common.sh@1217 -- # return 0 00:07:37.762 12:35:36 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:37.762 12:35:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:37.762 12:35:36 -- common/autotest_common.sh@10 -- # set +x 00:07:37.762 12:35:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:37.762 12:35:36 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:37.762 12:35:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:37.762 12:35:36 -- common/autotest_common.sh@10 -- # set +x 00:07:37.762 12:35:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:37.762 12:35:36 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:37.762 12:35:36 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:37.762 12:35:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:37.762 12:35:36 -- common/autotest_common.sh@10 -- # set +x 00:07:37.762 12:35:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:37.762 12:35:36 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:37.762 12:35:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:37.762 12:35:36 -- common/autotest_common.sh@10 -- # set +x 00:07:37.762 [2024-04-16 12:35:36.410768] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:37.762 12:35:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:37.762 12:35:36 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:37.762 
12:35:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:37.762 12:35:36 -- common/autotest_common.sh@10 -- # set +x 00:07:37.762 12:35:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:37.762 12:35:36 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:37.762 12:35:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:37.762 12:35:36 -- common/autotest_common.sh@10 -- # set +x 00:07:37.762 12:35:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:37.762 12:35:36 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:38.020 12:35:37 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:38.020 12:35:37 -- common/autotest_common.sh@1184 -- # local i=0 00:07:38.020 12:35:37 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:07:38.020 12:35:37 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:07:38.020 12:35:37 -- common/autotest_common.sh@1191 -- # sleep 2 00:07:40.546 12:35:39 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:07:40.546 12:35:39 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:07:40.546 12:35:39 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:07:40.546 12:35:39 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:07:40.546 12:35:39 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:07:40.546 12:35:39 -- common/autotest_common.sh@1194 -- # return 0 00:07:40.546 12:35:39 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:40.546 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:40.546 12:35:39 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:40.546 12:35:39 -- common/autotest_common.sh@1205 -- # local i=0 00:07:40.546 12:35:39 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:07:40.546 12:35:39 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:40.546 12:35:39 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:07:40.546 12:35:39 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:40.546 12:35:39 -- common/autotest_common.sh@1217 -- # return 0 00:07:40.546 12:35:39 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:40.547 12:35:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:40.547 12:35:39 -- common/autotest_common.sh@10 -- # set +x 00:07:40.547 12:35:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:40.547 12:35:39 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:40.547 12:35:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:40.547 12:35:39 -- common/autotest_common.sh@10 -- # set +x 00:07:40.547 12:35:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:40.547 12:35:39 -- target/rpc.sh@99 -- # seq 1 5 00:07:40.547 12:35:39 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:40.547 12:35:39 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:40.547 12:35:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:40.547 12:35:39 -- common/autotest_common.sh@10 -- # set +x 00:07:40.547 12:35:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:40.547 12:35:39 
-- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:40.547 12:35:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:40.547 12:35:39 -- common/autotest_common.sh@10 -- # set +x 00:07:40.547 [2024-04-16 12:35:39.147483] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:40.547 12:35:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:40.547 12:35:39 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:40.547 12:35:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:40.547 12:35:39 -- common/autotest_common.sh@10 -- # set +x 00:07:40.547 12:35:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:40.547 12:35:39 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:40.547 12:35:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:40.547 12:35:39 -- common/autotest_common.sh@10 -- # set +x 00:07:40.547 12:35:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:40.547 12:35:39 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:40.547 12:35:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:40.547 12:35:39 -- common/autotest_common.sh@10 -- # set +x 00:07:40.547 12:35:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:40.547 12:35:39 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:40.547 12:35:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:40.547 12:35:39 -- common/autotest_common.sh@10 -- # set +x 00:07:40.547 12:35:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:40.547 12:35:39 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:40.547 12:35:39 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:40.547 12:35:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:40.547 12:35:39 -- common/autotest_common.sh@10 -- # set +x 00:07:40.547 12:35:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:40.547 12:35:39 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:40.547 12:35:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:40.547 12:35:39 -- common/autotest_common.sh@10 -- # set +x 00:07:40.547 [2024-04-16 12:35:39.195559] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:40.547 12:35:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:40.547 12:35:39 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:40.547 12:35:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:40.547 12:35:39 -- common/autotest_common.sh@10 -- # set +x 00:07:40.547 12:35:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:40.547 12:35:39 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:40.547 12:35:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:40.547 12:35:39 -- common/autotest_common.sh@10 -- # set +x 00:07:40.547 12:35:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:40.547 12:35:39 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:40.547 12:35:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:40.547 12:35:39 -- 
common/autotest_common.sh@10 -- # set +x 00:07:40.547 12:35:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:40.547 12:35:39 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:40.547 12:35:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:40.547 12:35:39 -- common/autotest_common.sh@10 -- # set +x 00:07:40.547 12:35:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:40.547 12:35:39 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:40.547 12:35:39 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:40.547 12:35:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:40.547 12:35:39 -- common/autotest_common.sh@10 -- # set +x 00:07:40.547 12:35:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:40.547 12:35:39 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:40.547 12:35:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:40.547 12:35:39 -- common/autotest_common.sh@10 -- # set +x 00:07:40.547 [2024-04-16 12:35:39.243728] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:40.547 12:35:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:40.547 12:35:39 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:40.547 12:35:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:40.547 12:35:39 -- common/autotest_common.sh@10 -- # set +x 00:07:40.547 12:35:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:40.547 12:35:39 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:40.547 12:35:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:40.547 12:35:39 -- common/autotest_common.sh@10 -- # set +x 00:07:40.547 12:35:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:40.547 12:35:39 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:40.547 12:35:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:40.547 12:35:39 -- common/autotest_common.sh@10 -- # set +x 00:07:40.547 12:35:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:40.547 12:35:39 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:40.547 12:35:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:40.547 12:35:39 -- common/autotest_common.sh@10 -- # set +x 00:07:40.547 12:35:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:40.547 12:35:39 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:40.547 12:35:39 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:40.547 12:35:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:40.547 12:35:39 -- common/autotest_common.sh@10 -- # set +x 00:07:40.547 12:35:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:40.547 12:35:39 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:40.547 12:35:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:40.547 12:35:39 -- common/autotest_common.sh@10 -- # set +x 00:07:40.547 [2024-04-16 12:35:39.291887] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:40.547 12:35:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:40.547 
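(Note on the loop above: each pass of the target/rpc.sh@99 loop runs one full subsystem lifecycle. A minimal plain-shell sketch of a single pass, using rpc.py and the exact arguments shown in these xtrace records — rpc_cmd in the harness wraps this script, and the Malloc1 bdev is assumed to already exist from earlier setup:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # create the subsystem, expose it over NVMe/TCP, attach a namespace, open it to any host
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
    # tear it back down: remove namespace 1 first, then the subsystem itself
    $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

Each step corresponds to one "[[ 0 == 0 ]]" success check in the records that follow.)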
12:35:39 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:40.547 12:35:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:40.547 12:35:39 -- common/autotest_common.sh@10 -- # set +x 00:07:40.547 12:35:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:40.547 12:35:39 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:40.547 12:35:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:40.547 12:35:39 -- common/autotest_common.sh@10 -- # set +x 00:07:40.547 12:35:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:40.547 12:35:39 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:40.547 12:35:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:40.547 12:35:39 -- common/autotest_common.sh@10 -- # set +x 00:07:40.547 12:35:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:40.547 12:35:39 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:40.547 12:35:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:40.547 12:35:39 -- common/autotest_common.sh@10 -- # set +x 00:07:40.547 12:35:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:40.547 12:35:39 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:40.547 12:35:39 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:40.547 12:35:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:40.547 12:35:39 -- common/autotest_common.sh@10 -- # set +x 00:07:40.547 12:35:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:40.547 12:35:39 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:40.547 12:35:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:40.547 12:35:39 -- common/autotest_common.sh@10 -- # set +x 00:07:40.547 [2024-04-16 12:35:39.340059] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:40.547 12:35:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:40.547 12:35:39 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:40.547 12:35:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:40.547 12:35:39 -- common/autotest_common.sh@10 -- # set +x 00:07:40.547 12:35:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:40.547 12:35:39 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:40.547 12:35:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:40.547 12:35:39 -- common/autotest_common.sh@10 -- # set +x 00:07:40.547 12:35:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:40.547 12:35:39 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:40.547 12:35:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:40.547 12:35:39 -- common/autotest_common.sh@10 -- # set +x 00:07:40.547 12:35:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:40.547 12:35:39 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:40.547 12:35:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:40.547 12:35:39 -- common/autotest_common.sh@10 -- # set +x 00:07:40.547 12:35:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:40.547 12:35:39 -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 
00:07:40.547 12:35:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:40.547 12:35:39 -- common/autotest_common.sh@10 -- # set +x 00:07:40.547 12:35:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:40.547 12:35:39 -- target/rpc.sh@110 -- # stats='{ 00:07:40.547 "tick_rate": 2700000000, 00:07:40.547 "poll_groups": [ 00:07:40.547 { 00:07:40.547 "name": "nvmf_tgt_poll_group_0", 00:07:40.547 "admin_qpairs": 2, 00:07:40.548 "io_qpairs": 84, 00:07:40.548 "current_admin_qpairs": 0, 00:07:40.548 "current_io_qpairs": 0, 00:07:40.548 "pending_bdev_io": 0, 00:07:40.548 "completed_nvme_io": 184, 00:07:40.548 "transports": [ 00:07:40.548 { 00:07:40.548 "trtype": "TCP" 00:07:40.548 } 00:07:40.548 ] 00:07:40.548 }, 00:07:40.548 { 00:07:40.548 "name": "nvmf_tgt_poll_group_1", 00:07:40.548 "admin_qpairs": 2, 00:07:40.548 "io_qpairs": 84, 00:07:40.548 "current_admin_qpairs": 0, 00:07:40.548 "current_io_qpairs": 0, 00:07:40.548 "pending_bdev_io": 0, 00:07:40.548 "completed_nvme_io": 134, 00:07:40.548 "transports": [ 00:07:40.548 { 00:07:40.548 "trtype": "TCP" 00:07:40.548 } 00:07:40.548 ] 00:07:40.548 }, 00:07:40.548 { 00:07:40.548 "name": "nvmf_tgt_poll_group_2", 00:07:40.548 "admin_qpairs": 1, 00:07:40.548 "io_qpairs": 84, 00:07:40.548 "current_admin_qpairs": 0, 00:07:40.548 "current_io_qpairs": 0, 00:07:40.548 "pending_bdev_io": 0, 00:07:40.548 "completed_nvme_io": 186, 00:07:40.548 "transports": [ 00:07:40.548 { 00:07:40.548 "trtype": "TCP" 00:07:40.548 } 00:07:40.548 ] 00:07:40.548 }, 00:07:40.548 { 00:07:40.548 "name": "nvmf_tgt_poll_group_3", 00:07:40.548 "admin_qpairs": 2, 00:07:40.548 "io_qpairs": 84, 00:07:40.548 "current_admin_qpairs": 0, 00:07:40.548 "current_io_qpairs": 0, 00:07:40.548 "pending_bdev_io": 0, 00:07:40.548 "completed_nvme_io": 182, 00:07:40.548 "transports": [ 00:07:40.548 { 00:07:40.548 "trtype": "TCP" 00:07:40.548 } 00:07:40.548 ] 00:07:40.548 } 00:07:40.548 ] 00:07:40.548 }' 00:07:40.548 12:35:39 -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:07:40.548 12:35:39 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:07:40.548 12:35:39 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:07:40.548 12:35:39 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:40.548 12:35:39 -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:07:40.548 12:35:39 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:07:40.548 12:35:39 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:07:40.548 12:35:39 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:07:40.548 12:35:39 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:40.548 12:35:39 -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:07:40.548 12:35:39 -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:07:40.548 12:35:39 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:07:40.548 12:35:39 -- target/rpc.sh@123 -- # nvmftestfini 00:07:40.548 12:35:39 -- nvmf/common.sh@477 -- # nvmfcleanup 00:07:40.548 12:35:39 -- nvmf/common.sh@117 -- # sync 00:07:40.548 12:35:39 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:40.548 12:35:39 -- nvmf/common.sh@120 -- # set +e 00:07:40.548 12:35:39 -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:40.548 12:35:39 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:40.548 rmmod nvme_tcp 00:07:40.548 rmmod nvme_fabrics 00:07:40.548 rmmod nvme_keyring 00:07:40.548 12:35:39 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:40.548 12:35:39 -- nvmf/common.sh@124 -- # set -e 00:07:40.548 12:35:39 -- 
nvmf/common.sh@125 -- # return 0 00:07:40.548 12:35:39 -- nvmf/common.sh@478 -- # '[' -n 1092494 ']' 00:07:40.548 12:35:39 -- nvmf/common.sh@479 -- # killprocess 1092494 00:07:40.548 12:35:39 -- common/autotest_common.sh@936 -- # '[' -z 1092494 ']' 00:07:40.548 12:35:39 -- common/autotest_common.sh@940 -- # kill -0 1092494 00:07:40.548 12:35:39 -- common/autotest_common.sh@941 -- # uname 00:07:40.548 12:35:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:40.548 12:35:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1092494 00:07:40.548 12:35:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:40.548 12:35:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:40.548 12:35:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1092494' 00:07:40.548 killing process with pid 1092494 00:07:40.548 12:35:39 -- common/autotest_common.sh@955 -- # kill 1092494 00:07:40.548 12:35:39 -- common/autotest_common.sh@960 -- # wait 1092494 00:07:40.807 12:35:39 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:07:40.807 12:35:39 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:07:40.807 12:35:39 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:07:40.807 12:35:39 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:40.807 12:35:39 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:40.807 12:35:39 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:40.807 12:35:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:40.807 12:35:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:43.343 12:35:41 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:43.343 00:07:43.343 real 0m25.815s 00:07:43.343 user 1m22.056s 00:07:43.343 sys 0m4.520s 00:07:43.343 12:35:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:43.343 12:35:41 -- common/autotest_common.sh@10 -- # set +x 00:07:43.343 ************************************ 00:07:43.343 END TEST nvmf_rpc 00:07:43.343 ************************************ 00:07:43.343 12:35:41 -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:07:43.343 12:35:41 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:43.343 12:35:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:43.343 12:35:41 -- common/autotest_common.sh@10 -- # set +x 00:07:43.343 ************************************ 00:07:43.343 START TEST nvmf_invalid 00:07:43.343 ************************************ 00:07:43.343 12:35:42 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:07:43.343 * Looking for test storage... 
00:07:43.343 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:43.343 12:35:42 -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:43.343 12:35:42 -- nvmf/common.sh@7 -- # uname -s 00:07:43.343 12:35:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:43.343 12:35:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:43.343 12:35:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:43.343 12:35:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:43.343 12:35:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:43.343 12:35:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:43.343 12:35:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:43.343 12:35:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:43.343 12:35:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:43.343 12:35:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:43.343 12:35:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:07:43.343 12:35:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:07:43.343 12:35:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:43.343 12:35:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:43.343 12:35:42 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:43.343 12:35:42 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:43.343 12:35:42 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:43.343 12:35:42 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:43.343 12:35:42 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:43.343 12:35:42 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:43.344 12:35:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.344 12:35:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.344 12:35:42 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.344 12:35:42 -- paths/export.sh@5 -- # export PATH 00:07:43.344 12:35:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.344 12:35:42 -- nvmf/common.sh@47 -- # : 0 00:07:43.344 12:35:42 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:43.344 12:35:42 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:43.344 12:35:42 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:43.344 12:35:42 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:43.344 12:35:42 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:43.344 12:35:42 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:43.344 12:35:42 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:43.344 12:35:42 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:43.344 12:35:42 -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:07:43.344 12:35:42 -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:43.344 12:35:42 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:07:43.344 12:35:42 -- target/invalid.sh@14 -- # target=foobar 00:07:43.344 12:35:42 -- target/invalid.sh@16 -- # RANDOM=0 00:07:43.344 12:35:42 -- target/invalid.sh@34 -- # nvmftestinit 00:07:43.344 12:35:42 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:07:43.344 12:35:42 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:43.344 12:35:42 -- nvmf/common.sh@437 -- # prepare_net_devs 00:07:43.344 12:35:42 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:07:43.344 12:35:42 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:07:43.344 12:35:42 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:43.344 12:35:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:43.344 12:35:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:43.344 12:35:42 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:07:43.344 12:35:42 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:07:43.344 12:35:42 -- nvmf/common.sh@285 -- # xtrace_disable 00:07:43.344 12:35:42 -- common/autotest_common.sh@10 -- # set +x 00:07:45.878 12:35:44 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:45.878 12:35:44 -- nvmf/common.sh@291 -- # pci_devs=() 00:07:45.878 12:35:44 -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:45.878 12:35:44 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:45.878 12:35:44 -- 
nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:45.878 12:35:44 -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:45.878 12:35:44 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:45.878 12:35:44 -- nvmf/common.sh@295 -- # net_devs=() 00:07:45.878 12:35:44 -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:45.878 12:35:44 -- nvmf/common.sh@296 -- # e810=() 00:07:45.878 12:35:44 -- nvmf/common.sh@296 -- # local -ga e810 00:07:45.878 12:35:44 -- nvmf/common.sh@297 -- # x722=() 00:07:45.878 12:35:44 -- nvmf/common.sh@297 -- # local -ga x722 00:07:45.878 12:35:44 -- nvmf/common.sh@298 -- # mlx=() 00:07:45.878 12:35:44 -- nvmf/common.sh@298 -- # local -ga mlx 00:07:45.878 12:35:44 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:45.878 12:35:44 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:45.878 12:35:44 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:45.878 12:35:44 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:45.878 12:35:44 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:45.878 12:35:44 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:45.878 12:35:44 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:45.878 12:35:44 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:45.878 12:35:44 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:45.878 12:35:44 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:45.878 12:35:44 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:45.878 12:35:44 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:45.878 12:35:44 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:45.878 12:35:44 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:45.878 12:35:44 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:45.878 12:35:44 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:45.878 12:35:44 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:45.878 12:35:44 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:45.878 12:35:44 -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:07:45.878 Found 0000:82:00.0 (0x8086 - 0x159b) 00:07:45.878 12:35:44 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:45.878 12:35:44 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:45.878 12:35:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:45.878 12:35:44 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:45.878 12:35:44 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:45.878 12:35:44 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:45.878 12:35:44 -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:07:45.878 Found 0000:82:00.1 (0x8086 - 0x159b) 00:07:45.878 12:35:44 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:45.878 12:35:44 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:45.878 12:35:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:45.878 12:35:44 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:45.878 12:35:44 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:45.878 12:35:44 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:45.878 12:35:44 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:45.878 12:35:44 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:45.878 12:35:44 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:45.878 
12:35:44 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:45.878 12:35:44 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:45.878 12:35:44 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:45.878 12:35:44 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:07:45.878 Found net devices under 0000:82:00.0: cvl_0_0 00:07:45.878 12:35:44 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:45.878 12:35:44 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:45.878 12:35:44 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:45.878 12:35:44 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:45.878 12:35:44 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:45.878 12:35:44 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:07:45.878 Found net devices under 0000:82:00.1: cvl_0_1 00:07:45.878 12:35:44 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:45.878 12:35:44 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:07:45.878 12:35:44 -- nvmf/common.sh@403 -- # is_hw=yes 00:07:45.878 12:35:44 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:07:45.878 12:35:44 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:07:45.878 12:35:44 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:07:45.878 12:35:44 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:45.878 12:35:44 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:45.878 12:35:44 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:45.878 12:35:44 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:45.878 12:35:44 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:45.878 12:35:44 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:45.878 12:35:44 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:45.878 12:35:44 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:45.878 12:35:44 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:45.878 12:35:44 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:45.878 12:35:44 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:45.878 12:35:44 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:45.878 12:35:44 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:45.878 12:35:44 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:45.878 12:35:44 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:45.878 12:35:44 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:45.878 12:35:44 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:45.878 12:35:44 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:45.878 12:35:44 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:45.878 12:35:44 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:45.878 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:45.878 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.257 ms 00:07:45.878 00:07:45.878 --- 10.0.0.2 ping statistics --- 00:07:45.878 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:45.878 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:07:45.878 12:35:44 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:45.878 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:45.878 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.178 ms 00:07:45.878 00:07:45.878 --- 10.0.0.1 ping statistics --- 00:07:45.878 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:45.878 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:07:45.878 12:35:44 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:45.878 12:35:44 -- nvmf/common.sh@411 -- # return 0 00:07:45.878 12:35:44 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:07:45.878 12:35:44 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:45.878 12:35:44 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:07:45.878 12:35:44 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:07:45.878 12:35:44 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:45.878 12:35:44 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:07:45.878 12:35:44 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:07:45.878 12:35:44 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:07:45.878 12:35:44 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:07:45.878 12:35:44 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:45.878 12:35:44 -- common/autotest_common.sh@10 -- # set +x 00:07:45.878 12:35:44 -- nvmf/common.sh@470 -- # nvmfpid=1097300 00:07:45.878 12:35:44 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:45.878 12:35:44 -- nvmf/common.sh@471 -- # waitforlisten 1097300 00:07:45.878 12:35:44 -- common/autotest_common.sh@817 -- # '[' -z 1097300 ']' 00:07:45.878 12:35:44 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:45.878 12:35:44 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:45.878 12:35:44 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:45.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:45.878 12:35:44 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:45.878 12:35:44 -- common/autotest_common.sh@10 -- # set +x 00:07:45.878 [2024-04-16 12:35:44.831017] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 00:07:45.879 [2024-04-16 12:35:44.831091] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:45.879 EAL: No free 2048 kB hugepages reported on node 1 00:07:45.879 [2024-04-16 12:35:44.911972] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:46.136 [2024-04-16 12:35:45.031587] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:46.136 [2024-04-16 12:35:45.031647] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:46.136 [2024-04-16 12:35:45.031664] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:46.136 [2024-04-16 12:35:45.031677] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:46.136 [2024-04-16 12:35:45.031690] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:46.136 [2024-04-16 12:35:45.031754] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:46.136 [2024-04-16 12:35:45.031784] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:46.136 [2024-04-16 12:35:45.031843] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:46.136 [2024-04-16 12:35:45.031847] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.078 12:35:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:47.078 12:35:45 -- common/autotest_common.sh@850 -- # return 0 00:07:47.078 12:35:45 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:07:47.078 12:35:45 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:47.078 12:35:45 -- common/autotest_common.sh@10 -- # set +x 00:07:47.078 12:35:45 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:47.078 12:35:45 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:07:47.078 12:35:45 -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode27104 00:07:47.078 [2024-04-16 12:35:46.036251] nvmf_rpc.c: 401:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:07:47.078 12:35:46 -- target/invalid.sh@40 -- # out='request: 00:07:47.078 { 00:07:47.078 "nqn": "nqn.2016-06.io.spdk:cnode27104", 00:07:47.078 "tgt_name": "foobar", 00:07:47.078 "method": "nvmf_create_subsystem", 00:07:47.078 "req_id": 1 00:07:47.078 } 00:07:47.078 Got JSON-RPC error response 00:07:47.078 response: 00:07:47.078 { 00:07:47.078 "code": -32603, 00:07:47.078 "message": "Unable to find target foobar" 00:07:47.078 }' 00:07:47.078 12:35:46 -- target/invalid.sh@41 -- # [[ request: 00:07:47.078 { 00:07:47.078 "nqn": "nqn.2016-06.io.spdk:cnode27104", 00:07:47.078 "tgt_name": "foobar", 00:07:47.078 "method": "nvmf_create_subsystem", 00:07:47.078 "req_id": 1 00:07:47.078 } 00:07:47.078 Got JSON-RPC error response 00:07:47.078 response: 00:07:47.078 { 00:07:47.078 "code": -32603, 00:07:47.078 "message": "Unable to find target foobar" 00:07:47.078 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:07:47.078 12:35:46 -- target/invalid.sh@45 -- # echo -e '\x1f' 00:07:47.078 12:35:46 -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode2065 00:07:47.360 [2024-04-16 12:35:46.297189] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2065: invalid serial number 'SPDKISFASTANDAWESOME' 00:07:47.360 12:35:46 -- target/invalid.sh@45 -- # out='request: 00:07:47.360 { 00:07:47.360 "nqn": "nqn.2016-06.io.spdk:cnode2065", 00:07:47.360 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:07:47.360 "method": "nvmf_create_subsystem", 00:07:47.360 "req_id": 1 00:07:47.360 } 00:07:47.360 Got JSON-RPC error response 00:07:47.360 response: 00:07:47.360 { 00:07:47.360 "code": -32602, 00:07:47.360 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:07:47.360 }' 00:07:47.360 12:35:46 -- target/invalid.sh@46 -- # [[ request: 00:07:47.360 { 00:07:47.360 "nqn": "nqn.2016-06.io.spdk:cnode2065", 00:07:47.360 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:07:47.360 "method": "nvmf_create_subsystem", 00:07:47.360 "req_id": 1 00:07:47.360 } 00:07:47.360 Got JSON-RPC error response 00:07:47.360 response: 00:07:47.360 { 
00:07:47.360 "code": -32602, 00:07:47.360 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:07:47.360 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:07:47.360 12:35:46 -- target/invalid.sh@50 -- # echo -e '\x1f' 00:07:47.360 12:35:46 -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode2609 00:07:47.617 [2024-04-16 12:35:46.545972] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2609: invalid model number 'SPDK_Controller' 00:07:47.617 12:35:46 -- target/invalid.sh@50 -- # out='request: 00:07:47.617 { 00:07:47.617 "nqn": "nqn.2016-06.io.spdk:cnode2609", 00:07:47.617 "model_number": "SPDK_Controller\u001f", 00:07:47.617 "method": "nvmf_create_subsystem", 00:07:47.617 "req_id": 1 00:07:47.617 } 00:07:47.617 Got JSON-RPC error response 00:07:47.617 response: 00:07:47.617 { 00:07:47.617 "code": -32602, 00:07:47.617 "message": "Invalid MN SPDK_Controller\u001f" 00:07:47.617 }' 00:07:47.617 12:35:46 -- target/invalid.sh@51 -- # [[ request: 00:07:47.617 { 00:07:47.617 "nqn": "nqn.2016-06.io.spdk:cnode2609", 00:07:47.617 "model_number": "SPDK_Controller\u001f", 00:07:47.617 "method": "nvmf_create_subsystem", 00:07:47.617 "req_id": 1 00:07:47.617 } 00:07:47.617 Got JSON-RPC error response 00:07:47.617 response: 00:07:47.617 { 00:07:47.617 "code": -32602, 00:07:47.617 "message": "Invalid MN SPDK_Controller\u001f" 00:07:47.617 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:07:47.617 12:35:46 -- target/invalid.sh@54 -- # gen_random_s 21 00:07:47.617 12:35:46 -- target/invalid.sh@19 -- # local length=21 ll 00:07:47.617 12:35:46 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:07:47.617 12:35:46 -- target/invalid.sh@21 -- # local chars 00:07:47.617 12:35:46 -- target/invalid.sh@22 -- # local string 00:07:47.617 12:35:46 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:07:47.617 12:35:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:47.617 12:35:46 -- target/invalid.sh@25 -- # printf %x 116 00:07:47.617 12:35:46 -- target/invalid.sh@25 -- # echo -e '\x74' 00:07:47.617 12:35:46 -- target/invalid.sh@25 -- # string+=t 00:07:47.617 12:35:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:47.617 12:35:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:47.617 12:35:46 -- target/invalid.sh@25 -- # printf %x 101 00:07:47.617 12:35:46 -- target/invalid.sh@25 -- # echo -e '\x65' 00:07:47.618 12:35:46 -- target/invalid.sh@25 -- # string+=e 00:07:47.618 12:35:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:47.618 12:35:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:47.618 12:35:46 -- target/invalid.sh@25 -- # printf %x 110 00:07:47.618 12:35:46 -- target/invalid.sh@25 -- # echo -e '\x6e' 00:07:47.618 12:35:46 -- target/invalid.sh@25 -- # string+=n 00:07:47.618 12:35:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:47.618 12:35:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:47.618 12:35:46 -- target/invalid.sh@25 -- # printf %x 76 00:07:47.618 12:35:46 -- 
target/invalid.sh@25 -- # echo -e '\x4c' 00:07:47.618 12:35:46 -- target/invalid.sh@25 -- # string+=L 00:07:47.618 12:35:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:47.618 12:35:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:47.618 12:35:46 -- target/invalid.sh@25 -- # printf %x 69 00:07:47.618 12:35:46 -- target/invalid.sh@25 -- # echo -e '\x45' 00:07:47.618 12:35:46 -- target/invalid.sh@25 -- # string+=E 00:07:47.618 12:35:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:47.618 12:35:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:47.618 12:35:46 -- target/invalid.sh@25 -- # printf %x 115 00:07:47.618 12:35:46 -- target/invalid.sh@25 -- # echo -e '\x73' 00:07:47.618 12:35:46 -- target/invalid.sh@25 -- # string+=s 00:07:47.618 12:35:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:47.618 12:35:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:47.618 12:35:46 -- target/invalid.sh@25 -- # printf %x 43 00:07:47.618 12:35:46 -- target/invalid.sh@25 -- # echo -e '\x2b' 00:07:47.618 12:35:46 -- target/invalid.sh@25 -- # string+=+ 00:07:47.618 12:35:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:47.618 12:35:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:47.618 12:35:46 -- target/invalid.sh@25 -- # printf %x 122 00:07:47.618 12:35:46 -- target/invalid.sh@25 -- # echo -e '\x7a' 00:07:47.618 12:35:46 -- target/invalid.sh@25 -- # string+=z 00:07:47.618 12:35:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:47.618 12:35:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:47.618 12:35:46 -- target/invalid.sh@25 -- # printf %x 57 00:07:47.618 12:35:46 -- target/invalid.sh@25 -- # echo -e '\x39' 00:07:47.618 12:35:46 -- target/invalid.sh@25 -- # string+=9 00:07:47.618 12:35:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:47.618 12:35:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:47.618 12:35:46 -- target/invalid.sh@25 -- # printf %x 115 00:07:47.618 12:35:46 -- target/invalid.sh@25 -- # echo -e '\x73' 00:07:47.618 12:35:46 -- target/invalid.sh@25 -- # string+=s 00:07:47.618 12:35:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:47.618 12:35:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:47.618 12:35:46 -- target/invalid.sh@25 -- # printf %x 65 00:07:47.618 12:35:46 -- target/invalid.sh@25 -- # echo -e '\x41' 00:07:47.618 12:35:46 -- target/invalid.sh@25 -- # string+=A 00:07:47.618 12:35:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:47.618 12:35:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:47.618 12:35:46 -- target/invalid.sh@25 -- # printf %x 120 00:07:47.618 12:35:46 -- target/invalid.sh@25 -- # echo -e '\x78' 00:07:47.618 12:35:46 -- target/invalid.sh@25 -- # string+=x 00:07:47.618 12:35:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:47.618 12:35:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:47.618 12:35:46 -- target/invalid.sh@25 -- # printf %x 80 00:07:47.618 12:35:46 -- target/invalid.sh@25 -- # echo -e '\x50' 00:07:47.618 12:35:46 -- target/invalid.sh@25 -- # string+=P 00:07:47.618 12:35:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:47.618 12:35:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:47.618 12:35:46 -- target/invalid.sh@25 -- # printf %x 71 00:07:47.618 12:35:46 -- target/invalid.sh@25 -- # echo -e '\x47' 00:07:47.618 12:35:46 -- target/invalid.sh@25 -- # string+=G 00:07:47.618 12:35:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:47.618 12:35:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:47.618 12:35:46 -- target/invalid.sh@25 -- # printf %x 76 00:07:47.618 12:35:46 -- 
target/invalid.sh@25 -- # echo -e '\x4c' 00:07:47.618 12:35:46 -- target/invalid.sh@25 -- # string+=L 00:07:47.618 12:35:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:47.618 12:35:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:47.618 12:35:46 -- target/invalid.sh@25 -- # printf %x 46 00:07:47.618 12:35:46 -- target/invalid.sh@25 -- # echo -e '\x2e' 00:07:47.618 12:35:46 -- target/invalid.sh@25 -- # string+=. 00:07:47.618 12:35:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:47.618 12:35:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:47.618 12:35:46 -- target/invalid.sh@25 -- # printf %x 97 00:07:47.618 12:35:46 -- target/invalid.sh@25 -- # echo -e '\x61' 00:07:47.618 12:35:46 -- target/invalid.sh@25 -- # string+=a 00:07:47.618 12:35:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:47.618 12:35:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:47.618 12:35:46 -- target/invalid.sh@25 -- # printf %x 32 00:07:47.618 12:35:46 -- target/invalid.sh@25 -- # echo -e '\x20' 00:07:47.618 12:35:46 -- target/invalid.sh@25 -- # string+=' ' 00:07:47.618 12:35:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:47.618 12:35:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:47.618 12:35:46 -- target/invalid.sh@25 -- # printf %x 47 00:07:47.618 12:35:46 -- target/invalid.sh@25 -- # echo -e '\x2f' 00:07:47.618 12:35:46 -- target/invalid.sh@25 -- # string+=/ 00:07:47.618 12:35:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:47.618 12:35:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:47.618 12:35:46 -- target/invalid.sh@25 -- # printf %x 34 00:07:47.618 12:35:46 -- target/invalid.sh@25 -- # echo -e '\x22' 00:07:47.618 12:35:46 -- target/invalid.sh@25 -- # string+='"' 00:07:47.618 12:35:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:47.618 12:35:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:47.618 12:35:46 -- target/invalid.sh@25 -- # printf %x 52 00:07:47.618 12:35:46 -- target/invalid.sh@25 -- # echo -e '\x34' 00:07:47.618 12:35:46 -- target/invalid.sh@25 -- # string+=4 00:07:47.618 12:35:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:47.618 12:35:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:47.618 12:35:46 -- target/invalid.sh@28 -- # [[ t == \- ]] 00:07:47.618 12:35:46 -- target/invalid.sh@31 -- # echo 'tenLEs+z9sAxPGL.a /"4' 00:07:47.618 12:35:46 -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'tenLEs+z9sAxPGL.a /"4' nqn.2016-06.io.spdk:cnode4246 00:07:47.876 [2024-04-16 12:35:46.859003] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4246: invalid serial number 'tenLEs+z9sAxPGL.a /"4' 00:07:47.876 12:35:46 -- target/invalid.sh@54 -- # out='request: 00:07:47.876 { 00:07:47.876 "nqn": "nqn.2016-06.io.spdk:cnode4246", 00:07:47.876 "serial_number": "tenLEs+z9sAxPGL.a /\"4", 00:07:47.876 "method": "nvmf_create_subsystem", 00:07:47.876 "req_id": 1 00:07:47.876 } 00:07:47.876 Got JSON-RPC error response 00:07:47.876 response: 00:07:47.876 { 00:07:47.876 "code": -32602, 00:07:47.876 "message": "Invalid SN tenLEs+z9sAxPGL.a /\"4" 00:07:47.876 }' 00:07:47.876 12:35:46 -- target/invalid.sh@55 -- # [[ request: 00:07:47.876 { 00:07:47.876 "nqn": "nqn.2016-06.io.spdk:cnode4246", 00:07:47.876 "serial_number": "tenLEs+z9sAxPGL.a /\"4", 00:07:47.876 "method": "nvmf_create_subsystem", 00:07:47.876 "req_id": 1 00:07:47.876 } 00:07:47.876 Got JSON-RPC error response 00:07:47.876 response: 00:07:47.876 { 00:07:47.876 "code": -32602, 00:07:47.876 
"message": "Invalid SN tenLEs+z9sAxPGL.a /\"4" 00:07:47.876 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:07:47.876 12:35:46 -- target/invalid.sh@58 -- # gen_random_s 41 00:07:47.876 12:35:46 -- target/invalid.sh@19 -- # local length=41 ll 00:07:47.876 12:35:46 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:07:47.876 12:35:46 -- target/invalid.sh@21 -- # local chars 00:07:47.876 12:35:46 -- target/invalid.sh@22 -- # local string 00:07:47.876 12:35:46 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:07:47.876 12:35:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:47.876 12:35:46 -- target/invalid.sh@25 -- # printf %x 122 00:07:47.876 12:35:46 -- target/invalid.sh@25 -- # echo -e '\x7a' 00:07:47.876 12:35:46 -- target/invalid.sh@25 -- # string+=z 00:07:47.876 12:35:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:47.876 12:35:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:47.876 12:35:46 -- target/invalid.sh@25 -- # printf %x 118 00:07:47.876 12:35:46 -- target/invalid.sh@25 -- # echo -e '\x76' 00:07:47.876 12:35:46 -- target/invalid.sh@25 -- # string+=v 00:07:47.876 12:35:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:47.876 12:35:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:47.876 12:35:46 -- target/invalid.sh@25 -- # printf %x 68 00:07:47.876 12:35:46 -- target/invalid.sh@25 -- # echo -e '\x44' 00:07:47.876 12:35:46 -- target/invalid.sh@25 -- # string+=D 00:07:47.876 12:35:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:47.876 12:35:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:47.876 12:35:46 -- target/invalid.sh@25 -- # printf %x 33 00:07:47.876 12:35:46 -- target/invalid.sh@25 -- # echo -e '\x21' 00:07:47.876 12:35:46 -- target/invalid.sh@25 -- # string+='!' 
00:07:47.876 12:35:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:47.876 12:35:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:47.876 12:35:46 -- target/invalid.sh@25 -- # printf %x 54 00:07:47.876 12:35:46 -- target/invalid.sh@25 -- # echo -e '\x36' 00:07:47.876 12:35:46 -- target/invalid.sh@25 -- # string+=6 00:07:47.876 12:35:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:47.876 12:35:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:47.876 12:35:46 -- target/invalid.sh@25 -- # printf %x 67 00:07:47.876 12:35:46 -- target/invalid.sh@25 -- # echo -e '\x43' 00:07:47.876 12:35:46 -- target/invalid.sh@25 -- # string+=C 00:07:47.876 12:35:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:47.876 12:35:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:47.876 12:35:46 -- target/invalid.sh@25 -- # printf %x 97 00:07:47.876 12:35:46 -- target/invalid.sh@25 -- # echo -e '\x61' 00:07:47.876 12:35:46 -- target/invalid.sh@25 -- # string+=a 00:07:47.876 12:35:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:47.876 12:35:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:47.876 12:35:46 -- target/invalid.sh@25 -- # printf %x 117 00:07:47.876 12:35:46 -- target/invalid.sh@25 -- # echo -e '\x75' 00:07:47.876 12:35:46 -- target/invalid.sh@25 -- # string+=u 00:07:47.876 12:35:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:47.876 12:35:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:47.876 12:35:46 -- target/invalid.sh@25 -- # printf %x 57 00:07:47.876 12:35:46 -- target/invalid.sh@25 -- # echo -e '\x39' 00:07:47.876 12:35:46 -- target/invalid.sh@25 -- # string+=9 00:07:47.876 12:35:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:47.876 12:35:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:47.876 12:35:46 -- target/invalid.sh@25 -- # printf %x 76 00:07:47.876 12:35:46 -- target/invalid.sh@25 -- # echo -e '\x4c' 00:07:47.876 12:35:46 -- target/invalid.sh@25 -- # string+=L 00:07:47.876 12:35:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:47.876 12:35:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:47.876 12:35:46 -- target/invalid.sh@25 -- # printf %x 113 00:07:47.876 12:35:46 -- target/invalid.sh@25 -- # echo -e '\x71' 00:07:47.876 12:35:46 -- target/invalid.sh@25 -- # string+=q 00:07:47.876 12:35:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:47.876 12:35:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:47.876 12:35:46 -- target/invalid.sh@25 -- # printf %x 88 00:07:47.876 12:35:46 -- target/invalid.sh@25 -- # echo -e '\x58' 00:07:47.876 12:35:46 -- target/invalid.sh@25 -- # string+=X 00:07:47.876 12:35:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:47.876 12:35:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:47.876 12:35:46 -- target/invalid.sh@25 -- # printf %x 124 00:07:47.876 12:35:46 -- target/invalid.sh@25 -- # echo -e '\x7c' 00:07:47.876 12:35:46 -- target/invalid.sh@25 -- # string+='|' 00:07:47.876 12:35:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:47.876 12:35:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:47.876 12:35:46 -- target/invalid.sh@25 -- # printf %x 68 00:07:47.876 12:35:46 -- target/invalid.sh@25 -- # echo -e '\x44' 00:07:47.876 12:35:46 -- target/invalid.sh@25 -- # string+=D 00:07:47.876 12:35:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:47.876 12:35:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:47.876 12:35:46 -- target/invalid.sh@25 -- # printf %x 104 00:07:47.876 12:35:46 -- target/invalid.sh@25 -- # echo -e '\x68' 00:07:47.876 12:35:46 -- target/invalid.sh@25 -- # string+=h 
00:07:47.876 12:35:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:47.876 12:35:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:47.876 12:35:46 -- target/invalid.sh@25 -- # printf %x 79 00:07:47.876 12:35:46 -- target/invalid.sh@25 -- # echo -e '\x4f' 00:07:47.876 12:35:46 -- target/invalid.sh@25 -- # string+=O 00:07:47.876 12:35:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:47.876 12:35:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:48.134 12:35:46 -- target/invalid.sh@25 -- # printf %x 43 00:07:48.135 12:35:46 -- target/invalid.sh@25 -- # echo -e '\x2b' 00:07:48.135 12:35:46 -- target/invalid.sh@25 -- # string+=+ 00:07:48.135 12:35:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:48.135 12:35:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:48.135 12:35:46 -- target/invalid.sh@25 -- # printf %x 43 00:07:48.135 12:35:46 -- target/invalid.sh@25 -- # echo -e '\x2b' 00:07:48.135 12:35:46 -- target/invalid.sh@25 -- # string+=+ 00:07:48.135 12:35:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:48.135 12:35:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:48.135 12:35:46 -- target/invalid.sh@25 -- # printf %x 89 00:07:48.135 12:35:46 -- target/invalid.sh@25 -- # echo -e '\x59' 00:07:48.135 12:35:46 -- target/invalid.sh@25 -- # string+=Y 00:07:48.135 12:35:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:48.135 12:35:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:48.135 12:35:46 -- target/invalid.sh@25 -- # printf %x 56 00:07:48.135 12:35:46 -- target/invalid.sh@25 -- # echo -e '\x38' 00:07:48.135 12:35:46 -- target/invalid.sh@25 -- # string+=8 00:07:48.135 12:35:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:48.135 12:35:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:48.135 12:35:46 -- target/invalid.sh@25 -- # printf %x 90 00:07:48.135 12:35:46 -- target/invalid.sh@25 -- # echo -e '\x5a' 00:07:48.135 12:35:46 -- target/invalid.sh@25 -- # string+=Z 00:07:48.135 12:35:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:48.135 12:35:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:48.135 12:35:46 -- target/invalid.sh@25 -- # printf %x 96 00:07:48.135 12:35:46 -- target/invalid.sh@25 -- # echo -e '\x60' 00:07:48.135 12:35:46 -- target/invalid.sh@25 -- # string+='`' 00:07:48.135 12:35:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:48.135 12:35:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:48.135 12:35:46 -- target/invalid.sh@25 -- # printf %x 85 00:07:48.135 12:35:46 -- target/invalid.sh@25 -- # echo -e '\x55' 00:07:48.135 12:35:46 -- target/invalid.sh@25 -- # string+=U 00:07:48.135 12:35:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:48.135 12:35:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:48.135 12:35:46 -- target/invalid.sh@25 -- # printf %x 75 00:07:48.135 12:35:46 -- target/invalid.sh@25 -- # echo -e '\x4b' 00:07:48.135 12:35:46 -- target/invalid.sh@25 -- # string+=K 00:07:48.135 12:35:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:48.135 12:35:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:48.135 12:35:46 -- target/invalid.sh@25 -- # printf %x 110 00:07:48.135 12:35:46 -- target/invalid.sh@25 -- # echo -e '\x6e' 00:07:48.135 12:35:46 -- target/invalid.sh@25 -- # string+=n 00:07:48.135 12:35:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:48.135 12:35:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:48.135 12:35:46 -- target/invalid.sh@25 -- # printf %x 89 00:07:48.135 12:35:46 -- target/invalid.sh@25 -- # echo -e '\x59' 00:07:48.135 12:35:46 -- target/invalid.sh@25 -- # string+=Y 
00:07:48.135 12:35:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:48.135 12:35:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:48.135 12:35:46 -- target/invalid.sh@25 -- # printf %x 91 00:07:48.135 12:35:46 -- target/invalid.sh@25 -- # echo -e '\x5b' 00:07:48.135 12:35:46 -- target/invalid.sh@25 -- # string+='[' 00:07:48.135 12:35:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:48.135 12:35:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:48.135 12:35:46 -- target/invalid.sh@25 -- # printf %x 88 00:07:48.135 12:35:46 -- target/invalid.sh@25 -- # echo -e '\x58' 00:07:48.135 12:35:46 -- target/invalid.sh@25 -- # string+=X 00:07:48.135 12:35:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:48.135 12:35:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:48.135 12:35:46 -- target/invalid.sh@25 -- # printf %x 73 00:07:48.135 12:35:46 -- target/invalid.sh@25 -- # echo -e '\x49' 00:07:48.135 12:35:46 -- target/invalid.sh@25 -- # string+=I 00:07:48.135 12:35:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:48.135 12:35:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:48.135 12:35:46 -- target/invalid.sh@25 -- # printf %x 32 00:07:48.135 12:35:46 -- target/invalid.sh@25 -- # echo -e '\x20' 00:07:48.135 12:35:46 -- target/invalid.sh@25 -- # string+=' ' 00:07:48.135 12:35:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:48.135 12:35:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:48.135 12:35:46 -- target/invalid.sh@25 -- # printf %x 110 00:07:48.135 12:35:46 -- target/invalid.sh@25 -- # echo -e '\x6e' 00:07:48.135 12:35:46 -- target/invalid.sh@25 -- # string+=n 00:07:48.135 12:35:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:48.135 12:35:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:48.135 12:35:46 -- target/invalid.sh@25 -- # printf %x 67 00:07:48.135 12:35:47 -- target/invalid.sh@25 -- # echo -e '\x43' 00:07:48.135 12:35:47 -- target/invalid.sh@25 -- # string+=C 00:07:48.135 12:35:47 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:48.135 12:35:47 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:48.135 12:35:47 -- target/invalid.sh@25 -- # printf %x 64 00:07:48.135 12:35:47 -- target/invalid.sh@25 -- # echo -e '\x40' 00:07:48.135 12:35:47 -- target/invalid.sh@25 -- # string+=@ 00:07:48.135 12:35:47 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:48.135 12:35:47 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:48.135 12:35:47 -- target/invalid.sh@25 -- # printf %x 65 00:07:48.135 12:35:47 -- target/invalid.sh@25 -- # echo -e '\x41' 00:07:48.135 12:35:47 -- target/invalid.sh@25 -- # string+=A 00:07:48.135 12:35:47 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:48.135 12:35:47 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:48.135 12:35:47 -- target/invalid.sh@25 -- # printf %x 76 00:07:48.135 12:35:47 -- target/invalid.sh@25 -- # echo -e '\x4c' 00:07:48.135 12:35:47 -- target/invalid.sh@25 -- # string+=L 00:07:48.135 12:35:47 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:48.135 12:35:47 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:48.135 12:35:47 -- target/invalid.sh@25 -- # printf %x 92 00:07:48.135 12:35:47 -- target/invalid.sh@25 -- # echo -e '\x5c' 00:07:48.135 12:35:47 -- target/invalid.sh@25 -- # string+='\' 00:07:48.135 12:35:47 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:48.135 12:35:47 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:48.135 12:35:47 -- target/invalid.sh@25 -- # printf %x 80 00:07:48.135 12:35:47 -- target/invalid.sh@25 -- # echo -e '\x50' 00:07:48.135 12:35:47 -- target/invalid.sh@25 -- # string+=P 
00:07:48.135 12:35:47 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:48.135 12:35:47 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:48.135 12:35:47 -- target/invalid.sh@25 -- # printf %x 87 00:07:48.135 12:35:47 -- target/invalid.sh@25 -- # echo -e '\x57' 00:07:48.135 12:35:47 -- target/invalid.sh@25 -- # string+=W 00:07:48.135 12:35:47 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:48.135 12:35:47 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:48.135 12:35:47 -- target/invalid.sh@25 -- # printf %x 78 00:07:48.135 12:35:47 -- target/invalid.sh@25 -- # echo -e '\x4e' 00:07:48.135 12:35:47 -- target/invalid.sh@25 -- # string+=N 00:07:48.135 12:35:47 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:48.135 12:35:47 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:48.135 12:35:47 -- target/invalid.sh@25 -- # printf %x 60 00:07:48.135 12:35:47 -- target/invalid.sh@25 -- # echo -e '\x3c' 00:07:48.135 12:35:47 -- target/invalid.sh@25 -- # string+='<' 00:07:48.135 12:35:47 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:48.135 12:35:47 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:48.135 12:35:47 -- target/invalid.sh@25 -- # printf %x 87 00:07:48.135 12:35:47 -- target/invalid.sh@25 -- # echo -e '\x57' 00:07:48.135 12:35:47 -- target/invalid.sh@25 -- # string+=W 00:07:48.135 12:35:47 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:48.135 12:35:47 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:48.135 12:35:47 -- target/invalid.sh@28 -- # [[ z == \- ]] 00:07:48.135 12:35:47 -- target/invalid.sh@31 -- # echo 'zvD!6Cau9LqX|DhO++Y8Z`UKnY[XI nC@AL\PWN /dev/null' 00:07:50.708 12:35:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:53.241 12:35:51 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:53.241 00:07:53.241 real 0m9.688s 00:07:53.241 user 0m22.334s 00:07:53.241 sys 0m2.840s 00:07:53.241 12:35:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:53.242 12:35:51 -- common/autotest_common.sh@10 -- # set +x 00:07:53.242 ************************************ 00:07:53.242 END TEST nvmf_invalid 00:07:53.242 ************************************ 00:07:53.242 12:35:51 -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:53.242 12:35:51 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:53.242 12:35:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:53.242 12:35:51 -- common/autotest_common.sh@10 -- # set +x 00:07:53.242 ************************************ 00:07:53.242 START TEST nvmf_abort 00:07:53.242 ************************************ 00:07:53.242 12:35:51 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:53.242 * Looking for test storage... 
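For readability: the invalid.sh trace above is building a random test string one character at a time. printf %x renders an ASCII code as hex, echo -e turns the \xNN escape back into a character, and string+= appends it until ll reaches length. A minimal sketch of that loop, reusing the ll/length/string names from the trace; the random-code source here (RANDOM over the printable ASCII range) is an assumption, since the log only records the codes that came out:

    gen_random_string() {
        local length=$1 string='' ll code
        for (( ll = 0; ll < length; ll++ )); do
            code=$(printf %x $(( RANDOM % 95 + 32 )))   # assumed source: printable ASCII 32..126
            string+=$(echo -e "\x$code")                # hex escape back into a character
        done
        echo "$string"
    }

The assembled string (the zvD!... value echoed at invalid.sh@31 above) then serves as a deliberately malformed name for the negative tests.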
00:07:53.242 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:53.242 12:35:51 -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:53.242 12:35:51 -- nvmf/common.sh@7 -- # uname -s 00:07:53.242 12:35:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:53.242 12:35:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:53.242 12:35:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:53.242 12:35:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:53.242 12:35:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:53.242 12:35:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:53.242 12:35:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:53.242 12:35:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:53.242 12:35:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:53.242 12:35:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:53.242 12:35:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:07:53.242 12:35:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:07:53.242 12:35:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:53.242 12:35:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:53.242 12:35:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:53.242 12:35:51 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:53.242 12:35:51 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:53.242 12:35:51 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:53.242 12:35:51 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:53.242 12:35:51 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:53.242 12:35:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.242 12:35:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.242 12:35:51 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.242 12:35:51 -- paths/export.sh@5 -- # export PATH 00:07:53.242 12:35:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.242 12:35:51 -- nvmf/common.sh@47 -- # : 0 00:07:53.242 12:35:51 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:53.242 12:35:51 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:53.242 12:35:51 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:53.242 12:35:51 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:53.242 12:35:51 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:53.242 12:35:51 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:53.242 12:35:51 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:53.242 12:35:51 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:53.242 12:35:51 -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:53.242 12:35:51 -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:07:53.242 12:35:51 -- target/abort.sh@14 -- # nvmftestinit 00:07:53.242 12:35:51 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:07:53.242 12:35:51 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:53.242 12:35:51 -- nvmf/common.sh@437 -- # prepare_net_devs 00:07:53.242 12:35:51 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:07:53.242 12:35:51 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:07:53.242 12:35:51 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:53.242 12:35:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:53.242 12:35:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:53.242 12:35:51 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:07:53.242 12:35:51 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:07:53.242 12:35:51 -- nvmf/common.sh@285 -- # xtrace_disable 00:07:53.242 12:35:51 -- common/autotest_common.sh@10 -- # set +x 00:07:55.776 12:35:54 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:55.776 12:35:54 -- nvmf/common.sh@291 -- # pci_devs=() 00:07:55.776 12:35:54 -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:55.776 12:35:54 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:55.776 12:35:54 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:55.776 12:35:54 -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:55.776 12:35:54 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:55.776 12:35:54 -- nvmf/common.sh@295 -- # net_devs=() 00:07:55.776 12:35:54 -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:55.776 12:35:54 -- nvmf/common.sh@296 -- 
# e810=() 00:07:55.776 12:35:54 -- nvmf/common.sh@296 -- # local -ga e810 00:07:55.776 12:35:54 -- nvmf/common.sh@297 -- # x722=() 00:07:55.776 12:35:54 -- nvmf/common.sh@297 -- # local -ga x722 00:07:55.776 12:35:54 -- nvmf/common.sh@298 -- # mlx=() 00:07:55.776 12:35:54 -- nvmf/common.sh@298 -- # local -ga mlx 00:07:55.776 12:35:54 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:55.776 12:35:54 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:55.776 12:35:54 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:55.776 12:35:54 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:55.776 12:35:54 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:55.776 12:35:54 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:55.776 12:35:54 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:55.776 12:35:54 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:55.776 12:35:54 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:55.776 12:35:54 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:55.776 12:35:54 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:55.776 12:35:54 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:55.776 12:35:54 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:55.776 12:35:54 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:55.776 12:35:54 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:55.776 12:35:54 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:55.776 12:35:54 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:55.776 12:35:54 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:55.776 12:35:54 -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:07:55.776 Found 0000:82:00.0 (0x8086 - 0x159b) 00:07:55.776 12:35:54 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:55.776 12:35:54 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:55.776 12:35:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:55.776 12:35:54 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:55.776 12:35:54 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:55.776 12:35:54 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:55.776 12:35:54 -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:07:55.776 Found 0000:82:00.1 (0x8086 - 0x159b) 00:07:55.776 12:35:54 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:55.776 12:35:54 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:55.776 12:35:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:55.776 12:35:54 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:55.776 12:35:54 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:55.776 12:35:54 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:55.776 12:35:54 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:55.776 12:35:54 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:55.776 12:35:54 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:55.776 12:35:54 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:55.776 12:35:54 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:55.776 12:35:54 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:55.776 12:35:54 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:07:55.776 Found 
net devices under 0000:82:00.0: cvl_0_0 00:07:55.776 12:35:54 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:55.776 12:35:54 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:55.776 12:35:54 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:55.776 12:35:54 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:55.776 12:35:54 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:55.776 12:35:54 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:07:55.776 Found net devices under 0000:82:00.1: cvl_0_1 00:07:55.776 12:35:54 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:55.776 12:35:54 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:07:55.776 12:35:54 -- nvmf/common.sh@403 -- # is_hw=yes 00:07:55.776 12:35:54 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:07:55.776 12:35:54 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:07:55.776 12:35:54 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:07:55.776 12:35:54 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:55.776 12:35:54 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:55.776 12:35:54 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:55.776 12:35:54 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:55.776 12:35:54 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:55.776 12:35:54 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:55.776 12:35:54 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:55.776 12:35:54 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:55.776 12:35:54 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:55.776 12:35:54 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:55.776 12:35:54 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:55.776 12:35:54 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:55.776 12:35:54 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:55.776 12:35:54 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:55.776 12:35:54 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:55.776 12:35:54 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:55.776 12:35:54 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:55.776 12:35:54 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:55.776 12:35:54 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:55.776 12:35:54 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:55.776 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:55.776 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.201 ms 00:07:55.776 00:07:55.776 --- 10.0.0.2 ping statistics --- 00:07:55.776 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:55.776 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:07:55.776 12:35:54 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:55.776 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:55.776 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.171 ms 00:07:55.776 00:07:55.776 --- 10.0.0.1 ping statistics --- 00:07:55.776 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:55.776 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:07:55.776 12:35:54 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:55.776 12:35:54 -- nvmf/common.sh@411 -- # return 0 00:07:55.776 12:35:54 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:07:55.776 12:35:54 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:55.777 12:35:54 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:07:55.777 12:35:54 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:07:55.777 12:35:54 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:55.777 12:35:54 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:07:55.777 12:35:54 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:07:55.777 12:35:54 -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:07:55.777 12:35:54 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:07:55.777 12:35:54 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:55.777 12:35:54 -- common/autotest_common.sh@10 -- # set +x 00:07:55.777 12:35:54 -- nvmf/common.sh@470 -- # nvmfpid=1100361 00:07:55.777 12:35:54 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:55.777 12:35:54 -- nvmf/common.sh@471 -- # waitforlisten 1100361 00:07:55.777 12:35:54 -- common/autotest_common.sh@817 -- # '[' -z 1100361 ']' 00:07:55.777 12:35:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:55.777 12:35:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:55.777 12:35:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:55.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:55.777 12:35:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:55.777 12:35:54 -- common/autotest_common.sh@10 -- # set +x 00:07:55.777 [2024-04-16 12:35:54.754901] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 00:07:55.777 [2024-04-16 12:35:54.754985] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:55.777 EAL: No free 2048 kB hugepages reported on node 1 00:07:55.777 [2024-04-16 12:35:54.827789] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:56.035 [2024-04-16 12:35:54.942697] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:56.035 [2024-04-16 12:35:54.942768] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:56.035 [2024-04-16 12:35:54.942796] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:56.035 [2024-04-16 12:35:54.942810] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:56.035 [2024-04-16 12:35:54.942830] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
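Condensed, the network plumbing recorded above gives the target one port of the two-port e810 NIC inside a private namespace, leaving the other port in the root namespace as the initiator, so both ends of the TCP transport run over real hardware on a single host. Every command below appears verbatim in the trace; only the xtrace decoration is stripped:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                    # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                          # initiator port stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                           # initiator -> target sanity check

The two ping blocks above (0.201 ms and 0.171 ms round trips) confirm the path in both directions before the target application is started.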
00:07:56.035 [2024-04-16 12:35:54.942922] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:56.035 [2024-04-16 12:35:54.942976] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:56.035 [2024-04-16 12:35:54.942980] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:56.969 12:35:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:56.969 12:35:55 -- common/autotest_common.sh@850 -- # return 0 00:07:56.969 12:35:55 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:07:56.969 12:35:55 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:56.969 12:35:55 -- common/autotest_common.sh@10 -- # set +x 00:07:56.969 12:35:55 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:56.969 12:35:55 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:07:56.969 12:35:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:56.969 12:35:55 -- common/autotest_common.sh@10 -- # set +x 00:07:56.969 [2024-04-16 12:35:55.744530] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:56.969 12:35:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:56.969 12:35:55 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:07:56.969 12:35:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:56.969 12:35:55 -- common/autotest_common.sh@10 -- # set +x 00:07:56.969 Malloc0 00:07:56.969 12:35:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:56.969 12:35:55 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:56.969 12:35:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:56.969 12:35:55 -- common/autotest_common.sh@10 -- # set +x 00:07:56.969 Delay0 00:07:56.969 12:35:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:56.969 12:35:55 -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:56.969 12:35:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:56.969 12:35:55 -- common/autotest_common.sh@10 -- # set +x 00:07:56.969 12:35:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:56.969 12:35:55 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:07:56.969 12:35:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:56.969 12:35:55 -- common/autotest_common.sh@10 -- # set +x 00:07:56.969 12:35:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:56.969 12:35:55 -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:56.969 12:35:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:56.969 12:35:55 -- common/autotest_common.sh@10 -- # set +x 00:07:56.969 [2024-04-16 12:35:55.824800] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:56.969 12:35:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:56.969 12:35:55 -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:56.969 12:35:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:56.969 12:35:55 -- common/autotest_common.sh@10 -- # set +x 00:07:56.969 12:35:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:56.969 12:35:55 -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:07:56.969 EAL: No free 2048 kB hugepages reported on node 1 00:07:56.969 [2024-04-16 12:35:55.971760] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:07:59.499 Initializing NVMe Controllers 00:07:59.499 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:59.499 controller IO queue size 128 less than required 00:07:59.499 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:07:59.499 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:07:59.499 Initialization complete. Launching workers. 00:07:59.499 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 33574 00:07:59.499 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 33635, failed to submit 62 00:07:59.499 success 33578, unsuccess 57, failed 0 00:07:59.499 12:35:58 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:59.499 12:35:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:59.499 12:35:58 -- common/autotest_common.sh@10 -- # set +x 00:07:59.499 12:35:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:59.499 12:35:58 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:07:59.499 12:35:58 -- target/abort.sh@38 -- # nvmftestfini 00:07:59.499 12:35:58 -- nvmf/common.sh@477 -- # nvmfcleanup 00:07:59.499 12:35:58 -- nvmf/common.sh@117 -- # sync 00:07:59.499 12:35:58 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:59.499 12:35:58 -- nvmf/common.sh@120 -- # set +e 00:07:59.499 12:35:58 -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:59.499 12:35:58 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:59.499 rmmod nvme_tcp 00:07:59.499 rmmod nvme_fabrics 00:07:59.499 rmmod nvme_keyring 00:07:59.499 12:35:58 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:59.499 12:35:58 -- nvmf/common.sh@124 -- # set -e 00:07:59.499 12:35:58 -- nvmf/common.sh@125 -- # return 0 00:07:59.499 12:35:58 -- nvmf/common.sh@478 -- # '[' -n 1100361 ']' 00:07:59.499 12:35:58 -- nvmf/common.sh@479 -- # killprocess 1100361 00:07:59.499 12:35:58 -- common/autotest_common.sh@936 -- # '[' -z 1100361 ']' 00:07:59.499 12:35:58 -- common/autotest_common.sh@940 -- # kill -0 1100361 00:07:59.499 12:35:58 -- common/autotest_common.sh@941 -- # uname 00:07:59.499 12:35:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:59.499 12:35:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1100361 00:07:59.499 12:35:58 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:07:59.499 12:35:58 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:07:59.499 12:35:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1100361' 00:07:59.499 killing process with pid 1100361 00:07:59.499 12:35:58 -- common/autotest_common.sh@955 -- # kill 1100361 00:07:59.499 12:35:58 -- common/autotest_common.sh@960 -- # wait 1100361 00:07:59.499 12:35:58 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:07:59.499 12:35:58 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:07:59.499 12:35:58 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:07:59.499 12:35:58 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:59.499 12:35:58 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:59.499 
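Stripped of the xtrace noise, the whole nvmf_abort body above reduces to a short sequence: start the target inside the namespace, provision a deliberately slow namespace over RPC, and point the abort example at it. The commands are as recorded in the trace (rpc_cmd is the suite's wrapper around scripts/rpc.py; paths are shortened, and the background launch with $! is an assumed detail, since the log only shows the resulting pid 1100361):

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    waitforlisten "$nvmfpid"                              # block until /var/tmp/spdk.sock answers
    rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
    rpc.py bdev_malloc_create 64 4096 -b Malloc0          # 64 MiB backing bdev, 4 KiB blocks
    rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    ./build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128

The delay bdev's 1,000,000 us (1 s) latencies keep I/O outstanding long enough to be abortable; the summary above (33635 aborts submitted, 33578 successful, 0 failed) is the pass condition, after which the teardown kills pid 1100361 and unloads the nvme-tcp modules.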
12:35:58 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:59.499 12:35:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:59.499 12:35:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:02.034 12:36:00 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:02.034 00:08:02.034 real 0m8.662s 00:08:02.034 user 0m13.061s 00:08:02.034 sys 0m3.093s 00:08:02.034 12:36:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:02.034 12:36:00 -- common/autotest_common.sh@10 -- # set +x 00:08:02.034 ************************************ 00:08:02.034 END TEST nvmf_abort 00:08:02.034 ************************************ 00:08:02.034 12:36:00 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:08:02.034 12:36:00 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:02.034 12:36:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:02.034 12:36:00 -- common/autotest_common.sh@10 -- # set +x 00:08:02.034 ************************************ 00:08:02.034 START TEST nvmf_ns_hotplug_stress 00:08:02.034 ************************************ 00:08:02.034 12:36:00 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:08:02.034 * Looking for test storage... 00:08:02.034 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:02.034 12:36:00 -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:02.034 12:36:00 -- nvmf/common.sh@7 -- # uname -s 00:08:02.034 12:36:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:02.034 12:36:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:02.034 12:36:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:02.034 12:36:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:02.034 12:36:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:02.034 12:36:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:02.034 12:36:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:02.034 12:36:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:02.034 12:36:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:02.034 12:36:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:02.034 12:36:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:08:02.034 12:36:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:08:02.034 12:36:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:02.034 12:36:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:02.034 12:36:00 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:02.034 12:36:00 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:02.034 12:36:00 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:02.034 12:36:00 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:02.034 12:36:00 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:02.034 12:36:00 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:02.034 12:36:00 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.034 12:36:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.034 12:36:00 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.034 12:36:00 -- paths/export.sh@5 -- # export PATH 00:08:02.034 12:36:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.034 12:36:00 -- nvmf/common.sh@47 -- # : 0 00:08:02.034 12:36:00 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:02.034 12:36:00 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:02.034 12:36:00 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:02.034 12:36:00 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:02.034 12:36:00 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:02.034 12:36:00 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:02.034 12:36:00 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:02.034 12:36:00 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:02.034 12:36:00 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:02.034 12:36:00 -- target/ns_hotplug_stress.sh@13 -- # nvmftestinit 00:08:02.034 12:36:00 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:08:02.034 12:36:00 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:02.034 12:36:00 -- nvmf/common.sh@437 -- # prepare_net_devs 00:08:02.034 12:36:00 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:08:02.034 12:36:00 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:08:02.034 12:36:00 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:08:02.034 12:36:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:02.034 12:36:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:02.034 12:36:00 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:08:02.034 12:36:00 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:08:02.034 12:36:00 -- nvmf/common.sh@285 -- # xtrace_disable 00:08:02.034 12:36:00 -- common/autotest_common.sh@10 -- # set +x 00:08:04.566 12:36:03 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:04.566 12:36:03 -- nvmf/common.sh@291 -- # pci_devs=() 00:08:04.566 12:36:03 -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:04.566 12:36:03 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:04.566 12:36:03 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:04.566 12:36:03 -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:04.566 12:36:03 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:04.566 12:36:03 -- nvmf/common.sh@295 -- # net_devs=() 00:08:04.566 12:36:03 -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:04.566 12:36:03 -- nvmf/common.sh@296 -- # e810=() 00:08:04.566 12:36:03 -- nvmf/common.sh@296 -- # local -ga e810 00:08:04.566 12:36:03 -- nvmf/common.sh@297 -- # x722=() 00:08:04.566 12:36:03 -- nvmf/common.sh@297 -- # local -ga x722 00:08:04.566 12:36:03 -- nvmf/common.sh@298 -- # mlx=() 00:08:04.566 12:36:03 -- nvmf/common.sh@298 -- # local -ga mlx 00:08:04.566 12:36:03 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:04.566 12:36:03 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:04.566 12:36:03 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:04.566 12:36:03 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:04.566 12:36:03 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:04.566 12:36:03 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:04.566 12:36:03 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:04.566 12:36:03 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:04.566 12:36:03 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:04.566 12:36:03 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:04.566 12:36:03 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:04.566 12:36:03 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:04.566 12:36:03 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:04.566 12:36:03 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:04.566 12:36:03 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:04.566 12:36:03 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:04.566 12:36:03 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:04.566 12:36:03 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:04.566 12:36:03 -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:08:04.566 Found 0000:82:00.0 (0x8086 - 0x159b) 00:08:04.566 12:36:03 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:04.566 12:36:03 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:04.566 12:36:03 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:04.566 12:36:03 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:04.566 12:36:03 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:04.566 12:36:03 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:04.566 12:36:03 -- 
nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:08:04.566 Found 0000:82:00.1 (0x8086 - 0x159b) 00:08:04.566 12:36:03 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:04.566 12:36:03 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:04.566 12:36:03 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:04.566 12:36:03 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:04.566 12:36:03 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:04.566 12:36:03 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:04.566 12:36:03 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:04.566 12:36:03 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:04.566 12:36:03 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:04.566 12:36:03 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:04.566 12:36:03 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:04.566 12:36:03 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:04.566 12:36:03 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:08:04.566 Found net devices under 0000:82:00.0: cvl_0_0 00:08:04.566 12:36:03 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:04.566 12:36:03 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:04.566 12:36:03 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:04.566 12:36:03 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:04.566 12:36:03 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:04.566 12:36:03 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:08:04.566 Found net devices under 0000:82:00.1: cvl_0_1 00:08:04.566 12:36:03 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:04.566 12:36:03 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:08:04.566 12:36:03 -- nvmf/common.sh@403 -- # is_hw=yes 00:08:04.566 12:36:03 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:08:04.566 12:36:03 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:08:04.566 12:36:03 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:08:04.566 12:36:03 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:04.566 12:36:03 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:04.566 12:36:03 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:04.566 12:36:03 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:04.566 12:36:03 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:04.566 12:36:03 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:04.566 12:36:03 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:04.566 12:36:03 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:04.566 12:36:03 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:04.566 12:36:03 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:04.566 12:36:03 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:04.566 12:36:03 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:04.566 12:36:03 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:04.566 12:36:03 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:04.566 12:36:03 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:04.566 12:36:03 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:04.566 12:36:03 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 
00:08:04.566 12:36:03 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:04.566 12:36:03 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:04.566 12:36:03 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:04.566 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:04.566 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.173 ms 00:08:04.566 00:08:04.566 --- 10.0.0.2 ping statistics --- 00:08:04.566 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:04.566 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:08:04.566 12:36:03 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:04.566 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:04.566 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.178 ms 00:08:04.566 00:08:04.566 --- 10.0.0.1 ping statistics --- 00:08:04.566 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:04.566 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:08:04.566 12:36:03 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:04.566 12:36:03 -- nvmf/common.sh@411 -- # return 0 00:08:04.566 12:36:03 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:08:04.566 12:36:03 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:04.566 12:36:03 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:08:04.566 12:36:03 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:08:04.566 12:36:03 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:04.567 12:36:03 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:08:04.567 12:36:03 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:08:04.567 12:36:03 -- target/ns_hotplug_stress.sh@14 -- # nvmfappstart -m 0xE 00:08:04.567 12:36:03 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:08:04.567 12:36:03 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:04.567 12:36:03 -- common/autotest_common.sh@10 -- # set +x 00:08:04.567 12:36:03 -- nvmf/common.sh@470 -- # nvmfpid=1103245 00:08:04.567 12:36:03 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:04.567 12:36:03 -- nvmf/common.sh@471 -- # waitforlisten 1103245 00:08:04.567 12:36:03 -- common/autotest_common.sh@817 -- # '[' -z 1103245 ']' 00:08:04.567 12:36:03 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:04.567 12:36:03 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:04.567 12:36:03 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:04.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:04.567 12:36:03 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:04.567 12:36:03 -- common/autotest_common.sh@10 -- # set +x 00:08:04.567 [2024-04-16 12:36:03.349675] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 
00:08:04.567 [2024-04-16 12:36:03.349754] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:04.567 EAL: No free 2048 kB hugepages reported on node 1 00:08:04.567 [2024-04-16 12:36:03.431884] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:04.567 [2024-04-16 12:36:03.539676] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:04.567 [2024-04-16 12:36:03.539734] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:04.567 [2024-04-16 12:36:03.539749] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:04.567 [2024-04-16 12:36:03.539760] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:04.567 [2024-04-16 12:36:03.539771] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:04.567 [2024-04-16 12:36:03.539857] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:04.567 [2024-04-16 12:36:03.539902] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:04.567 [2024-04-16 12:36:03.539905] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:05.500 12:36:04 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:05.500 12:36:04 -- common/autotest_common.sh@850 -- # return 0 00:08:05.500 12:36:04 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:08:05.500 12:36:04 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:05.500 12:36:04 -- common/autotest_common.sh@10 -- # set +x 00:08:05.500 12:36:04 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:05.500 12:36:04 -- target/ns_hotplug_stress.sh@16 -- # null_size=1000 00:08:05.500 12:36:04 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:05.757 [2024-04-16 12:36:04.619326] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:05.757 12:36:04 -- target/ns_hotplug_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:06.015 12:36:04 -- target/ns_hotplug_stress.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:06.272 [2024-04-16 12:36:05.154138] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:06.272 12:36:05 -- target/ns_hotplug_stress.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:06.530 12:36:05 -- target/ns_hotplug_stress.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:08:06.788 Malloc0 00:08:06.788 12:36:05 -- target/ns_hotplug_stress.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:07.045 Delay0 00:08:07.045 12:36:05 -- target/ns_hotplug_stress.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:08:07.301 12:36:06 -- target/ns_hotplug_stress.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:08:07.558 NULL1 00:08:07.558 12:36:06 -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:08:07.815 12:36:06 -- target/ns_hotplug_stress.sh@33 -- # PERF_PID=1103672 00:08:07.815 12:36:06 -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:08:07.815 12:36:06 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1103672 00:08:07.816 12:36:06 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:07.816 EAL: No free 2048 kB hugepages reported on node 1 00:08:08.074 12:36:06 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:08.363 12:36:07 -- target/ns_hotplug_stress.sh@40 -- # null_size=1001 00:08:08.363 12:36:07 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:08:08.620 true 00:08:08.620 12:36:07 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1103672 00:08:08.620 12:36:07 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:08.878 12:36:07 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:09.136 12:36:07 -- target/ns_hotplug_stress.sh@40 -- # null_size=1002 00:08:09.136 12:36:07 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:08:09.393 true 00:08:09.393 12:36:08 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1103672 00:08:09.393 12:36:08 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:09.651 12:36:08 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:09.909 12:36:08 -- target/ns_hotplug_stress.sh@40 -- # null_size=1003 00:08:09.909 12:36:08 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:08:09.909 true 00:08:10.167 12:36:08 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1103672 00:08:10.167 12:36:08 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:11.100 Read completed with error (sct=0, sc=11) 00:08:11.100 12:36:10 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:11.100 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:11.358 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:11.358 12:36:10 -- target/ns_hotplug_stress.sh@40 -- 
# null_size=1004 00:08:11.358 12:36:10 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:08:11.615 true 00:08:11.615 12:36:10 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1103672 00:08:11.615 12:36:10 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:11.873 12:36:10 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:12.130 12:36:11 -- target/ns_hotplug_stress.sh@40 -- # null_size=1005 00:08:12.130 12:36:11 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:08:12.388 true 00:08:12.388 12:36:11 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1103672 00:08:12.388 12:36:11 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:13.320 12:36:12 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:13.577 12:36:12 -- target/ns_hotplug_stress.sh@40 -- # null_size=1006 00:08:13.577 12:36:12 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:08:13.834 true 00:08:13.834 12:36:12 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1103672 00:08:13.834 12:36:12 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:14.092 12:36:12 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:14.350 12:36:13 -- target/ns_hotplug_stress.sh@40 -- # null_size=1007 00:08:14.350 12:36:13 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:08:14.607 true 00:08:14.607 12:36:13 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1103672 00:08:14.607 12:36:13 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:15.540 12:36:14 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:15.540 12:36:14 -- target/ns_hotplug_stress.sh@40 -- # null_size=1008 00:08:15.540 12:36:14 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:08:15.797 true 00:08:15.797 12:36:14 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1103672 00:08:15.797 12:36:14 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:16.054 12:36:15 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:16.312 12:36:15 -- target/ns_hotplug_stress.sh@40 -- # null_size=1009 00:08:16.312 12:36:15 -- target/ns_hotplug_stress.sh@41 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:08:16.569 true 00:08:16.569 12:36:15 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1103672 00:08:16.569 12:36:15 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:17.502 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:17.502 12:36:16 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:17.502 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:17.502 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:17.502 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:17.760 12:36:16 -- target/ns_hotplug_stress.sh@40 -- # null_size=1010 00:08:17.760 12:36:16 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:08:18.018 true 00:08:18.018 12:36:16 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1103672 00:08:18.018 12:36:16 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:18.275 12:36:17 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:18.532 12:36:17 -- target/ns_hotplug_stress.sh@40 -- # null_size=1011 00:08:18.532 12:36:17 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:08:18.789 true 00:08:18.789 12:36:17 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1103672 00:08:18.789 12:36:17 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:19.720 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:19.721 12:36:18 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:19.978 12:36:18 -- target/ns_hotplug_stress.sh@40 -- # null_size=1012 00:08:19.978 12:36:18 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:08:20.235 true 00:08:20.235 12:36:19 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1103672 00:08:20.235 12:36:19 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:20.492 12:36:19 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:20.750 12:36:19 -- target/ns_hotplug_stress.sh@40 -- # null_size=1013 00:08:20.750 12:36:19 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:08:20.750 true 00:08:21.008 12:36:19 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1103672 00:08:21.008 12:36:19 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:21.941 
12:36:20 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:21.941 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:22.199 12:36:21 -- target/ns_hotplug_stress.sh@40 -- # null_size=1014 00:08:22.199 12:36:21 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:08:22.456 true 00:08:22.456 12:36:21 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1103672 00:08:22.456 12:36:21 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:22.712 12:36:21 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:22.970 12:36:21 -- target/ns_hotplug_stress.sh@40 -- # null_size=1015 00:08:22.970 12:36:21 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:08:23.227 true 00:08:23.227 12:36:22 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1103672 00:08:23.227 12:36:22 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:23.791 12:36:22 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:24.049 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:24.049 12:36:23 -- target/ns_hotplug_stress.sh@40 -- # null_size=1016 00:08:24.049 12:36:23 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:08:24.306 true 00:08:24.306 12:36:23 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1103672 00:08:24.306 12:36:23 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:24.564 12:36:23 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:24.820 12:36:23 -- target/ns_hotplug_stress.sh@40 -- # null_size=1017 00:08:24.820 12:36:23 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:08:25.077 true 00:08:25.077 12:36:24 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1103672 00:08:25.077 12:36:24 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:26.008 12:36:24 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:26.265 12:36:25 -- target/ns_hotplug_stress.sh@40 -- # null_size=1018 00:08:26.265 12:36:25 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:08:26.520 true 00:08:26.520 12:36:25 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1103672 00:08:26.520 12:36:25 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:26.777 12:36:25 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:27.032 12:36:25 -- target/ns_hotplug_stress.sh@40 -- # null_size=1019 00:08:27.033 12:36:25 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:08:27.289 true 00:08:27.289 12:36:26 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1103672 00:08:27.289 12:36:26 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:28.219 12:36:27 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:28.219 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:28.219 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:28.219 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:28.474 12:36:27 -- target/ns_hotplug_stress.sh@40 -- # null_size=1020 00:08:28.474 12:36:27 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:08:28.732 true 00:08:28.732 12:36:27 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1103672 00:08:28.732 12:36:27 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:28.989 12:36:27 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:28.989 12:36:28 -- target/ns_hotplug_stress.sh@40 -- # null_size=1021 00:08:28.989 12:36:28 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:08:29.254 true 00:08:29.254 12:36:28 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1103672 00:08:29.254 12:36:28 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:30.623 12:36:29 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:30.623 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:30.623 12:36:29 -- target/ns_hotplug_stress.sh@40 -- # null_size=1022 00:08:30.623 12:36:29 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:08:30.881 true 00:08:30.881 12:36:29 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1103672 00:08:30.881 12:36:29 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:31.140 12:36:30 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:31.397 12:36:30 -- target/ns_hotplug_stress.sh@40 -- # null_size=1023 00:08:31.397 12:36:30 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 
00:08:31.655 true 00:08:31.655 12:36:30 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1103672 00:08:31.655 12:36:30 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:32.587 12:36:31 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:32.587 12:36:31 -- target/ns_hotplug_stress.sh@40 -- # null_size=1024 00:08:32.587 12:36:31 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:08:32.844 true 00:08:32.844 12:36:31 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1103672 00:08:32.844 12:36:31 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:33.101 12:36:32 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:33.358 12:36:32 -- target/ns_hotplug_stress.sh@40 -- # null_size=1025 00:08:33.358 12:36:32 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:08:33.615 true 00:08:33.615 12:36:32 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1103672 00:08:33.615 12:36:32 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:34.546 12:36:33 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:34.546 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:34.546 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:34.802 12:36:33 -- target/ns_hotplug_stress.sh@40 -- # null_size=1026 00:08:34.802 12:36:33 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:08:35.059 true 00:08:35.059 12:36:33 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1103672 00:08:35.059 12:36:33 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:35.316 12:36:34 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:35.574 12:36:34 -- target/ns_hotplug_stress.sh@40 -- # null_size=1027 00:08:35.574 12:36:34 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:08:35.831 true 00:08:35.831 12:36:34 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1103672 00:08:35.831 12:36:34 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:36.764 12:36:35 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:36.764 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:36.765 12:36:35 -- target/ns_hotplug_stress.sh@40 -- # null_size=1028 00:08:36.765 12:36:35 
-- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:08:37.328 true 00:08:37.328 12:36:36 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1103672 00:08:37.328 12:36:36 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:37.329 12:36:36 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:37.586 12:36:36 -- target/ns_hotplug_stress.sh@40 -- # null_size=1029 00:08:37.586 12:36:36 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:08:37.844 true 00:08:37.844 12:36:36 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1103672 00:08:37.844 12:36:36 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:38.777 Initializing NVMe Controllers 00:08:38.777 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:38.777 Controller IO queue size 128, less than required. 00:08:38.777 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:38.777 Controller IO queue size 128, less than required. 00:08:38.777 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:38.777 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:38.777 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:08:38.777 Initialization complete. Launching workers. 
00:08:38.777 ======================================================== 00:08:38.777 Latency(us) 00:08:38.777 Device Information : IOPS MiB/s Average min max 00:08:38.777 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 600.73 0.29 109983.73 2852.02 1076516.14 00:08:38.777 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 10324.43 5.04 12360.70 2751.41 447518.81 00:08:38.777 ======================================================== 00:08:38.777 Total : 10925.17 5.33 17728.62 2751.41 1076516.14 00:08:38.777 00:08:38.777 12:36:37 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:39.035 12:36:38 -- target/ns_hotplug_stress.sh@40 -- # null_size=1030 00:08:39.035 12:36:38 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:08:39.292 true 00:08:39.292 12:36:38 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1103672 00:08:39.292 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 35: kill: (1103672) - No such process 00:08:39.292 12:36:38 -- target/ns_hotplug_stress.sh@44 -- # wait 1103672 00:08:39.292 12:36:38 -- target/ns_hotplug_stress.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:08:39.292 12:36:38 -- target/ns_hotplug_stress.sh@48 -- # nvmftestfini 00:08:39.292 12:36:38 -- nvmf/common.sh@477 -- # nvmfcleanup 00:08:39.292 12:36:38 -- nvmf/common.sh@117 -- # sync 00:08:39.292 12:36:38 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:39.292 12:36:38 -- nvmf/common.sh@120 -- # set +e 00:08:39.292 12:36:38 -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:39.292 12:36:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:39.292 rmmod nvme_tcp 00:08:39.292 rmmod nvme_fabrics 00:08:39.292 rmmod nvme_keyring 00:08:39.292 12:36:38 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:39.292 12:36:38 -- nvmf/common.sh@124 -- # set -e 00:08:39.292 12:36:38 -- nvmf/common.sh@125 -- # return 0 00:08:39.292 12:36:38 -- nvmf/common.sh@478 -- # '[' -n 1103245 ']' 00:08:39.292 12:36:38 -- nvmf/common.sh@479 -- # killprocess 1103245 00:08:39.292 12:36:38 -- common/autotest_common.sh@936 -- # '[' -z 1103245 ']' 00:08:39.292 12:36:38 -- common/autotest_common.sh@940 -- # kill -0 1103245 00:08:39.292 12:36:38 -- common/autotest_common.sh@941 -- # uname 00:08:39.293 12:36:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:39.293 12:36:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1103245 00:08:39.293 12:36:38 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:08:39.293 12:36:38 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:08:39.293 12:36:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1103245' 00:08:39.293 killing process with pid 1103245 00:08:39.293 12:36:38 -- common/autotest_common.sh@955 -- # kill 1103245 00:08:39.293 12:36:38 -- common/autotest_common.sh@960 -- # wait 1103245 00:08:39.859 12:36:38 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:08:39.859 12:36:38 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:08:39.859 12:36:38 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:08:39.859 12:36:38 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:39.859 12:36:38 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:39.859 12:36:38 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 
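The cycle traced above is one iteration of the hotplug loop in ns_hotplug_stress.sh: re-attach the Delay0 namespace to nqn.2016-06.io.spdk:cnode1, grow the NULL1 null bdev by one step, detach namespace 1 again, and keep going for as long as "kill -0" confirms the I/O generator (PID 1103672 here) is still alive. A minimal sketch of that loop, reconstructed from the trace (rpc path shortened; perf_pid stands for the PID checked above):

    rpc=./spdk/scripts/rpc.py    # shortened; the trace uses the full Jenkins workspace path
    null_size=1000
    while kill -0 "$perf_pid" 2>/dev/null; do    # run until the stress process exits
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
        null_size=$((null_size + 1))             # null_size=1004, 1005, ... in the trace
        $rpc bdev_null_resize NULL1 "$null_size"
        $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    done

The loop ends exactly when the guard fails: kill -0 reports "No such process" for 1103672 above, the script waits on the finished process, clears its trap, and hands off to nvmftestfini for teardown.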
00:08:39.859 12:36:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:39.859 12:36:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:41.762 12:36:40 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:41.762 00:08:41.762 real 0m40.065s 00:08:41.762 user 2m33.788s 00:08:41.762 sys 0m11.112s 00:08:41.762 12:36:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:41.762 12:36:40 -- common/autotest_common.sh@10 -- # set +x 00:08:41.762 ************************************ 00:08:41.762 END TEST nvmf_ns_hotplug_stress 00:08:41.762 ************************************ 00:08:41.762 12:36:40 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:08:41.762 12:36:40 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:41.762 12:36:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:41.762 12:36:40 -- common/autotest_common.sh@10 -- # set +x 00:08:41.762 ************************************ 00:08:41.762 START TEST nvmf_connect_stress 00:08:41.762 ************************************ 00:08:41.762 12:36:40 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:08:42.021 * Looking for test storage... 00:08:42.021 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:42.021 12:36:40 -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:42.021 12:36:40 -- nvmf/common.sh@7 -- # uname -s 00:08:42.021 12:36:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:42.021 12:36:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:42.021 12:36:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:42.021 12:36:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:42.021 12:36:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:42.021 12:36:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:42.021 12:36:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:42.021 12:36:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:42.021 12:36:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:42.021 12:36:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:42.021 12:36:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:08:42.021 12:36:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:08:42.021 12:36:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:42.021 12:36:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:42.021 12:36:40 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:42.021 12:36:40 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:42.021 12:36:40 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:42.021 12:36:40 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:42.021 12:36:40 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:42.021 12:36:40 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:42.021 12:36:40 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.021 12:36:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.021 12:36:40 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.021 12:36:40 -- paths/export.sh@5 -- # export PATH 00:08:42.021 12:36:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.021 12:36:40 -- nvmf/common.sh@47 -- # : 0 00:08:42.021 12:36:40 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:42.021 12:36:40 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:42.021 12:36:40 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:42.021 12:36:40 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:42.021 12:36:40 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:42.021 12:36:40 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:42.021 12:36:40 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:42.021 12:36:40 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:42.021 12:36:40 -- target/connect_stress.sh@12 -- # nvmftestinit 00:08:42.021 12:36:40 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:08:42.021 12:36:40 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:42.021 12:36:40 -- nvmf/common.sh@437 -- # prepare_net_devs 00:08:42.021 12:36:40 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:08:42.021 12:36:40 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:08:42.021 12:36:40 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:42.021 12:36:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:42.021 12:36:40 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:42.021 12:36:40 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:08:42.021 12:36:40 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:08:42.021 12:36:40 -- nvmf/common.sh@285 -- # xtrace_disable 00:08:42.021 12:36:40 -- common/autotest_common.sh@10 -- # set +x 00:08:44.579 12:36:43 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:44.579 12:36:43 -- nvmf/common.sh@291 -- # pci_devs=() 00:08:44.579 12:36:43 -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:44.579 12:36:43 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:44.579 12:36:43 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:44.579 12:36:43 -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:44.579 12:36:43 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:44.579 12:36:43 -- nvmf/common.sh@295 -- # net_devs=() 00:08:44.579 12:36:43 -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:44.579 12:36:43 -- nvmf/common.sh@296 -- # e810=() 00:08:44.579 12:36:43 -- nvmf/common.sh@296 -- # local -ga e810 00:08:44.579 12:36:43 -- nvmf/common.sh@297 -- # x722=() 00:08:44.579 12:36:43 -- nvmf/common.sh@297 -- # local -ga x722 00:08:44.579 12:36:43 -- nvmf/common.sh@298 -- # mlx=() 00:08:44.579 12:36:43 -- nvmf/common.sh@298 -- # local -ga mlx 00:08:44.579 12:36:43 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:44.580 12:36:43 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:44.580 12:36:43 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:44.580 12:36:43 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:44.580 12:36:43 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:44.580 12:36:43 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:44.580 12:36:43 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:44.580 12:36:43 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:44.580 12:36:43 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:44.580 12:36:43 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:44.580 12:36:43 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:44.580 12:36:43 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:44.580 12:36:43 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:44.580 12:36:43 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:44.580 12:36:43 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:44.580 12:36:43 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:44.580 12:36:43 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:44.580 12:36:43 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:44.580 12:36:43 -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:08:44.580 Found 0000:82:00.0 (0x8086 - 0x159b) 00:08:44.580 12:36:43 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:44.580 12:36:43 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:44.580 12:36:43 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:44.580 12:36:43 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:44.580 12:36:43 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:44.580 12:36:43 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:44.580 12:36:43 -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:08:44.580 Found 0000:82:00.1 (0x8086 - 0x159b) 00:08:44.580 
12:36:43 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:44.580 12:36:43 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:44.580 12:36:43 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:44.580 12:36:43 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:44.580 12:36:43 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:44.580 12:36:43 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:44.580 12:36:43 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:44.580 12:36:43 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:44.580 12:36:43 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:44.580 12:36:43 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:44.580 12:36:43 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:44.580 12:36:43 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:44.580 12:36:43 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:08:44.580 Found net devices under 0000:82:00.0: cvl_0_0 00:08:44.580 12:36:43 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:44.580 12:36:43 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:44.580 12:36:43 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:44.580 12:36:43 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:44.580 12:36:43 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:44.580 12:36:43 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:08:44.580 Found net devices under 0000:82:00.1: cvl_0_1 00:08:44.580 12:36:43 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:44.580 12:36:43 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:08:44.580 12:36:43 -- nvmf/common.sh@403 -- # is_hw=yes 00:08:44.580 12:36:43 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:08:44.580 12:36:43 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:08:44.580 12:36:43 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:08:44.580 12:36:43 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:44.580 12:36:43 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:44.580 12:36:43 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:44.580 12:36:43 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:44.580 12:36:43 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:44.580 12:36:43 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:44.580 12:36:43 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:44.580 12:36:43 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:44.580 12:36:43 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:44.580 12:36:43 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:44.580 12:36:43 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:44.580 12:36:43 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:44.580 12:36:43 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:44.580 12:36:43 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:44.580 12:36:43 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:44.580 12:36:43 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:44.580 12:36:43 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:44.580 12:36:43 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:44.580 12:36:43 -- 
nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:44.580 12:36:43 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:44.580 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:44.580 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.148 ms 00:08:44.580 00:08:44.580 --- 10.0.0.2 ping statistics --- 00:08:44.580 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:44.580 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:08:44.580 12:36:43 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:44.580 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:44.580 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.153 ms 00:08:44.580 00:08:44.580 --- 10.0.0.1 ping statistics --- 00:08:44.580 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:44.580 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:08:44.580 12:36:43 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:44.580 12:36:43 -- nvmf/common.sh@411 -- # return 0 00:08:44.580 12:36:43 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:08:44.580 12:36:43 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:44.580 12:36:43 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:08:44.580 12:36:43 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:08:44.580 12:36:43 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:44.580 12:36:43 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:08:44.580 12:36:43 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:08:44.580 12:36:43 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:08:44.580 12:36:43 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:08:44.580 12:36:43 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:44.580 12:36:43 -- common/autotest_common.sh@10 -- # set +x 00:08:44.580 12:36:43 -- nvmf/common.sh@470 -- # nvmfpid=1110328 00:08:44.580 12:36:43 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:44.580 12:36:43 -- nvmf/common.sh@471 -- # waitforlisten 1110328 00:08:44.580 12:36:43 -- common/autotest_common.sh@817 -- # '[' -z 1110328 ']' 00:08:44.580 12:36:43 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:44.580 12:36:43 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:44.580 12:36:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:44.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:44.580 12:36:43 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:44.580 12:36:43 -- common/autotest_common.sh@10 -- # set +x 00:08:44.580 [2024-04-16 12:36:43.585395] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 00:08:44.580 [2024-04-16 12:36:43.585486] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:44.580 EAL: No free 2048 kB hugepages reported on node 1 00:08:44.838 [2024-04-16 12:36:43.666008] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:44.838 [2024-04-16 12:36:43.781727] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
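The nvmf_tcp_init sequence traced just above reduces to a short run of ip/iptables commands: one E810 port (cvl_0_0) is moved into a private network namespace to act as the target at 10.0.0.2, its sibling (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, TCP port 4420 is opened, and reachability is ping-tested in both directions. As a sketch, using the commands exactly as they appear in the trace (error handling omitted):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                       # target-side port leaves the root namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                             # initiator side stays behind
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # admit NVMe/TCP
    ping -c 1 10.0.0.2                                              # target reachable from the root namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                # and the initiator from inside the namespace

Once both pings return, nvmf_tgt is started inside the namespace (ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xE, as above) and waitforlisten blocks until its RPC socket at /var/tmp/spdk.sock answers.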
00:08:44.838 [2024-04-16 12:36:43.781792] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:44.838 [2024-04-16 12:36:43.781818] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:44.838 [2024-04-16 12:36:43.781832] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:44.838 [2024-04-16 12:36:43.781844] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:44.838 [2024-04-16 12:36:43.781942] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:44.838 [2024-04-16 12:36:43.781997] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:44.838 [2024-04-16 12:36:43.782000] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:45.770 12:36:44 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:45.770 12:36:44 -- common/autotest_common.sh@850 -- # return 0 00:08:45.770 12:36:44 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:08:45.770 12:36:44 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:45.770 12:36:44 -- common/autotest_common.sh@10 -- # set +x 00:08:45.770 12:36:44 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:45.770 12:36:44 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:45.770 12:36:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:45.770 12:36:44 -- common/autotest_common.sh@10 -- # set +x 00:08:45.770 [2024-04-16 12:36:44.560412] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:45.770 12:36:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:45.770 12:36:44 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:45.770 12:36:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:45.770 12:36:44 -- common/autotest_common.sh@10 -- # set +x 00:08:45.770 12:36:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:45.770 12:36:44 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:45.770 12:36:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:45.770 12:36:44 -- common/autotest_common.sh@10 -- # set +x 00:08:45.770 [2024-04-16 12:36:44.592723] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:45.770 12:36:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:45.770 12:36:44 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:45.770 12:36:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:45.770 12:36:44 -- common/autotest_common.sh@10 -- # set +x 00:08:45.770 NULL1 00:08:45.770 12:36:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:45.770 12:36:44 -- target/connect_stress.sh@21 -- # PERF_PID=1110413 00:08:45.770 12:36:44 -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:08:45.770 12:36:44 -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:08:45.770 12:36:44 -- target/connect_stress.sh@25 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:08:45.770 12:36:44 -- target/connect_stress.sh@27 -- # seq 1 20 00:08:45.770 12:36:44 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:45.770 12:36:44 -- target/connect_stress.sh@28 -- # cat 00:08:45.770 12:36:44 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:45.770 12:36:44 -- target/connect_stress.sh@28 -- # cat 00:08:45.770 12:36:44 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:45.770 12:36:44 -- target/connect_stress.sh@28 -- # cat 00:08:45.770 12:36:44 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:45.770 12:36:44 -- target/connect_stress.sh@28 -- # cat 00:08:45.770 12:36:44 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:45.770 12:36:44 -- target/connect_stress.sh@28 -- # cat 00:08:45.770 12:36:44 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:45.770 12:36:44 -- target/connect_stress.sh@28 -- # cat 00:08:45.770 12:36:44 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:45.770 12:36:44 -- target/connect_stress.sh@28 -- # cat 00:08:45.770 12:36:44 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:45.770 12:36:44 -- target/connect_stress.sh@28 -- # cat 00:08:45.771 12:36:44 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:45.771 12:36:44 -- target/connect_stress.sh@28 -- # cat 00:08:45.771 12:36:44 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:45.771 12:36:44 -- target/connect_stress.sh@28 -- # cat 00:08:45.771 12:36:44 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:45.771 EAL: No free 2048 kB hugepages reported on node 1 00:08:45.771 12:36:44 -- target/connect_stress.sh@28 -- # cat 00:08:45.771 12:36:44 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:45.771 12:36:44 -- target/connect_stress.sh@28 -- # cat 00:08:45.771 12:36:44 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:45.771 12:36:44 -- target/connect_stress.sh@28 -- # cat 00:08:45.771 12:36:44 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:45.771 12:36:44 -- target/connect_stress.sh@28 -- # cat 00:08:45.771 12:36:44 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:45.771 12:36:44 -- target/connect_stress.sh@28 -- # cat 00:08:45.771 12:36:44 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:45.771 12:36:44 -- target/connect_stress.sh@28 -- # cat 00:08:45.771 12:36:44 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:45.771 12:36:44 -- target/connect_stress.sh@28 -- # cat 00:08:45.771 12:36:44 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:45.771 12:36:44 -- target/connect_stress.sh@28 -- # cat 00:08:45.771 12:36:44 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:45.771 12:36:44 -- target/connect_stress.sh@28 -- # cat 00:08:45.771 12:36:44 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:45.771 12:36:44 -- target/connect_stress.sh@28 -- # cat 00:08:45.771 12:36:44 -- target/connect_stress.sh@34 -- # kill -0 1110413 00:08:45.771 12:36:44 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:45.771 12:36:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:45.771 12:36:44 -- common/autotest_common.sh@10 -- # set +x 00:08:46.028 12:36:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:46.028 12:36:44 -- target/connect_stress.sh@34 -- # kill -0 1110413 00:08:46.028 12:36:44 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:46.028 12:36:44 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:08:46.028 12:36:44 -- common/autotest_common.sh@10 -- # set +x 00:08:46.285 12:36:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:46.285 12:36:45 -- target/connect_stress.sh@34 -- # kill -0 1110413 00:08:46.285 12:36:45 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:46.285 12:36:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:46.285 12:36:45 -- common/autotest_common.sh@10 -- # set +x 00:08:46.850 12:36:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:46.850 12:36:45 -- target/connect_stress.sh@34 -- # kill -0 1110413 00:08:46.850 12:36:45 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:46.850 12:36:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:46.850 12:36:45 -- common/autotest_common.sh@10 -- # set +x 00:08:47.108 12:36:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:47.108 12:36:45 -- target/connect_stress.sh@34 -- # kill -0 1110413 00:08:47.108 12:36:45 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:47.108 12:36:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:47.108 12:36:45 -- common/autotest_common.sh@10 -- # set +x 00:08:47.365 12:36:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:47.365 12:36:46 -- target/connect_stress.sh@34 -- # kill -0 1110413 00:08:47.365 12:36:46 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:47.365 12:36:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:47.365 12:36:46 -- common/autotest_common.sh@10 -- # set +x 00:08:47.622 12:36:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:47.622 12:36:46 -- target/connect_stress.sh@34 -- # kill -0 1110413 00:08:47.622 12:36:46 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:47.622 12:36:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:47.622 12:36:46 -- common/autotest_common.sh@10 -- # set +x 00:08:47.879 12:36:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:47.879 12:36:46 -- target/connect_stress.sh@34 -- # kill -0 1110413 00:08:47.879 12:36:46 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:47.879 12:36:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:47.879 12:36:46 -- common/autotest_common.sh@10 -- # set +x 00:08:48.444 12:36:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:48.444 12:36:47 -- target/connect_stress.sh@34 -- # kill -0 1110413 00:08:48.444 12:36:47 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:48.444 12:36:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:48.444 12:36:47 -- common/autotest_common.sh@10 -- # set +x 00:08:48.747 12:36:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:48.747 12:36:47 -- target/connect_stress.sh@34 -- # kill -0 1110413 00:08:48.747 12:36:47 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:48.747 12:36:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:48.747 12:36:47 -- common/autotest_common.sh@10 -- # set +x 00:08:49.004 12:36:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:49.004 12:36:47 -- target/connect_stress.sh@34 -- # kill -0 1110413 00:08:49.004 12:36:47 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:49.004 12:36:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:49.004 12:36:47 -- common/autotest_common.sh@10 -- # set +x 00:08:49.262 12:36:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:49.262 12:36:48 -- target/connect_stress.sh@34 -- # kill -0 1110413 00:08:49.262 12:36:48 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:49.262 12:36:48 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:08:49.262 12:36:48 -- common/autotest_common.sh@10 -- # set +x 00:08:49.519 12:36:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:49.519 12:36:48 -- target/connect_stress.sh@34 -- # kill -0 1110413 00:08:49.519 12:36:48 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:49.519 12:36:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:49.519 12:36:48 -- common/autotest_common.sh@10 -- # set +x 00:08:49.776 12:36:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:49.776 12:36:48 -- target/connect_stress.sh@34 -- # kill -0 1110413 00:08:49.776 12:36:48 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:49.776 12:36:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:49.776 12:36:48 -- common/autotest_common.sh@10 -- # set +x 00:08:50.342 12:36:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:50.342 12:36:49 -- target/connect_stress.sh@34 -- # kill -0 1110413 00:08:50.342 12:36:49 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:50.342 12:36:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:50.342 12:36:49 -- common/autotest_common.sh@10 -- # set +x 00:08:50.600 12:36:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:50.600 12:36:49 -- target/connect_stress.sh@34 -- # kill -0 1110413 00:08:50.600 12:36:49 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:50.600 12:36:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:50.600 12:36:49 -- common/autotest_common.sh@10 -- # set +x 00:08:50.857 12:36:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:50.857 12:36:49 -- target/connect_stress.sh@34 -- # kill -0 1110413 00:08:50.857 12:36:49 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:50.857 12:36:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:50.857 12:36:49 -- common/autotest_common.sh@10 -- # set +x 00:08:51.115 12:36:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:51.115 12:36:50 -- target/connect_stress.sh@34 -- # kill -0 1110413 00:08:51.115 12:36:50 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:51.115 12:36:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:51.115 12:36:50 -- common/autotest_common.sh@10 -- # set +x 00:08:51.372 12:36:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:51.372 12:36:50 -- target/connect_stress.sh@34 -- # kill -0 1110413 00:08:51.372 12:36:50 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:51.372 12:36:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:51.372 12:36:50 -- common/autotest_common.sh@10 -- # set +x 00:08:51.936 12:36:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:51.936 12:36:50 -- target/connect_stress.sh@34 -- # kill -0 1110413 00:08:51.936 12:36:50 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:51.936 12:36:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:51.937 12:36:50 -- common/autotest_common.sh@10 -- # set +x 00:08:52.194 12:36:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:52.194 12:36:51 -- target/connect_stress.sh@34 -- # kill -0 1110413 00:08:52.194 12:36:51 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:52.194 12:36:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:52.194 12:36:51 -- common/autotest_common.sh@10 -- # set +x 00:08:52.452 12:36:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:52.452 12:36:51 -- target/connect_stress.sh@34 -- # kill -0 1110413 00:08:52.452 12:36:51 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:52.452 12:36:51 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:08:52.452 12:36:51 -- common/autotest_common.sh@10 -- # set +x 00:08:52.709 12:36:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:52.709 12:36:51 -- target/connect_stress.sh@34 -- # kill -0 1110413 00:08:52.709 12:36:51 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:52.709 12:36:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:52.709 12:36:51 -- common/autotest_common.sh@10 -- # set +x 00:08:53.274 12:36:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:53.274 12:36:52 -- target/connect_stress.sh@34 -- # kill -0 1110413 00:08:53.274 12:36:52 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:53.274 12:36:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:53.274 12:36:52 -- common/autotest_common.sh@10 -- # set +x 00:08:53.531 12:36:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:53.531 12:36:52 -- target/connect_stress.sh@34 -- # kill -0 1110413 00:08:53.531 12:36:52 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:53.532 12:36:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:53.532 12:36:52 -- common/autotest_common.sh@10 -- # set +x 00:08:53.789 12:36:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:53.789 12:36:52 -- target/connect_stress.sh@34 -- # kill -0 1110413 00:08:53.789 12:36:52 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:53.789 12:36:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:53.789 12:36:52 -- common/autotest_common.sh@10 -- # set +x 00:08:54.047 12:36:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:54.048 12:36:53 -- target/connect_stress.sh@34 -- # kill -0 1110413 00:08:54.048 12:36:53 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:54.048 12:36:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:54.048 12:36:53 -- common/autotest_common.sh@10 -- # set +x 00:08:54.305 12:36:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:54.305 12:36:53 -- target/connect_stress.sh@34 -- # kill -0 1110413 00:08:54.305 12:36:53 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:54.305 12:36:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:54.305 12:36:53 -- common/autotest_common.sh@10 -- # set +x 00:08:54.870 12:36:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:54.870 12:36:53 -- target/connect_stress.sh@34 -- # kill -0 1110413 00:08:54.870 12:36:53 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:54.870 12:36:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:54.870 12:36:53 -- common/autotest_common.sh@10 -- # set +x 00:08:55.128 12:36:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:55.128 12:36:53 -- target/connect_stress.sh@34 -- # kill -0 1110413 00:08:55.128 12:36:53 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:55.128 12:36:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:55.128 12:36:53 -- common/autotest_common.sh@10 -- # set +x 00:08:55.385 12:36:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:55.386 12:36:54 -- target/connect_stress.sh@34 -- # kill -0 1110413 00:08:55.386 12:36:54 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:55.386 12:36:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:55.386 12:36:54 -- common/autotest_common.sh@10 -- # set +x 00:08:55.643 12:36:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:55.643 12:36:54 -- target/connect_stress.sh@34 -- # kill -0 1110413 00:08:55.643 12:36:54 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:55.643 12:36:54 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:08:55.643 12:36:54 -- common/autotest_common.sh@10 -- # set +x 00:08:55.900 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:55.900 12:36:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:55.900 12:36:54 -- target/connect_stress.sh@34 -- # kill -0 1110413 00:08:55.900 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1110413) - No such process 00:08:55.900 12:36:54 -- target/connect_stress.sh@38 -- # wait 1110413 00:08:55.900 12:36:54 -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:08:55.900 12:36:54 -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:55.901 12:36:54 -- target/connect_stress.sh@43 -- # nvmftestfini 00:08:55.901 12:36:54 -- nvmf/common.sh@477 -- # nvmfcleanup 00:08:55.901 12:36:54 -- nvmf/common.sh@117 -- # sync 00:08:55.901 12:36:54 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:55.901 12:36:54 -- nvmf/common.sh@120 -- # set +e 00:08:55.901 12:36:54 -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:55.901 12:36:54 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:55.901 rmmod nvme_tcp 00:08:55.901 rmmod nvme_fabrics 00:08:56.158 rmmod nvme_keyring 00:08:56.158 12:36:54 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:56.158 12:36:54 -- nvmf/common.sh@124 -- # set -e 00:08:56.158 12:36:54 -- nvmf/common.sh@125 -- # return 0 00:08:56.158 12:36:54 -- nvmf/common.sh@478 -- # '[' -n 1110328 ']' 00:08:56.158 12:36:54 -- nvmf/common.sh@479 -- # killprocess 1110328 00:08:56.158 12:36:54 -- common/autotest_common.sh@936 -- # '[' -z 1110328 ']' 00:08:56.158 12:36:54 -- common/autotest_common.sh@940 -- # kill -0 1110328 00:08:56.158 12:36:54 -- common/autotest_common.sh@941 -- # uname 00:08:56.158 12:36:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:56.158 12:36:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1110328 00:08:56.158 12:36:55 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:08:56.158 12:36:55 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:08:56.158 12:36:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1110328' 00:08:56.158 killing process with pid 1110328 00:08:56.158 12:36:55 -- common/autotest_common.sh@955 -- # kill 1110328 00:08:56.158 12:36:55 -- common/autotest_common.sh@960 -- # wait 1110328 00:08:56.417 12:36:55 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:08:56.417 12:36:55 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:08:56.417 12:36:55 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:08:56.417 12:36:55 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:56.417 12:36:55 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:56.417 12:36:55 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:56.417 12:36:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:56.417 12:36:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:58.320 12:36:57 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:58.320 00:08:58.320 real 0m16.514s 00:08:58.320 user 0m40.217s 00:08:58.320 sys 0m6.753s 00:08:58.320 12:36:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:58.320 12:36:57 -- common/autotest_common.sh@10 -- # set +x 00:08:58.320 ************************************ 00:08:58.320 END TEST nvmf_connect_stress 00:08:58.320 
************************************ 00:08:58.320 12:36:57 -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:08:58.320 12:36:57 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:58.320 12:36:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:58.320 12:36:57 -- common/autotest_common.sh@10 -- # set +x 00:08:58.578 ************************************ 00:08:58.578 START TEST nvmf_fused_ordering 00:08:58.578 ************************************ 00:08:58.578 12:36:57 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:08:58.578 * Looking for test storage... 00:08:58.578 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:58.578 12:36:57 -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:58.578 12:36:57 -- nvmf/common.sh@7 -- # uname -s 00:08:58.578 12:36:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:58.578 12:36:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:58.578 12:36:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:58.578 12:36:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:58.578 12:36:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:58.578 12:36:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:58.578 12:36:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:58.578 12:36:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:58.578 12:36:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:58.578 12:36:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:58.578 12:36:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:08:58.578 12:36:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:08:58.578 12:36:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:58.578 12:36:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:58.578 12:36:57 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:58.578 12:36:57 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:58.578 12:36:57 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:58.578 12:36:57 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:58.578 12:36:57 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:58.578 12:36:57 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:58.578 12:36:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.578 12:36:57 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.578 12:36:57 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.578 12:36:57 -- paths/export.sh@5 -- # export PATH 00:08:58.578 12:36:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.578 12:36:57 -- nvmf/common.sh@47 -- # : 0 00:08:58.578 12:36:57 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:58.578 12:36:57 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:58.578 12:36:57 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:58.578 12:36:57 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:58.578 12:36:57 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:58.578 12:36:57 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:58.578 12:36:57 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:58.578 12:36:57 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:58.578 12:36:57 -- target/fused_ordering.sh@12 -- # nvmftestinit 00:08:58.579 12:36:57 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:08:58.579 12:36:57 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:58.579 12:36:57 -- nvmf/common.sh@437 -- # prepare_net_devs 00:08:58.579 12:36:57 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:08:58.579 12:36:57 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:08:58.579 12:36:57 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:58.579 12:36:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:58.579 12:36:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:58.579 12:36:57 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:08:58.579 12:36:57 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:08:58.579 12:36:57 -- nvmf/common.sh@285 -- # xtrace_disable 00:08:58.579 12:36:57 -- common/autotest_common.sh@10 -- # set +x 00:09:01.110 12:36:59 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:09:01.110 12:36:59 -- nvmf/common.sh@291 -- # pci_devs=() 00:09:01.110 12:36:59 -- nvmf/common.sh@291 -- # local -a pci_devs 
00:09:01.110 12:36:59 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:01.110 12:36:59 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:01.110 12:36:59 -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:01.110 12:36:59 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:01.110 12:36:59 -- nvmf/common.sh@295 -- # net_devs=() 00:09:01.110 12:36:59 -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:01.110 12:36:59 -- nvmf/common.sh@296 -- # e810=() 00:09:01.110 12:36:59 -- nvmf/common.sh@296 -- # local -ga e810 00:09:01.110 12:36:59 -- nvmf/common.sh@297 -- # x722=() 00:09:01.110 12:36:59 -- nvmf/common.sh@297 -- # local -ga x722 00:09:01.110 12:36:59 -- nvmf/common.sh@298 -- # mlx=() 00:09:01.110 12:36:59 -- nvmf/common.sh@298 -- # local -ga mlx 00:09:01.110 12:36:59 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:01.110 12:36:59 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:01.110 12:36:59 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:01.110 12:36:59 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:01.110 12:36:59 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:01.110 12:36:59 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:01.110 12:36:59 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:01.110 12:36:59 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:01.110 12:36:59 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:01.110 12:36:59 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:01.110 12:36:59 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:01.110 12:36:59 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:01.110 12:36:59 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:01.110 12:36:59 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:01.110 12:36:59 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:01.110 12:36:59 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:01.110 12:36:59 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:01.110 12:36:59 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:01.110 12:36:59 -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:09:01.110 Found 0000:82:00.0 (0x8086 - 0x159b) 00:09:01.110 12:36:59 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:01.110 12:36:59 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:01.110 12:36:59 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:01.110 12:36:59 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:01.110 12:36:59 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:01.110 12:36:59 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:01.110 12:36:59 -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:09:01.110 Found 0000:82:00.1 (0x8086 - 0x159b) 00:09:01.110 12:36:59 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:01.110 12:36:59 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:01.110 12:36:59 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:01.110 12:36:59 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:01.110 12:36:59 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:01.110 12:36:59 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:01.110 12:36:59 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:01.110 12:36:59 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
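The xtrace above shows nvmf/common.sh sorting NICs into e810/x722/mlx buckets by PCI vendor:device ID and then matching this host's two 0x8086:0x159b E810 ports. A minimal stand-alone sketch of the same matching step, assuming lspci from pciutils is available (this helper is illustrative only, not part of the test scripts):

    # Enumerate Intel E810 ports (vendor 0x8086, device 0x159b, as matched above)
    # and list the kernel net devices behind each, like the pci_net_devs glob does.
    for pci in $(lspci -Dnmm -d 8086:159b | awk '{print $1}'); do
        echo "Found $pci (0x8086 - 0x159b)"
        ls "/sys/bus/pci/devices/$pci/net/"
    done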
00:09:01.110 12:36:59 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:01.110 12:36:59 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:01.110 12:36:59 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:01.110 12:36:59 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:01.110 12:36:59 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:09:01.110 Found net devices under 0000:82:00.0: cvl_0_0 00:09:01.110 12:36:59 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:01.110 12:36:59 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:01.110 12:36:59 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:01.110 12:36:59 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:01.110 12:36:59 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:01.110 12:36:59 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:09:01.110 Found net devices under 0000:82:00.1: cvl_0_1 00:09:01.110 12:36:59 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:01.110 12:36:59 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:09:01.110 12:36:59 -- nvmf/common.sh@403 -- # is_hw=yes 00:09:01.110 12:36:59 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:09:01.110 12:36:59 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:09:01.110 12:36:59 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:09:01.110 12:36:59 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:01.110 12:36:59 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:01.110 12:36:59 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:01.110 12:36:59 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:01.110 12:36:59 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:01.110 12:36:59 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:01.110 12:36:59 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:01.110 12:36:59 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:01.110 12:36:59 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:01.110 12:36:59 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:01.110 12:37:00 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:01.110 12:37:00 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:01.110 12:37:00 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:01.110 12:37:00 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:01.110 12:37:00 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:01.110 12:37:00 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:01.110 12:37:00 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:01.110 12:37:00 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:01.110 12:37:00 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:01.110 12:37:00 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:01.110 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:01.110 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.183 ms 00:09:01.110 00:09:01.110 --- 10.0.0.2 ping statistics --- 00:09:01.110 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:01.110 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:09:01.110 12:37:00 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:01.110 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:01.110 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms 00:09:01.110 00:09:01.110 --- 10.0.0.1 ping statistics --- 00:09:01.110 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:01.110 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:09:01.110 12:37:00 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:01.110 12:37:00 -- nvmf/common.sh@411 -- # return 0 00:09:01.110 12:37:00 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:09:01.110 12:37:00 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:01.110 12:37:00 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:09:01.110 12:37:00 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:09:01.110 12:37:00 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:01.110 12:37:00 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:09:01.110 12:37:00 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:09:01.110 12:37:00 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:09:01.110 12:37:00 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:09:01.110 12:37:00 -- common/autotest_common.sh@710 -- # xtrace_disable 00:09:01.110 12:37:00 -- common/autotest_common.sh@10 -- # set +x 00:09:01.110 12:37:00 -- nvmf/common.sh@470 -- # nvmfpid=1113928 00:09:01.110 12:37:00 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:01.110 12:37:00 -- nvmf/common.sh@471 -- # waitforlisten 1113928 00:09:01.368 12:37:00 -- common/autotest_common.sh@817 -- # '[' -z 1113928 ']' 00:09:01.368 12:37:00 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:01.368 12:37:00 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:01.368 12:37:00 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:01.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:01.368 12:37:00 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:01.368 12:37:00 -- common/autotest_common.sh@10 -- # set +x 00:09:01.368 [2024-04-16 12:37:00.225477] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 00:09:01.368 [2024-04-16 12:37:00.225553] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:01.368 EAL: No free 2048 kB hugepages reported on node 1 00:09:01.368 [2024-04-16 12:37:00.307877] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:01.368 [2024-04-16 12:37:00.414149] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:01.368 [2024-04-16 12:37:00.414210] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
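Just before the target app starts, nvmf_tcp_init stitches the two ports together: one is moved into a private network namespace as the target side, the other stays in the root namespace as the initiator, and the ping pair above verifies the link in both directions. Condensed into a sketch, with interface names and addresses exactly as traced in this run (run as root):

    # Target port lives in its own namespace; initiator port stays in the root namespace.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator IP
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP
    ping -c 1 10.0.0.2                                 # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator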
00:09:01.368 [2024-04-16 12:37:00.414225] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:01.368 [2024-04-16 12:37:00.414237] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:01.368 [2024-04-16 12:37:00.414247] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:01.368 [2024-04-16 12:37:00.414276] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:01.627 12:37:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:01.627 12:37:00 -- common/autotest_common.sh@850 -- # return 0 00:09:01.627 12:37:00 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:09:01.627 12:37:00 -- common/autotest_common.sh@716 -- # xtrace_disable 00:09:01.627 12:37:00 -- common/autotest_common.sh@10 -- # set +x 00:09:01.627 12:37:00 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:01.627 12:37:00 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:01.627 12:37:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:01.627 12:37:00 -- common/autotest_common.sh@10 -- # set +x 00:09:01.627 [2024-04-16 12:37:00.556163] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:01.627 12:37:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:01.627 12:37:00 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:01.627 12:37:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:01.627 12:37:00 -- common/autotest_common.sh@10 -- # set +x 00:09:01.627 12:37:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:01.627 12:37:00 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:01.627 12:37:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:01.627 12:37:00 -- common/autotest_common.sh@10 -- # set +x 00:09:01.627 [2024-04-16 12:37:00.572367] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:01.627 12:37:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:01.627 12:37:00 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:09:01.627 12:37:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:01.627 12:37:00 -- common/autotest_common.sh@10 -- # set +x 00:09:01.627 NULL1 00:09:01.627 12:37:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:01.627 12:37:00 -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:09:01.627 12:37:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:01.627 12:37:00 -- common/autotest_common.sh@10 -- # set +x 00:09:01.627 12:37:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:01.627 12:37:00 -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:09:01.627 12:37:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:01.627 12:37:00 -- common/autotest_common.sh@10 -- # set +x 00:09:01.627 12:37:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:01.627 12:37:00 -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:09:01.627 [2024-04-16 12:37:00.617950] Starting SPDK v24.05-pre 
git sha1 1b4773b8f / DPDK 24.03.0 initialization... 00:09:01.627 [2024-04-16 12:37:00.617991] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1114079 ] 00:09:01.627 EAL: No free 2048 kB hugepages reported on node 1 00:09:02.561 Attached to nqn.2016-06.io.spdk:cnode1 00:09:02.561 Namespace ID: 1 size: 1GB 00:09:02.561 fused_ordering(0) 00:09:02.561 fused_ordering(1) 00:09:02.561 fused_ordering(2) 00:09:02.561 fused_ordering(3) 00:09:02.561 fused_ordering(4) 00:09:02.561 fused_ordering(5) 00:09:02.561 fused_ordering(6) 00:09:02.561 fused_ordering(7) 00:09:02.561 fused_ordering(8) 00:09:02.561 fused_ordering(9) 00:09:02.561 fused_ordering(10) 00:09:02.561 fused_ordering(11) 00:09:02.561 fused_ordering(12) 00:09:02.561 fused_ordering(13) 00:09:02.561 fused_ordering(14) 00:09:02.561 fused_ordering(15) 00:09:02.561 fused_ordering(16) 00:09:02.561 fused_ordering(17) 00:09:02.561 fused_ordering(18) 00:09:02.561 fused_ordering(19) 00:09:02.561 fused_ordering(20) 00:09:02.561 fused_ordering(21) 00:09:02.561 fused_ordering(22) 00:09:02.561 fused_ordering(23) 00:09:02.561 fused_ordering(24) 00:09:02.561 fused_ordering(25) 00:09:02.561 fused_ordering(26) 00:09:02.561 fused_ordering(27) 00:09:02.561 fused_ordering(28) 00:09:02.561 fused_ordering(29) 00:09:02.561 fused_ordering(30) 00:09:02.561 fused_ordering(31) 00:09:02.561 fused_ordering(32) 00:09:02.561 fused_ordering(33) 00:09:02.561 fused_ordering(34) 00:09:02.561 fused_ordering(35) 00:09:02.561 fused_ordering(36) 00:09:02.561 fused_ordering(37) 00:09:02.561 fused_ordering(38) 00:09:02.561 fused_ordering(39) 00:09:02.561 fused_ordering(40) 00:09:02.561 fused_ordering(41) 00:09:02.561 fused_ordering(42) 00:09:02.561 fused_ordering(43) 00:09:02.561 fused_ordering(44) 00:09:02.561 fused_ordering(45) 00:09:02.561 fused_ordering(46) 00:09:02.561 fused_ordering(47) 00:09:02.561 fused_ordering(48) 00:09:02.561 fused_ordering(49) 00:09:02.561 fused_ordering(50) 00:09:02.561 fused_ordering(51) 00:09:02.561 fused_ordering(52) 00:09:02.561 fused_ordering(53) 00:09:02.561 fused_ordering(54) 00:09:02.561 fused_ordering(55) 00:09:02.561 fused_ordering(56) 00:09:02.561 fused_ordering(57) 00:09:02.561 fused_ordering(58) 00:09:02.561 fused_ordering(59) 00:09:02.561 fused_ordering(60) 00:09:02.561 fused_ordering(61) 00:09:02.561 fused_ordering(62) 00:09:02.561 fused_ordering(63) 00:09:02.561 fused_ordering(64) 00:09:02.561 fused_ordering(65) 00:09:02.561 fused_ordering(66) 00:09:02.561 fused_ordering(67) 00:09:02.561 fused_ordering(68) 00:09:02.561 fused_ordering(69) 00:09:02.561 fused_ordering(70) 00:09:02.561 fused_ordering(71) 00:09:02.561 fused_ordering(72) 00:09:02.561 fused_ordering(73) 00:09:02.561 fused_ordering(74) 00:09:02.561 fused_ordering(75) 00:09:02.561 fused_ordering(76) 00:09:02.561 fused_ordering(77) 00:09:02.561 fused_ordering(78) 00:09:02.561 fused_ordering(79) 00:09:02.561 fused_ordering(80) 00:09:02.561 fused_ordering(81) 00:09:02.561 fused_ordering(82) 00:09:02.561 fused_ordering(83) 00:09:02.561 fused_ordering(84) 00:09:02.561 fused_ordering(85) 00:09:02.561 fused_ordering(86) 00:09:02.561 fused_ordering(87) 00:09:02.562 fused_ordering(88) 00:09:02.562 fused_ordering(89) 00:09:02.562 fused_ordering(90) 00:09:02.562 fused_ordering(91) 00:09:02.562 fused_ordering(92) 00:09:02.562 fused_ordering(93) 00:09:02.562 fused_ordering(94) 00:09:02.562 fused_ordering(95) 
00:09:02.562 fused_ordering(96) 00:09:02.562 fused_ordering(97) 00:09:02.562 fused_ordering(98) 00:09:02.562 fused_ordering(99) 00:09:02.562 fused_ordering(100) 00:09:02.562 fused_ordering(101) 00:09:02.562 fused_ordering(102) 00:09:02.562 fused_ordering(103) 00:09:02.562 fused_ordering(104) 00:09:02.562 fused_ordering(105) 00:09:02.562 fused_ordering(106) 00:09:02.562 fused_ordering(107) 00:09:02.562 fused_ordering(108) 00:09:02.562 fused_ordering(109) 00:09:02.562 fused_ordering(110) 00:09:02.562 fused_ordering(111) 00:09:02.562 fused_ordering(112) 00:09:02.562 fused_ordering(113) 00:09:02.562 fused_ordering(114) 00:09:02.562 fused_ordering(115) 00:09:02.562 fused_ordering(116) 00:09:02.562 fused_ordering(117) 00:09:02.562 fused_ordering(118) 00:09:02.562 fused_ordering(119) 00:09:02.562 fused_ordering(120) 00:09:02.562 fused_ordering(121) 00:09:02.562 fused_ordering(122) 00:09:02.562 fused_ordering(123) 00:09:02.562 fused_ordering(124) 00:09:02.562 fused_ordering(125) 00:09:02.562 fused_ordering(126) 00:09:02.562 fused_ordering(127) 00:09:02.562 fused_ordering(128) 00:09:02.562 fused_ordering(129) 00:09:02.562 fused_ordering(130) 00:09:02.562 fused_ordering(131) 00:09:02.562 fused_ordering(132) 00:09:02.562 fused_ordering(133) 00:09:02.562 fused_ordering(134) 00:09:02.562 fused_ordering(135) 00:09:02.562 fused_ordering(136) 00:09:02.562 fused_ordering(137) 00:09:02.562 fused_ordering(138) 00:09:02.562 fused_ordering(139) 00:09:02.562 fused_ordering(140) 00:09:02.562 fused_ordering(141) 00:09:02.562 fused_ordering(142) 00:09:02.562 fused_ordering(143) 00:09:02.562 fused_ordering(144) 00:09:02.562 fused_ordering(145) 00:09:02.562 fused_ordering(146) 00:09:02.562 fused_ordering(147) 00:09:02.562 fused_ordering(148) 00:09:02.562 fused_ordering(149) 00:09:02.562 fused_ordering(150) 00:09:02.562 fused_ordering(151) 00:09:02.562 fused_ordering(152) 00:09:02.562 fused_ordering(153) 00:09:02.562 fused_ordering(154) 00:09:02.562 fused_ordering(155) 00:09:02.562 fused_ordering(156) 00:09:02.562 fused_ordering(157) 00:09:02.562 fused_ordering(158) 00:09:02.562 fused_ordering(159) 00:09:02.562 fused_ordering(160) 00:09:02.562 fused_ordering(161) 00:09:02.562 fused_ordering(162) 00:09:02.562 fused_ordering(163) 00:09:02.562 fused_ordering(164) 00:09:02.562 fused_ordering(165) 00:09:02.562 fused_ordering(166) 00:09:02.562 fused_ordering(167) 00:09:02.562 fused_ordering(168) 00:09:02.562 fused_ordering(169) 00:09:02.562 fused_ordering(170) 00:09:02.562 fused_ordering(171) 00:09:02.562 fused_ordering(172) 00:09:02.562 fused_ordering(173) 00:09:02.562 fused_ordering(174) 00:09:02.562 fused_ordering(175) 00:09:02.562 fused_ordering(176) 00:09:02.562 fused_ordering(177) 00:09:02.562 fused_ordering(178) 00:09:02.562 fused_ordering(179) 00:09:02.562 fused_ordering(180) 00:09:02.562 fused_ordering(181) 00:09:02.562 fused_ordering(182) 00:09:02.562 fused_ordering(183) 00:09:02.562 fused_ordering(184) 00:09:02.562 fused_ordering(185) 00:09:02.562 fused_ordering(186) 00:09:02.562 fused_ordering(187) 00:09:02.562 fused_ordering(188) 00:09:02.562 fused_ordering(189) 00:09:02.562 fused_ordering(190) 00:09:02.562 fused_ordering(191) 00:09:02.562 fused_ordering(192) 00:09:02.562 fused_ordering(193) 00:09:02.562 fused_ordering(194) 00:09:02.562 fused_ordering(195) 00:09:02.562 fused_ordering(196) 00:09:02.562 fused_ordering(197) 00:09:02.562 fused_ordering(198) 00:09:02.562 fused_ordering(199) 00:09:02.562 fused_ordering(200) 00:09:02.562 fused_ordering(201) 00:09:02.562 fused_ordering(202) 00:09:02.562 
fused_ordering(203) 00:09:02.562 fused_ordering(204) 00:09:02.562 fused_ordering(205) 00:09:02.821 fused_ordering(206) 00:09:02.821 fused_ordering(207) 00:09:02.821 fused_ordering(208) 00:09:02.821 fused_ordering(209) 00:09:02.821 fused_ordering(210) 00:09:02.821 fused_ordering(211) 00:09:02.821 fused_ordering(212) 00:09:02.821 fused_ordering(213) 00:09:02.821 fused_ordering(214) 00:09:02.821 fused_ordering(215) 00:09:02.821 fused_ordering(216) 00:09:02.821 fused_ordering(217) 00:09:02.821 fused_ordering(218) 00:09:02.821 fused_ordering(219) 00:09:02.821 fused_ordering(220) 00:09:02.821 fused_ordering(221) 00:09:02.821 fused_ordering(222) 00:09:02.821 fused_ordering(223) 00:09:02.821 fused_ordering(224) 00:09:02.821 fused_ordering(225) 00:09:02.821 fused_ordering(226) 00:09:02.821 fused_ordering(227) 00:09:02.821 fused_ordering(228) 00:09:02.821 fused_ordering(229) 00:09:02.821 fused_ordering(230) 00:09:02.821 fused_ordering(231) 00:09:02.821 fused_ordering(232) 00:09:02.821 fused_ordering(233) 00:09:02.821 fused_ordering(234) 00:09:02.821 fused_ordering(235) 00:09:02.821 fused_ordering(236) 00:09:02.821 fused_ordering(237) 00:09:02.821 fused_ordering(238) 00:09:02.821 fused_ordering(239) 00:09:02.821 fused_ordering(240) 00:09:02.821 fused_ordering(241) 00:09:02.821 fused_ordering(242) 00:09:02.821 fused_ordering(243) 00:09:02.821 fused_ordering(244) 00:09:02.821 fused_ordering(245) 00:09:02.821 fused_ordering(246) 00:09:02.821 fused_ordering(247) 00:09:02.821 fused_ordering(248) 00:09:02.821 fused_ordering(249) 00:09:02.821 fused_ordering(250) 00:09:02.821 fused_ordering(251) 00:09:02.821 fused_ordering(252) 00:09:02.821 fused_ordering(253) 00:09:02.821 fused_ordering(254) 00:09:02.821 fused_ordering(255) 00:09:02.821 fused_ordering(256) 00:09:02.821 fused_ordering(257) 00:09:02.821 fused_ordering(258) 00:09:02.821 fused_ordering(259) 00:09:02.821 fused_ordering(260) 00:09:02.821 fused_ordering(261) 00:09:02.821 fused_ordering(262) 00:09:02.821 fused_ordering(263) 00:09:02.821 fused_ordering(264) 00:09:02.821 fused_ordering(265) 00:09:02.821 fused_ordering(266) 00:09:02.821 fused_ordering(267) 00:09:02.821 fused_ordering(268) 00:09:02.821 fused_ordering(269) 00:09:02.821 fused_ordering(270) 00:09:02.821 fused_ordering(271) 00:09:02.821 fused_ordering(272) 00:09:02.821 fused_ordering(273) 00:09:02.821 fused_ordering(274) 00:09:02.821 fused_ordering(275) 00:09:02.821 fused_ordering(276) 00:09:02.821 fused_ordering(277) 00:09:02.821 fused_ordering(278) 00:09:02.821 fused_ordering(279) 00:09:02.821 fused_ordering(280) 00:09:02.821 fused_ordering(281) 00:09:02.821 fused_ordering(282) 00:09:02.821 fused_ordering(283) 00:09:02.821 fused_ordering(284) 00:09:02.821 fused_ordering(285) 00:09:02.821 fused_ordering(286) 00:09:02.821 fused_ordering(287) 00:09:02.821 fused_ordering(288) 00:09:02.821 fused_ordering(289) 00:09:02.821 fused_ordering(290) 00:09:02.821 fused_ordering(291) 00:09:02.821 fused_ordering(292) 00:09:02.821 fused_ordering(293) 00:09:02.821 fused_ordering(294) 00:09:02.821 fused_ordering(295) 00:09:02.821 fused_ordering(296) 00:09:02.821 fused_ordering(297) 00:09:02.821 fused_ordering(298) 00:09:02.821 fused_ordering(299) 00:09:02.821 fused_ordering(300) 00:09:02.821 fused_ordering(301) 00:09:02.821 fused_ordering(302) 00:09:02.821 fused_ordering(303) 00:09:02.821 fused_ordering(304) 00:09:02.821 fused_ordering(305) 00:09:02.821 fused_ordering(306) 00:09:02.821 fused_ordering(307) 00:09:02.821 fused_ordering(308) 00:09:02.821 fused_ordering(309) 00:09:02.821 fused_ordering(310) 
00:09:02.821 fused_ordering(311) 00:09:02.821 fused_ordering(312) 00:09:02.821 fused_ordering(313) 00:09:02.821 fused_ordering(314) 00:09:02.821 fused_ordering(315) 00:09:02.821 fused_ordering(316) 00:09:02.821 fused_ordering(317) 00:09:02.821 fused_ordering(318) 00:09:02.821 fused_ordering(319) 00:09:02.821 fused_ordering(320) 00:09:02.821 fused_ordering(321) 00:09:02.821 fused_ordering(322) 00:09:02.821 fused_ordering(323) 00:09:02.821 fused_ordering(324) 00:09:02.821 fused_ordering(325) 00:09:02.821 fused_ordering(326) 00:09:02.821 fused_ordering(327) 00:09:02.821 fused_ordering(328) 00:09:02.821 fused_ordering(329) 00:09:02.821 fused_ordering(330) 00:09:02.821 fused_ordering(331) 00:09:02.821 fused_ordering(332) 00:09:02.821 fused_ordering(333) 00:09:02.821 fused_ordering(334) 00:09:02.821 fused_ordering(335) 00:09:02.821 fused_ordering(336) 00:09:02.822 fused_ordering(337) 00:09:02.822 fused_ordering(338) 00:09:02.822 fused_ordering(339) 00:09:02.822 fused_ordering(340) 00:09:02.822 fused_ordering(341) 00:09:02.822 fused_ordering(342) 00:09:02.822 fused_ordering(343) 00:09:02.822 fused_ordering(344) 00:09:02.822 fused_ordering(345) 00:09:02.822 fused_ordering(346) 00:09:02.822 fused_ordering(347) 00:09:02.822 fused_ordering(348) 00:09:02.822 fused_ordering(349) 00:09:02.822 fused_ordering(350) 00:09:02.822 fused_ordering(351) 00:09:02.822 fused_ordering(352) 00:09:02.822 fused_ordering(353) 00:09:02.822 fused_ordering(354) 00:09:02.822 fused_ordering(355) 00:09:02.822 fused_ordering(356) 00:09:02.822 fused_ordering(357) 00:09:02.822 fused_ordering(358) 00:09:02.822 fused_ordering(359) 00:09:02.822 fused_ordering(360) 00:09:02.822 fused_ordering(361) 00:09:02.822 fused_ordering(362) 00:09:02.822 fused_ordering(363) 00:09:02.822 fused_ordering(364) 00:09:02.822 fused_ordering(365) 00:09:02.822 fused_ordering(366) 00:09:02.822 fused_ordering(367) 00:09:02.822 fused_ordering(368) 00:09:02.822 fused_ordering(369) 00:09:02.822 fused_ordering(370) 00:09:02.822 fused_ordering(371) 00:09:02.822 fused_ordering(372) 00:09:02.822 fused_ordering(373) 00:09:02.822 fused_ordering(374) 00:09:02.822 fused_ordering(375) 00:09:02.822 fused_ordering(376) 00:09:02.822 fused_ordering(377) 00:09:02.822 fused_ordering(378) 00:09:02.822 fused_ordering(379) 00:09:02.822 fused_ordering(380) 00:09:02.822 fused_ordering(381) 00:09:02.822 fused_ordering(382) 00:09:02.822 fused_ordering(383) 00:09:02.822 fused_ordering(384) 00:09:02.822 fused_ordering(385) 00:09:02.822 fused_ordering(386) 00:09:02.822 fused_ordering(387) 00:09:02.822 fused_ordering(388) 00:09:02.822 fused_ordering(389) 00:09:02.822 fused_ordering(390) 00:09:02.822 fused_ordering(391) 00:09:02.822 fused_ordering(392) 00:09:02.822 fused_ordering(393) 00:09:02.822 fused_ordering(394) 00:09:02.822 fused_ordering(395) 00:09:02.822 fused_ordering(396) 00:09:02.822 fused_ordering(397) 00:09:02.822 fused_ordering(398) 00:09:02.822 fused_ordering(399) 00:09:02.822 fused_ordering(400) 00:09:02.822 fused_ordering(401) 00:09:02.822 fused_ordering(402) 00:09:02.822 fused_ordering(403) 00:09:02.822 fused_ordering(404) 00:09:02.822 fused_ordering(405) 00:09:02.822 fused_ordering(406) 00:09:02.822 fused_ordering(407) 00:09:02.822 fused_ordering(408) 00:09:02.822 fused_ordering(409) 00:09:02.822 fused_ordering(410) 00:09:03.458 fused_ordering(411) 00:09:03.458 fused_ordering(412) 00:09:03.458 fused_ordering(413) 00:09:03.458 fused_ordering(414) 00:09:03.458 fused_ordering(415) 00:09:03.458 fused_ordering(416) 00:09:03.458 fused_ordering(417) 00:09:03.458 
fused_ordering(418) 00:09:03.458 fused_ordering(419) 00:09:03.458 fused_ordering(420) 00:09:03.458 fused_ordering(421) 00:09:03.458 fused_ordering(422) 00:09:03.458 fused_ordering(423) 00:09:03.458 fused_ordering(424) 00:09:03.458 fused_ordering(425) 00:09:03.458 fused_ordering(426) 00:09:03.458 fused_ordering(427) 00:09:03.458 fused_ordering(428) 00:09:03.458 fused_ordering(429) 00:09:03.458 fused_ordering(430) 00:09:03.458 fused_ordering(431) 00:09:03.458 fused_ordering(432) 00:09:03.458 fused_ordering(433) 00:09:03.458 fused_ordering(434) 00:09:03.458 fused_ordering(435) 00:09:03.458 fused_ordering(436) 00:09:03.458 fused_ordering(437) 00:09:03.458 fused_ordering(438) 00:09:03.458 fused_ordering(439) 00:09:03.458 fused_ordering(440) 00:09:03.458 fused_ordering(441) 00:09:03.458 fused_ordering(442) 00:09:03.458 fused_ordering(443) 00:09:03.458 fused_ordering(444) 00:09:03.458 fused_ordering(445) 00:09:03.458 fused_ordering(446) 00:09:03.458 fused_ordering(447) 00:09:03.458 fused_ordering(448) 00:09:03.458 fused_ordering(449) 00:09:03.458 fused_ordering(450) 00:09:03.458 fused_ordering(451) 00:09:03.458 fused_ordering(452) 00:09:03.458 fused_ordering(453) 00:09:03.458 fused_ordering(454) 00:09:03.458 fused_ordering(455) 00:09:03.458 fused_ordering(456) 00:09:03.458 fused_ordering(457) 00:09:03.458 fused_ordering(458) 00:09:03.458 fused_ordering(459) 00:09:03.458 fused_ordering(460) 00:09:03.458 fused_ordering(461) 00:09:03.458 fused_ordering(462) 00:09:03.458 fused_ordering(463) 00:09:03.458 fused_ordering(464) 00:09:03.458 fused_ordering(465) 00:09:03.458 fused_ordering(466) 00:09:03.458 fused_ordering(467) 00:09:03.458 fused_ordering(468) 00:09:03.458 fused_ordering(469) 00:09:03.458 fused_ordering(470) 00:09:03.458 fused_ordering(471) 00:09:03.458 fused_ordering(472) 00:09:03.458 fused_ordering(473) 00:09:03.458 fused_ordering(474) 00:09:03.458 fused_ordering(475) 00:09:03.458 fused_ordering(476) 00:09:03.458 fused_ordering(477) 00:09:03.458 fused_ordering(478) 00:09:03.458 fused_ordering(479) 00:09:03.458 fused_ordering(480) 00:09:03.458 fused_ordering(481) 00:09:03.458 fused_ordering(482) 00:09:03.458 fused_ordering(483) 00:09:03.458 fused_ordering(484) 00:09:03.458 fused_ordering(485) 00:09:03.458 fused_ordering(486) 00:09:03.458 fused_ordering(487) 00:09:03.458 fused_ordering(488) 00:09:03.458 fused_ordering(489) 00:09:03.458 fused_ordering(490) 00:09:03.458 fused_ordering(491) 00:09:03.458 fused_ordering(492) 00:09:03.458 fused_ordering(493) 00:09:03.458 fused_ordering(494) 00:09:03.458 fused_ordering(495) 00:09:03.458 fused_ordering(496) 00:09:03.458 fused_ordering(497) 00:09:03.458 fused_ordering(498) 00:09:03.458 fused_ordering(499) 00:09:03.458 fused_ordering(500) 00:09:03.458 fused_ordering(501) 00:09:03.458 fused_ordering(502) 00:09:03.458 fused_ordering(503) 00:09:03.458 fused_ordering(504) 00:09:03.458 fused_ordering(505) 00:09:03.458 fused_ordering(506) 00:09:03.458 fused_ordering(507) 00:09:03.458 fused_ordering(508) 00:09:03.458 fused_ordering(509) 00:09:03.458 fused_ordering(510) 00:09:03.458 fused_ordering(511) 00:09:03.458 fused_ordering(512) 00:09:03.458 fused_ordering(513) 00:09:03.458 fused_ordering(514) 00:09:03.458 fused_ordering(515) 00:09:03.458 fused_ordering(516) 00:09:03.458 fused_ordering(517) 00:09:03.458 fused_ordering(518) 00:09:03.458 fused_ordering(519) 00:09:03.458 fused_ordering(520) 00:09:03.458 fused_ordering(521) 00:09:03.458 fused_ordering(522) 00:09:03.458 fused_ordering(523) 00:09:03.458 fused_ordering(524) 00:09:03.458 fused_ordering(525) 
00:09:03.458 fused_ordering(526) 00:09:03.458 fused_ordering(527) 00:09:03.458 fused_ordering(528) 00:09:03.458 fused_ordering(529) 00:09:03.458 fused_ordering(530) 00:09:03.458 fused_ordering(531) 00:09:03.458 fused_ordering(532) 00:09:03.458 fused_ordering(533) 00:09:03.458 fused_ordering(534) 00:09:03.458 fused_ordering(535) 00:09:03.458 fused_ordering(536) 00:09:03.458 fused_ordering(537) 00:09:03.458 fused_ordering(538) 00:09:03.458 fused_ordering(539) 00:09:03.458 fused_ordering(540) 00:09:03.458 fused_ordering(541) 00:09:03.458 fused_ordering(542) 00:09:03.458 fused_ordering(543) 00:09:03.458 fused_ordering(544) 00:09:03.458 fused_ordering(545) 00:09:03.458 fused_ordering(546) 00:09:03.458 fused_ordering(547) 00:09:03.458 fused_ordering(548) 00:09:03.458 fused_ordering(549) 00:09:03.458 fused_ordering(550) 00:09:03.458 fused_ordering(551) 00:09:03.458 fused_ordering(552) 00:09:03.458 fused_ordering(553) 00:09:03.458 fused_ordering(554) 00:09:03.458 fused_ordering(555) 00:09:03.458 fused_ordering(556) 00:09:03.458 fused_ordering(557) 00:09:03.458 fused_ordering(558) 00:09:03.458 fused_ordering(559) 00:09:03.458 fused_ordering(560) 00:09:03.458 fused_ordering(561) 00:09:03.458 fused_ordering(562) 00:09:03.458 fused_ordering(563) 00:09:03.458 fused_ordering(564) 00:09:03.458 fused_ordering(565) 00:09:03.458 fused_ordering(566) 00:09:03.458 fused_ordering(567) 00:09:03.458 fused_ordering(568) 00:09:03.458 fused_ordering(569) 00:09:03.458 fused_ordering(570) 00:09:03.458 fused_ordering(571) 00:09:03.458 fused_ordering(572) 00:09:03.458 fused_ordering(573) 00:09:03.458 fused_ordering(574) 00:09:03.458 fused_ordering(575) 00:09:03.458 fused_ordering(576) 00:09:03.458 fused_ordering(577) 00:09:03.458 fused_ordering(578) 00:09:03.458 fused_ordering(579) 00:09:03.458 fused_ordering(580) 00:09:03.458 fused_ordering(581) 00:09:03.458 fused_ordering(582) 00:09:03.458 fused_ordering(583) 00:09:03.458 fused_ordering(584) 00:09:03.458 fused_ordering(585) 00:09:03.458 fused_ordering(586) 00:09:03.458 fused_ordering(587) 00:09:03.458 fused_ordering(588) 00:09:03.458 fused_ordering(589) 00:09:03.458 fused_ordering(590) 00:09:03.458 fused_ordering(591) 00:09:03.458 fused_ordering(592) 00:09:03.458 fused_ordering(593) 00:09:03.458 fused_ordering(594) 00:09:03.458 fused_ordering(595) 00:09:03.458 fused_ordering(596) 00:09:03.458 fused_ordering(597) 00:09:03.458 fused_ordering(598) 00:09:03.458 fused_ordering(599) 00:09:03.458 fused_ordering(600) 00:09:03.458 fused_ordering(601) 00:09:03.458 fused_ordering(602) 00:09:03.458 fused_ordering(603) 00:09:03.458 fused_ordering(604) 00:09:03.458 fused_ordering(605) 00:09:03.458 fused_ordering(606) 00:09:03.458 fused_ordering(607) 00:09:03.458 fused_ordering(608) 00:09:03.458 fused_ordering(609) 00:09:03.458 fused_ordering(610) 00:09:03.458 fused_ordering(611) 00:09:03.458 fused_ordering(612) 00:09:03.458 fused_ordering(613) 00:09:03.458 fused_ordering(614) 00:09:03.458 fused_ordering(615) 00:09:04.037 fused_ordering(616) 00:09:04.037 fused_ordering(617) 00:09:04.037 fused_ordering(618) 00:09:04.037 fused_ordering(619) 00:09:04.037 fused_ordering(620) 00:09:04.037 fused_ordering(621) 00:09:04.037 fused_ordering(622) 00:09:04.037 fused_ordering(623) 00:09:04.037 fused_ordering(624) 00:09:04.037 fused_ordering(625) 00:09:04.037 fused_ordering(626) 00:09:04.037 fused_ordering(627) 00:09:04.037 fused_ordering(628) 00:09:04.037 fused_ordering(629) 00:09:04.037 fused_ordering(630) 00:09:04.037 fused_ordering(631) 00:09:04.037 fused_ordering(632) 00:09:04.037 
fused_ordering(633) 00:09:04.037 fused_ordering(634) 00:09:04.037 fused_ordering(635) 00:09:04.037 fused_ordering(636) 00:09:04.037 fused_ordering(637) 00:09:04.037 fused_ordering(638) 00:09:04.037 fused_ordering(639) 00:09:04.037 fused_ordering(640) 00:09:04.037 fused_ordering(641) 00:09:04.037 fused_ordering(642) 00:09:04.037 fused_ordering(643) 00:09:04.037 fused_ordering(644) 00:09:04.037 fused_ordering(645) 00:09:04.037 fused_ordering(646) 00:09:04.037 fused_ordering(647) 00:09:04.037 fused_ordering(648) 00:09:04.037 fused_ordering(649) 00:09:04.037 fused_ordering(650) 00:09:04.037 fused_ordering(651) 00:09:04.037 fused_ordering(652) 00:09:04.037 fused_ordering(653) 00:09:04.037 fused_ordering(654) 00:09:04.037 fused_ordering(655) 00:09:04.037 fused_ordering(656) 00:09:04.037 fused_ordering(657) 00:09:04.037 fused_ordering(658) 00:09:04.037 fused_ordering(659) 00:09:04.037 fused_ordering(660) 00:09:04.037 fused_ordering(661) 00:09:04.037 fused_ordering(662) 00:09:04.037 fused_ordering(663) 00:09:04.037 fused_ordering(664) 00:09:04.037 fused_ordering(665) 00:09:04.037 fused_ordering(666) 00:09:04.037 fused_ordering(667) 00:09:04.037 fused_ordering(668) 00:09:04.037 fused_ordering(669) 00:09:04.037 fused_ordering(670) 00:09:04.037 fused_ordering(671) 00:09:04.037 fused_ordering(672) 00:09:04.037 fused_ordering(673) 00:09:04.037 fused_ordering(674) 00:09:04.037 fused_ordering(675) 00:09:04.037 fused_ordering(676) 00:09:04.037 fused_ordering(677) 00:09:04.037 fused_ordering(678) 00:09:04.037 fused_ordering(679) 00:09:04.037 fused_ordering(680) 00:09:04.037 fused_ordering(681) 00:09:04.037 fused_ordering(682) 00:09:04.037 fused_ordering(683) 00:09:04.037 fused_ordering(684) 00:09:04.037 fused_ordering(685) 00:09:04.037 fused_ordering(686) 00:09:04.037 fused_ordering(687) 00:09:04.037 fused_ordering(688) 00:09:04.037 fused_ordering(689) 00:09:04.037 fused_ordering(690) 00:09:04.037 fused_ordering(691) 00:09:04.037 fused_ordering(692) 00:09:04.037 fused_ordering(693) 00:09:04.037 fused_ordering(694) 00:09:04.037 fused_ordering(695) 00:09:04.037 fused_ordering(696) 00:09:04.037 fused_ordering(697) 00:09:04.037 fused_ordering(698) 00:09:04.037 fused_ordering(699) 00:09:04.037 fused_ordering(700) 00:09:04.037 fused_ordering(701) 00:09:04.037 fused_ordering(702) 00:09:04.037 fused_ordering(703) 00:09:04.037 fused_ordering(704) 00:09:04.037 fused_ordering(705) 00:09:04.037 fused_ordering(706) 00:09:04.037 fused_ordering(707) 00:09:04.037 fused_ordering(708) 00:09:04.037 fused_ordering(709) 00:09:04.037 fused_ordering(710) 00:09:04.037 fused_ordering(711) 00:09:04.037 fused_ordering(712) 00:09:04.037 fused_ordering(713) 00:09:04.037 fused_ordering(714) 00:09:04.037 fused_ordering(715) 00:09:04.037 fused_ordering(716) 00:09:04.037 fused_ordering(717) 00:09:04.037 fused_ordering(718) 00:09:04.037 fused_ordering(719) 00:09:04.037 fused_ordering(720) 00:09:04.037 fused_ordering(721) 00:09:04.037 fused_ordering(722) 00:09:04.037 fused_ordering(723) 00:09:04.037 fused_ordering(724) 00:09:04.037 fused_ordering(725) 00:09:04.037 fused_ordering(726) 00:09:04.037 fused_ordering(727) 00:09:04.037 fused_ordering(728) 00:09:04.037 fused_ordering(729) 00:09:04.037 fused_ordering(730) 00:09:04.037 fused_ordering(731) 00:09:04.037 fused_ordering(732) 00:09:04.037 fused_ordering(733) 00:09:04.037 fused_ordering(734) 00:09:04.037 fused_ordering(735) 00:09:04.037 fused_ordering(736) 00:09:04.037 fused_ordering(737) 00:09:04.037 fused_ordering(738) 00:09:04.037 fused_ordering(739) 00:09:04.037 fused_ordering(740) 
00:09:04.037 fused_ordering(741) 00:09:04.037 fused_ordering(742) 00:09:04.037 fused_ordering(743) 00:09:04.037 fused_ordering(744) 00:09:04.037 fused_ordering(745) 00:09:04.037 fused_ordering(746) 00:09:04.037 fused_ordering(747) 00:09:04.037 fused_ordering(748) 00:09:04.037 fused_ordering(749) 00:09:04.037 fused_ordering(750) 00:09:04.037 fused_ordering(751) 00:09:04.037 fused_ordering(752) 00:09:04.037 fused_ordering(753) 00:09:04.037 fused_ordering(754) 00:09:04.037 fused_ordering(755) 00:09:04.037 fused_ordering(756) 00:09:04.037 fused_ordering(757) 00:09:04.037 fused_ordering(758) 00:09:04.037 fused_ordering(759) 00:09:04.037 fused_ordering(760) 00:09:04.037 fused_ordering(761) 00:09:04.037 fused_ordering(762) 00:09:04.037 fused_ordering(763) 00:09:04.037 fused_ordering(764) 00:09:04.037 fused_ordering(765) 00:09:04.037 fused_ordering(766) 00:09:04.037 fused_ordering(767) 00:09:04.037 fused_ordering(768) 00:09:04.037 fused_ordering(769) 00:09:04.037 fused_ordering(770) 00:09:04.037 fused_ordering(771) 00:09:04.037 fused_ordering(772) 00:09:04.037 fused_ordering(773) 00:09:04.037 fused_ordering(774) 00:09:04.037 fused_ordering(775) 00:09:04.037 fused_ordering(776) 00:09:04.037 fused_ordering(777) 00:09:04.037 fused_ordering(778) 00:09:04.037 fused_ordering(779) 00:09:04.037 fused_ordering(780) 00:09:04.037 fused_ordering(781) 00:09:04.037 fused_ordering(782) 00:09:04.037 fused_ordering(783) 00:09:04.037 fused_ordering(784) 00:09:04.037 fused_ordering(785) 00:09:04.037 fused_ordering(786) 00:09:04.037 fused_ordering(787) 00:09:04.037 fused_ordering(788) 00:09:04.037 fused_ordering(789) 00:09:04.037 fused_ordering(790) 00:09:04.037 fused_ordering(791) 00:09:04.037 fused_ordering(792) 00:09:04.037 fused_ordering(793) 00:09:04.037 fused_ordering(794) 00:09:04.037 fused_ordering(795) 00:09:04.037 fused_ordering(796) 00:09:04.037 fused_ordering(797) 00:09:04.037 fused_ordering(798) 00:09:04.037 fused_ordering(799) 00:09:04.037 fused_ordering(800) 00:09:04.037 fused_ordering(801) 00:09:04.037 fused_ordering(802) 00:09:04.037 fused_ordering(803) 00:09:04.037 fused_ordering(804) 00:09:04.037 fused_ordering(805) 00:09:04.037 fused_ordering(806) 00:09:04.037 fused_ordering(807) 00:09:04.037 fused_ordering(808) 00:09:04.037 fused_ordering(809) 00:09:04.037 fused_ordering(810) 00:09:04.037 fused_ordering(811) 00:09:04.037 fused_ordering(812) 00:09:04.037 fused_ordering(813) 00:09:04.037 fused_ordering(814) 00:09:04.037 fused_ordering(815) 00:09:04.037 fused_ordering(816) 00:09:04.037 fused_ordering(817) 00:09:04.037 fused_ordering(818) 00:09:04.037 fused_ordering(819) 00:09:04.037 fused_ordering(820) 00:09:04.975 fused_ordering(821) 00:09:04.975 fused_ordering(822) 00:09:04.975 fused_ordering(823) 00:09:04.975 fused_ordering(824) 00:09:04.975 fused_ordering(825) 00:09:04.975 fused_ordering(826) 00:09:04.975 fused_ordering(827) 00:09:04.975 fused_ordering(828) 00:09:04.975 fused_ordering(829) 00:09:04.975 fused_ordering(830) 00:09:04.975 fused_ordering(831) 00:09:04.975 fused_ordering(832) 00:09:04.975 fused_ordering(833) 00:09:04.975 fused_ordering(834) 00:09:04.975 fused_ordering(835) 00:09:04.975 fused_ordering(836) 00:09:04.975 fused_ordering(837) 00:09:04.975 fused_ordering(838) 00:09:04.975 fused_ordering(839) 00:09:04.975 fused_ordering(840) 00:09:04.975 fused_ordering(841) 00:09:04.975 fused_ordering(842) 00:09:04.975 fused_ordering(843) 00:09:04.975 fused_ordering(844) 00:09:04.975 fused_ordering(845) 00:09:04.975 fused_ordering(846) 00:09:04.975 fused_ordering(847) 00:09:04.975 
fused_ordering(848) 00:09:04.975 fused_ordering(849) 00:09:04.975 fused_ordering(850) 00:09:04.975 fused_ordering(851) 00:09:04.975 fused_ordering(852) 00:09:04.975 fused_ordering(853) 00:09:04.975 fused_ordering(854) 00:09:04.975 fused_ordering(855) 00:09:04.975 fused_ordering(856) 00:09:04.975 fused_ordering(857) 00:09:04.975 fused_ordering(858) 00:09:04.975 fused_ordering(859) 00:09:04.975 fused_ordering(860) 00:09:04.975 fused_ordering(861) 00:09:04.975 fused_ordering(862) 00:09:04.975 fused_ordering(863) 00:09:04.975 fused_ordering(864) 00:09:04.975 fused_ordering(865) 00:09:04.975 fused_ordering(866) 00:09:04.975 fused_ordering(867) 00:09:04.975 fused_ordering(868) 00:09:04.975 fused_ordering(869) 00:09:04.975 fused_ordering(870) 00:09:04.975 fused_ordering(871) 00:09:04.975 fused_ordering(872) 00:09:04.975 fused_ordering(873) 00:09:04.975 fused_ordering(874) 00:09:04.975 fused_ordering(875) 00:09:04.975 fused_ordering(876) 00:09:04.975 fused_ordering(877) 00:09:04.975 fused_ordering(878) 00:09:04.975 fused_ordering(879) 00:09:04.975 fused_ordering(880) 00:09:04.975 fused_ordering(881) 00:09:04.975 fused_ordering(882) 00:09:04.975 fused_ordering(883) 00:09:04.975 fused_ordering(884) 00:09:04.975 fused_ordering(885) 00:09:04.975 fused_ordering(886) 00:09:04.975 fused_ordering(887) 00:09:04.975 fused_ordering(888) 00:09:04.975 fused_ordering(889) 00:09:04.975 fused_ordering(890) 00:09:04.975 fused_ordering(891) 00:09:04.975 fused_ordering(892) 00:09:04.975 fused_ordering(893) 00:09:04.975 fused_ordering(894) 00:09:04.975 fused_ordering(895) 00:09:04.975 fused_ordering(896) 00:09:04.975 fused_ordering(897) 00:09:04.975 fused_ordering(898) 00:09:04.975 fused_ordering(899) 00:09:04.975 fused_ordering(900) 00:09:04.975 fused_ordering(901) 00:09:04.975 fused_ordering(902) 00:09:04.975 fused_ordering(903) 00:09:04.975 fused_ordering(904) 00:09:04.975 fused_ordering(905) 00:09:04.975 fused_ordering(906) 00:09:04.975 fused_ordering(907) 00:09:04.975 fused_ordering(908) 00:09:04.975 fused_ordering(909) 00:09:04.975 fused_ordering(910) 00:09:04.975 fused_ordering(911) 00:09:04.975 fused_ordering(912) 00:09:04.975 fused_ordering(913) 00:09:04.975 fused_ordering(914) 00:09:04.975 fused_ordering(915) 00:09:04.975 fused_ordering(916) 00:09:04.975 fused_ordering(917) 00:09:04.975 fused_ordering(918) 00:09:04.975 fused_ordering(919) 00:09:04.975 fused_ordering(920) 00:09:04.975 fused_ordering(921) 00:09:04.975 fused_ordering(922) 00:09:04.975 fused_ordering(923) 00:09:04.975 fused_ordering(924) 00:09:04.975 fused_ordering(925) 00:09:04.975 fused_ordering(926) 00:09:04.975 fused_ordering(927) 00:09:04.975 fused_ordering(928) 00:09:04.975 fused_ordering(929) 00:09:04.975 fused_ordering(930) 00:09:04.975 fused_ordering(931) 00:09:04.975 fused_ordering(932) 00:09:04.975 fused_ordering(933) 00:09:04.975 fused_ordering(934) 00:09:04.975 fused_ordering(935) 00:09:04.975 fused_ordering(936) 00:09:04.975 fused_ordering(937) 00:09:04.975 fused_ordering(938) 00:09:04.975 fused_ordering(939) 00:09:04.975 fused_ordering(940) 00:09:04.975 fused_ordering(941) 00:09:04.975 fused_ordering(942) 00:09:04.975 fused_ordering(943) 00:09:04.975 fused_ordering(944) 00:09:04.975 fused_ordering(945) 00:09:04.975 fused_ordering(946) 00:09:04.975 fused_ordering(947) 00:09:04.975 fused_ordering(948) 00:09:04.975 fused_ordering(949) 00:09:04.975 fused_ordering(950) 00:09:04.975 fused_ordering(951) 00:09:04.975 fused_ordering(952) 00:09:04.975 fused_ordering(953) 00:09:04.975 fused_ordering(954) 00:09:04.975 fused_ordering(955) 
00:09:04.975 fused_ordering(956) 00:09:04.975 fused_ordering(957) 00:09:04.975 fused_ordering(958) 00:09:04.975 fused_ordering(959) 00:09:04.975 fused_ordering(960) 00:09:04.975 fused_ordering(961) 00:09:04.975 fused_ordering(962) 00:09:04.975 fused_ordering(963) 00:09:04.975 fused_ordering(964) 00:09:04.975 fused_ordering(965) 00:09:04.975 fused_ordering(966) 00:09:04.975 fused_ordering(967) 00:09:04.975 fused_ordering(968) 00:09:04.975 fused_ordering(969) 00:09:04.975 fused_ordering(970) 00:09:04.975 fused_ordering(971) 00:09:04.975 fused_ordering(972) 00:09:04.975 fused_ordering(973) 00:09:04.975 fused_ordering(974) 00:09:04.975 fused_ordering(975) 00:09:04.975 fused_ordering(976) 00:09:04.975 fused_ordering(977) 00:09:04.975 fused_ordering(978) 00:09:04.975 fused_ordering(979) 00:09:04.975 fused_ordering(980) 00:09:04.975 fused_ordering(981) 00:09:04.975 fused_ordering(982) 00:09:04.975 fused_ordering(983) 00:09:04.975 fused_ordering(984) 00:09:04.975 fused_ordering(985) 00:09:04.975 fused_ordering(986) 00:09:04.975 fused_ordering(987) 00:09:04.975 fused_ordering(988) 00:09:04.975 fused_ordering(989) 00:09:04.975 fused_ordering(990) 00:09:04.975 fused_ordering(991) 00:09:04.975 fused_ordering(992) 00:09:04.975 fused_ordering(993) 00:09:04.975 fused_ordering(994) 00:09:04.975 fused_ordering(995) 00:09:04.975 fused_ordering(996) 00:09:04.975 fused_ordering(997) 00:09:04.976 fused_ordering(998) 00:09:04.976 fused_ordering(999) 00:09:04.976 fused_ordering(1000) 00:09:04.976 fused_ordering(1001) 00:09:04.976 fused_ordering(1002) 00:09:04.976 fused_ordering(1003) 00:09:04.976 fused_ordering(1004) 00:09:04.976 fused_ordering(1005) 00:09:04.976 fused_ordering(1006) 00:09:04.976 fused_ordering(1007) 00:09:04.976 fused_ordering(1008) 00:09:04.976 fused_ordering(1009) 00:09:04.976 fused_ordering(1010) 00:09:04.976 fused_ordering(1011) 00:09:04.976 fused_ordering(1012) 00:09:04.976 fused_ordering(1013) 00:09:04.976 fused_ordering(1014) 00:09:04.976 fused_ordering(1015) 00:09:04.976 fused_ordering(1016) 00:09:04.976 fused_ordering(1017) 00:09:04.976 fused_ordering(1018) 00:09:04.976 fused_ordering(1019) 00:09:04.976 fused_ordering(1020) 00:09:04.976 fused_ordering(1021) 00:09:04.976 fused_ordering(1022) 00:09:04.976 fused_ordering(1023) 00:09:04.976 12:37:03 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:09:04.976 12:37:03 -- target/fused_ordering.sh@25 -- # nvmftestfini 00:09:04.976 12:37:03 -- nvmf/common.sh@477 -- # nvmfcleanup 00:09:04.976 12:37:03 -- nvmf/common.sh@117 -- # sync 00:09:04.976 12:37:03 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:04.976 12:37:03 -- nvmf/common.sh@120 -- # set +e 00:09:04.976 12:37:03 -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:04.976 12:37:03 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:04.976 rmmod nvme_tcp 00:09:04.976 rmmod nvme_fabrics 00:09:04.976 rmmod nvme_keyring 00:09:04.976 12:37:03 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:04.976 12:37:03 -- nvmf/common.sh@124 -- # set -e 00:09:04.976 12:37:03 -- nvmf/common.sh@125 -- # return 0 00:09:04.976 12:37:03 -- nvmf/common.sh@478 -- # '[' -n 1113928 ']' 00:09:04.976 12:37:03 -- nvmf/common.sh@479 -- # killprocess 1113928 00:09:04.976 12:37:03 -- common/autotest_common.sh@936 -- # '[' -z 1113928 ']' 00:09:04.976 12:37:03 -- common/autotest_common.sh@940 -- # kill -0 1113928 00:09:04.976 12:37:03 -- common/autotest_common.sh@941 -- # uname 00:09:04.976 12:37:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:04.976 12:37:03 -- 
common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1113928 00:09:04.976 12:37:03 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:09:04.976 12:37:03 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:09:04.976 12:37:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1113928' 00:09:04.976 killing process with pid 1113928 00:09:04.976 12:37:03 -- common/autotest_common.sh@955 -- # kill 1113928 00:09:04.976 12:37:03 -- common/autotest_common.sh@960 -- # wait 1113928 00:09:05.234 12:37:04 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:09:05.234 12:37:04 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:09:05.234 12:37:04 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:09:05.234 12:37:04 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:05.234 12:37:04 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:05.234 12:37:04 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:05.234 12:37:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:05.234 12:37:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:07.140 12:37:06 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:07.140 00:09:07.140 real 0m8.737s 00:09:07.140 user 0m5.816s 00:09:07.140 sys 0m4.390s 00:09:07.140 12:37:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:07.140 12:37:06 -- common/autotest_common.sh@10 -- # set +x 00:09:07.140 ************************************ 00:09:07.140 END TEST nvmf_fused_ordering 00:09:07.140 ************************************ 00:09:07.398 12:37:06 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:09:07.398 12:37:06 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:07.398 12:37:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:07.398 12:37:06 -- common/autotest_common.sh@10 -- # set +x 00:09:07.398 ************************************ 00:09:07.398 START TEST nvmf_delete_subsystem 00:09:07.398 ************************************ 00:09:07.398 12:37:06 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:09:07.398 * Looking for test storage... 
00:09:07.398 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:07.398 12:37:06 -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:07.398 12:37:06 -- nvmf/common.sh@7 -- # uname -s 00:09:07.398 12:37:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:07.398 12:37:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:07.398 12:37:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:07.398 12:37:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:07.398 12:37:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:07.398 12:37:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:07.398 12:37:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:07.398 12:37:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:07.398 12:37:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:07.398 12:37:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:07.398 12:37:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:09:07.398 12:37:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:09:07.398 12:37:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:07.398 12:37:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:07.398 12:37:06 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:07.398 12:37:06 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:07.398 12:37:06 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:07.398 12:37:06 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:07.398 12:37:06 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:07.398 12:37:06 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:07.398 12:37:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[same golangci/protoc/go triple repeated four more times]:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:07.398 12:37:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[previous PATH] 00:09:07.398 12:37:06 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[previous PATH] 00:09:07.398 12:37:06 -- paths/export.sh@5 -- # export PATH 00:09:07.398 12:37:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[same PATH as above] 00:09:07.398 12:37:06 -- nvmf/common.sh@47 -- # : 0 00:09:07.398 12:37:06 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:07.398 12:37:06 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:07.398 12:37:06 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:07.398 12:37:06 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:07.398 12:37:06 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:07.398 12:37:06 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:07.398 12:37:06 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:07.398 12:37:06 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:07.398 12:37:06 -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:09:07.398 12:37:06 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:09:07.398 12:37:06 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:07.399 12:37:06 -- nvmf/common.sh@437 -- # prepare_net_devs 00:09:07.399 12:37:06 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:09:07.399 12:37:06 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:09:07.399 12:37:06 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:07.399 12:37:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:07.399 12:37:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:07.399 12:37:06 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:09:07.399 12:37:06 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:09:07.399 12:37:06 -- nvmf/common.sh@285 -- # xtrace_disable 00:09:07.399 12:37:06 -- common/autotest_common.sh@10 -- # set +x 00:09:09.929 12:37:08 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:09:09.929 12:37:08 -- nvmf/common.sh@291 -- # pci_devs=() 00:09:09.929 12:37:08 -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:09.929 12:37:08 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:09.929 12:37:08 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:09.929 12:37:08 -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:09.929 12:37:08 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:09.929 12:37:08 -- nvmf/common.sh@295 -- # net_devs=() 00:09:09.929 12:37:08 -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:09.929 12:37:08 -- nvmf/common.sh@296 -- # e810=() 00:09:09.929 12:37:08 -- nvmf/common.sh@296 -- # local -ga e810 00:09:09.929 12:37:08 -- nvmf/common.sh@297 -- # x722=()
00:09:09.929 12:37:08 -- nvmf/common.sh@297 -- # local -ga x722 00:09:09.929 12:37:08 -- nvmf/common.sh@298 -- # mlx=() 00:09:09.929 12:37:08 -- nvmf/common.sh@298 -- # local -ga mlx 00:09:09.929 12:37:08 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:09.929 12:37:08 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:09.929 12:37:08 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:09.929 12:37:08 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:09.929 12:37:08 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:09.929 12:37:08 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:09.929 12:37:08 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:09.929 12:37:08 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:09.929 12:37:08 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:09.929 12:37:08 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:09.929 12:37:08 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:09.929 12:37:08 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:09.929 12:37:08 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:09.929 12:37:08 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:09.929 12:37:08 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:09.929 12:37:08 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:09.929 12:37:08 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:09.929 12:37:08 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:09.929 12:37:08 -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:09:09.929 Found 0000:82:00.0 (0x8086 - 0x159b) 00:09:09.929 12:37:08 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:09.929 12:37:08 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:09.929 12:37:08 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:09.929 12:37:08 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:09.929 12:37:08 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:09.929 12:37:08 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:09.929 12:37:08 -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:09:09.929 Found 0000:82:00.1 (0x8086 - 0x159b) 00:09:09.929 12:37:08 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:09.929 12:37:08 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:09.929 12:37:08 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:09.929 12:37:08 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:09.929 12:37:08 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:09.929 12:37:08 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:09.929 12:37:08 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:09.929 12:37:08 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:09.929 12:37:08 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:10.188 12:37:08 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:10.188 12:37:08 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:10.188 12:37:08 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:10.188 12:37:08 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:09:10.188 Found net devices under 0000:82:00.0: cvl_0_0 00:09:10.188 12:37:08 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 
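The gather_supported_nvmf_pci_devs loop above just matched the first E810 port (0x8086:0x159b) against the Intel/Mellanox allow-list and resolved its PCI address to the kernel net device cvl_0_0 via sysfs; the second port follows. A minimal stand-alone sketch of that sysfs lookup (PCI addresses taken from the trace; SPDK's nvmf/common.sh does the same with bash arrays, so this is an illustrative rewrite, not its code):

    for pci in 0000:82:00.0 0000:82:00.1; do
        # each PCI network function lists its netdev name(s) under
        # /sys/bus/pci/devices/<address>/net/
        for netdev in "/sys/bus/pci/devices/$pci/net/"*; do
            [ -e "$netdev" ] || continue
            echo "Found net devices under $pci: ${netdev##*/}"
        done
    done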
00:09:10.188 12:37:08 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:10.188 12:37:08 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:10.188 12:37:08 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:10.188 12:37:08 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:10.188 12:37:09 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:09:10.188 Found net devices under 0000:82:00.1: cvl_0_1 00:09:10.188 12:37:09 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:10.188 12:37:09 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:09:10.188 12:37:09 -- nvmf/common.sh@403 -- # is_hw=yes 00:09:10.188 12:37:09 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:09:10.188 12:37:09 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:09:10.188 12:37:09 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:09:10.188 12:37:09 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:10.188 12:37:09 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:10.188 12:37:09 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:10.188 12:37:09 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:10.188 12:37:09 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:10.188 12:37:09 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:10.188 12:37:09 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:10.188 12:37:09 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:10.188 12:37:09 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:10.188 12:37:09 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:10.188 12:37:09 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:10.188 12:37:09 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:10.188 12:37:09 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:10.188 12:37:09 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:10.188 12:37:09 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:10.188 12:37:09 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:10.188 12:37:09 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:10.188 12:37:09 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:10.188 12:37:09 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:10.188 12:37:09 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:10.188 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:10.188 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.186 ms 00:09:10.188 00:09:10.188 --- 10.0.0.2 ping statistics --- 00:09:10.188 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:10.188 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:09:10.188 12:37:09 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:10.188 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:10.188 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:09:10.188 00:09:10.188 --- 10.0.0.1 ping statistics --- 00:09:10.188 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:10.188 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:09:10.188 12:37:09 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:10.188 12:37:09 -- nvmf/common.sh@411 -- # return 0 00:09:10.188 12:37:09 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:09:10.188 12:37:09 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:10.188 12:37:09 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:09:10.188 12:37:09 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:09:10.188 12:37:09 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:10.188 12:37:09 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:09:10.188 12:37:09 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:09:10.188 12:37:09 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:09:10.188 12:37:09 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:09:10.188 12:37:09 -- common/autotest_common.sh@710 -- # xtrace_disable 00:09:10.188 12:37:09 -- common/autotest_common.sh@10 -- # set +x 00:09:10.188 12:37:09 -- nvmf/common.sh@470 -- # nvmfpid=1116707 00:09:10.188 12:37:09 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:09:10.188 12:37:09 -- nvmf/common.sh@471 -- # waitforlisten 1116707 00:09:10.188 12:37:09 -- common/autotest_common.sh@817 -- # '[' -z 1116707 ']' 00:09:10.188 12:37:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:10.188 12:37:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:10.188 12:37:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:10.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:10.188 12:37:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:10.188 12:37:09 -- common/autotest_common.sh@10 -- # set +x 00:09:10.188 [2024-04-16 12:37:09.213159] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 00:09:10.188 [2024-04-16 12:37:09.213261] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:10.446 EAL: No free 2048 kB hugepages reported on node 1 00:09:10.446 [2024-04-16 12:37:09.300066] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:10.446 [2024-04-16 12:37:09.407810] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:10.446 [2024-04-16 12:37:09.407887] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:10.446 [2024-04-16 12:37:09.407902] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:10.446 [2024-04-16 12:37:09.407914] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:10.446 [2024-04-16 12:37:09.407939] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
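nvmf_tcp_init above wires the two ports back-to-back: the target port (cvl_0_0) is moved into its own network namespace while the initiator port (cvl_0_1) stays in the default one, so a single host can exercise real NIC-to-NIC NVMe/TCP traffic. A condensed sketch of the commands the trace just ran (device names and addresses exactly as above; run as root):

    ip netns add cvl_0_0_ns_spdk                       # namespace for the target
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port into the netns
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # let NVMe/TCP in
    ping -c 1 10.0.0.2                                 # initiator -> target check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator check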
00:09:10.446 [2024-04-16 12:37:09.408029] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:10.446 [2024-04-16 12:37:09.408034] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:10.705 12:37:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:10.705 12:37:09 -- common/autotest_common.sh@850 -- # return 0 00:09:10.705 12:37:09 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:09:10.705 12:37:09 -- common/autotest_common.sh@716 -- # xtrace_disable 00:09:10.705 12:37:09 -- common/autotest_common.sh@10 -- # set +x 00:09:10.705 12:37:09 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:10.705 12:37:09 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:10.705 12:37:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:10.705 12:37:09 -- common/autotest_common.sh@10 -- # set +x 00:09:10.705 [2024-04-16 12:37:09.555274] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:10.705 12:37:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:10.705 12:37:09 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:10.705 12:37:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:10.705 12:37:09 -- common/autotest_common.sh@10 -- # set +x 00:09:10.705 12:37:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:10.705 12:37:09 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:10.705 12:37:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:10.705 12:37:09 -- common/autotest_common.sh@10 -- # set +x 00:09:10.705 [2024-04-16 12:37:09.571462] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:10.705 12:37:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:10.705 12:37:09 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:09:10.705 12:37:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:10.705 12:37:09 -- common/autotest_common.sh@10 -- # set +x 00:09:10.705 NULL1 00:09:10.705 12:37:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:10.705 12:37:09 -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:10.705 12:37:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:10.705 12:37:09 -- common/autotest_common.sh@10 -- # set +x 00:09:10.705 Delay0 00:09:10.705 12:37:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:10.705 12:37:09 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:10.705 12:37:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:10.705 12:37:09 -- common/autotest_common.sh@10 -- # set +x 00:09:10.705 12:37:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:10.705 12:37:09 -- target/delete_subsystem.sh@28 -- # perf_pid=1116740 00:09:10.705 12:37:09 -- target/delete_subsystem.sh@30 -- # sleep 2 00:09:10.705 12:37:09 -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:09:10.705 EAL: No free 2048 kB hugepages reported on node 1 00:09:10.705 [2024-04-16 12:37:09.646157] 
subsystem.c:1431:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:09:12.600 12:37:11 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:12.600 12:37:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:12.600 12:37:11 -- common/autotest_common.sh@10 -- # set +x 00:09:12.859 [several dozen 'Read/Write completed with error (sct=0, sc=8)' completions interleaved with 'starting I/O failed: -6' entries, condensed] 00:09:12.859 [2024-04-16 12:37:11.776651] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112fe90 is same with the state(5) to be set 00:09:12.859 [several hundred further 'Read/Write completed with error (sct=0, sc=8)' completions and 'starting I/O failed: -6' entries, condensed] 00:09:12.859 [2024-04-16 12:37:11.777464] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9ed800c250 is same with the state(5) to be set 00:09:12.860 ['Read/Write completed with error (sct=0, sc=8)' completions, condensed] 00:09:13.803 [2024-04-16 12:37:12.744583] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1131ad0 is same with the state(5) to be set 00:09:13.803 [completions, condensed] 00:09:13.803 [2024-04-16 12:37:12.777115] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1130710 is same with the state(5) to be set 00:09:13.803 [completions, condensed] 00:09:13.803 [2024-04-16 12:37:12.778559] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9ed800bf90 is same with the state(5) to be set 00:09:13.803 [completions, condensed] 00:09:13.803 [2024-04-16 12:37:12.780133] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9ed800c510 is same with the state(5) to be set 00:09:13.803 [completions, condensed] 00:09:13.803 [2024-04-16 12:37:12.780585] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1130020 is same with the state(5) to be set 00:09:13.803 [2024-04-16 12:37:12.781586] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1131ad0 (9): Bad file descriptor 00:09:13.803
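The flood of failed completions condensed above is the point of the test: delete_subsystem.sh removes nqn.2016-06.io.spdk:cnode1 while spdk_nvme_perf still has a queue depth of 128 outstanding against it, so queued I/O completes with sc=8 and the initiator tears down its TCP qpairs. A sketch of that delete-under-load sequence (perf arguments and the 2-second ramp taken from the trace; calling scripts/rpc.py directly is an assumption, since the test routes it through its rpc_cmd wrapper):

    build/bin/spdk_nvme_perf -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &   # I/O load in the background
    perf_pid=$!
    sleep 2                                         # let the workload ramp up
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    wait "$perf_pid" || echo 'perf exited with errors, as expected'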
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:09:13.803 12:37:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:13.803 12:37:12 -- target/delete_subsystem.sh@34 -- # delay=0 00:09:13.803 12:37:12 -- target/delete_subsystem.sh@35 -- # kill -0 1116740 00:09:13.803 12:37:12 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:09:13.803 Initializing NVMe Controllers 00:09:13.803 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:13.803 Controller IO queue size 128, less than required. 00:09:13.803 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:13.803 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:09:13.803 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:09:13.803 Initialization complete. Launching workers. 00:09:13.803 ======================================================== 00:09:13.803 Latency(us) 00:09:13.803 Device Information : IOPS MiB/s Average min max 00:09:13.803 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 163.81 0.08 909528.32 590.63 1044522.09 00:09:13.803 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 162.81 0.08 938780.59 375.57 2001853.25 00:09:13.803 ======================================================== 00:09:13.803 Total : 326.62 0.16 924110.00 375.57 2001853.25 00:09:13.803 00:09:14.368 12:37:13 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:09:14.368 12:37:13 -- target/delete_subsystem.sh@35 -- # kill -0 1116740 00:09:14.368 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1116740) - No such process 00:09:14.368 12:37:13 -- target/delete_subsystem.sh@45 -- # NOT wait 1116740 00:09:14.368 12:37:13 -- common/autotest_common.sh@638 -- # local es=0 00:09:14.368 12:37:13 -- common/autotest_common.sh@640 -- # valid_exec_arg wait 1116740 00:09:14.368 12:37:13 -- common/autotest_common.sh@626 -- # local arg=wait 00:09:14.368 12:37:13 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:14.368 12:37:13 -- common/autotest_common.sh@630 -- # type -t wait 00:09:14.368 12:37:13 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:14.368 12:37:13 -- common/autotest_common.sh@641 -- # wait 1116740 00:09:14.368 12:37:13 -- common/autotest_common.sh@641 -- # es=1 00:09:14.368 12:37:13 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:09:14.368 12:37:13 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:09:14.368 12:37:13 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:09:14.368 12:37:13 -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:14.368 12:37:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:14.368 12:37:13 -- common/autotest_common.sh@10 -- # set +x 00:09:14.368 12:37:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:14.368 12:37:13 -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:14.368 12:37:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:14.368 12:37:13 -- common/autotest_common.sh@10 -- # set +x 00:09:14.368 [2024-04-16 12:37:13.305349] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:14.368 12:37:13 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:14.368 12:37:13 -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:14.368 12:37:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:14.368 12:37:13 -- common/autotest_common.sh@10 -- # set +x 00:09:14.368 12:37:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:14.368 12:37:13 -- target/delete_subsystem.sh@54 -- # perf_pid=1117257 00:09:14.368 12:37:13 -- target/delete_subsystem.sh@56 -- # delay=0 00:09:14.368 12:37:13 -- target/delete_subsystem.sh@57 -- # kill -0 1117257 00:09:14.368 12:37:13 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:14.368 12:37:13 -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:09:14.369 EAL: No free 2048 kB hugepages reported on node 1 00:09:14.369 [2024-04-16 12:37:13.369202] subsystem.c:1431:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:09:14.933 12:37:13 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:14.933 12:37:13 -- target/delete_subsystem.sh@57 -- # kill -0 1117257 00:09:14.933 12:37:13 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:15.497 12:37:14 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:15.497 12:37:14 -- target/delete_subsystem.sh@57 -- # kill -0 1117257 00:09:15.497 12:37:14 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:16.062 12:37:14 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:16.062 12:37:14 -- target/delete_subsystem.sh@57 -- # kill -0 1117257 00:09:16.062 12:37:14 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:16.319 12:37:15 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:16.319 12:37:15 -- target/delete_subsystem.sh@57 -- # kill -0 1117257 00:09:16.319 12:37:15 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:16.884 12:37:15 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:16.884 12:37:15 -- target/delete_subsystem.sh@57 -- # kill -0 1117257 00:09:16.884 12:37:15 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:17.449 12:37:16 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:17.449 12:37:16 -- target/delete_subsystem.sh@57 -- # kill -0 1117257 00:09:17.449 12:37:16 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:17.449 Initializing NVMe Controllers 00:09:17.449 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:17.449 Controller IO queue size 128, less than required. 00:09:17.449 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:17.449 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:09:17.449 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:09:17.449 Initialization complete. Launching workers. 
00:09:17.449 ======================================================== 00:09:17.449 Latency(us) 00:09:17.449 Device Information : IOPS MiB/s Average min max 00:09:17.449 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1005796.91 1000214.48 1042975.36 00:09:17.449 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003955.54 1000242.02 1043104.74 00:09:17.449 ======================================================== 00:09:17.449 Total : 256.00 0.12 1004876.22 1000214.48 1043104.74 00:09:17.449 00:09:18.017 12:37:16 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:18.017 12:37:16 -- target/delete_subsystem.sh@57 -- # kill -0 1117257 00:09:18.017 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1117257) - No such process 00:09:18.017 12:37:16 -- target/delete_subsystem.sh@67 -- # wait 1117257 00:09:18.017 12:37:16 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:09:18.017 12:37:16 -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:09:18.017 12:37:16 -- nvmf/common.sh@477 -- # nvmfcleanup 00:09:18.017 12:37:16 -- nvmf/common.sh@117 -- # sync 00:09:18.017 12:37:16 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:18.017 12:37:16 -- nvmf/common.sh@120 -- # set +e 00:09:18.017 12:37:16 -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:18.017 12:37:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:18.017 rmmod nvme_tcp 00:09:18.017 rmmod nvme_fabrics 00:09:18.017 rmmod nvme_keyring 00:09:18.017 12:37:16 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:18.017 12:37:16 -- nvmf/common.sh@124 -- # set -e 00:09:18.017 12:37:16 -- nvmf/common.sh@125 -- # return 0 00:09:18.017 12:37:16 -- nvmf/common.sh@478 -- # '[' -n 1116707 ']' 00:09:18.017 12:37:16 -- nvmf/common.sh@479 -- # killprocess 1116707 00:09:18.017 12:37:16 -- common/autotest_common.sh@936 -- # '[' -z 1116707 ']' 00:09:18.017 12:37:16 -- common/autotest_common.sh@940 -- # kill -0 1116707 00:09:18.017 12:37:16 -- common/autotest_common.sh@941 -- # uname 00:09:18.017 12:37:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:18.017 12:37:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1116707 00:09:18.017 12:37:16 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:18.017 12:37:16 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:18.017 12:37:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1116707' 00:09:18.017 killing process with pid 1116707 00:09:18.017 12:37:16 -- common/autotest_common.sh@955 -- # kill 1116707 00:09:18.017 12:37:16 -- common/autotest_common.sh@960 -- # wait 1116707 00:09:18.276 12:37:17 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:09:18.276 12:37:17 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:09:18.276 12:37:17 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:09:18.276 12:37:17 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:18.276 12:37:17 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:18.276 12:37:17 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:18.276 12:37:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:18.276 12:37:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:20.812 12:37:19 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:20.812 00:09:20.812 real 0m12.959s 00:09:20.812 user 0m28.010s 00:09:20.812 sys 0m3.419s 
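The second pass above polls rather than waits: kill -0 probes whether the perf process is still alive every half second, and the loop gives up after roughly 20 probes, which is why the trace alternates '(( delay++ > 20 ))' with 'sleep 0.5' until kill finally reports 'No such process'. The pattern, extracted (delay limit and interval as shown in the delete_subsystem.sh trace above):

    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do       # still running?
        (( delay++ > 20 )) && { echo 'perf did not exit in time'; exit 1; }
        sleep 0.5
    done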
00:09:20.812 12:37:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:20.812 12:37:19 -- common/autotest_common.sh@10 -- # set +x 00:09:20.812 ************************************ 00:09:20.812 END TEST nvmf_delete_subsystem 00:09:20.812 ************************************ 00:09:20.812 12:37:19 -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:09:20.812 12:37:19 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:20.812 12:37:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:20.812 12:37:19 -- common/autotest_common.sh@10 -- # set +x 00:09:20.812 ************************************ 00:09:20.812 START TEST nvmf_ns_masking 00:09:20.812 ************************************ 00:09:20.812 12:37:19 -- common/autotest_common.sh@1111 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:09:20.812 * Looking for test storage... 00:09:20.812 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:20.812 12:37:19 -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:20.812 12:37:19 -- nvmf/common.sh@7 -- # uname -s 00:09:20.812 12:37:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:20.812 12:37:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:20.812 12:37:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:20.812 12:37:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:20.812 12:37:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:20.812 12:37:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:20.812 12:37:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:20.812 12:37:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:20.812 12:37:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:20.812 12:37:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:20.812 12:37:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:09:20.812 12:37:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:09:20.812 12:37:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:20.812 12:37:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:20.812 12:37:19 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:20.812 12:37:19 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:20.812 12:37:19 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:20.812 12:37:19 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:20.812 12:37:19 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:20.812 12:37:19 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:20.812 12:37:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[same golangci/protoc/go triple repeated four more times]:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.813 12:37:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[previous PATH] 00:09:20.813 12:37:19 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[previous PATH] 00:09:20.813 12:37:19 -- paths/export.sh@5 -- # export PATH 00:09:20.813 12:37:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[same PATH as above] 00:09:20.813 12:37:19 -- nvmf/common.sh@47 -- # : 0 00:09:20.813 12:37:19 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:20.813 12:37:19 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:20.813 12:37:19 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:20.813 12:37:19 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:20.813 12:37:19 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:20.813 12:37:19 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:20.813 12:37:19 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:20.813 12:37:19 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:20.813 12:37:19 -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:20.813 12:37:19 -- target/ns_masking.sh@11 -- # loops=5 00:09:20.813 12:37:19 -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:09:20.813 12:37:19 -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1 00:09:20.813 12:37:19 -- target/ns_masking.sh@15 -- # uuidgen 00:09:20.813 12:37:19 -- target/ns_masking.sh@15 -- # HOSTID=658aec9c-85ff-4f5c-aa6f-3a157b8fdfa7 00:09:20.813 12:37:19 -- target/ns_masking.sh@44 -- # nvmftestinit 00:09:20.813 12:37:19 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:09:20.813 12:37:19 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:20.813 12:37:19 -- nvmf/common.sh@437 -- # prepare_net_devs 00:09:20.813 12:37:19 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:09:20.813 12:37:19 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:09:20.813 12:37:19 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:20.813 12:37:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:20.813 12:37:19 -- common/autotest_common.sh@22
-- # _remove_spdk_ns 00:09:20.813 12:37:19 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:09:20.813 12:37:19 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:09:20.813 12:37:19 -- nvmf/common.sh@285 -- # xtrace_disable 00:09:20.813 12:37:19 -- common/autotest_common.sh@10 -- # set +x 00:09:23.347 12:37:21 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:09:23.347 12:37:21 -- nvmf/common.sh@291 -- # pci_devs=() 00:09:23.347 12:37:21 -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:23.347 12:37:21 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:23.347 12:37:21 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:23.347 12:37:21 -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:23.347 12:37:21 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:23.347 12:37:21 -- nvmf/common.sh@295 -- # net_devs=() 00:09:23.347 12:37:21 -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:23.347 12:37:21 -- nvmf/common.sh@296 -- # e810=() 00:09:23.347 12:37:21 -- nvmf/common.sh@296 -- # local -ga e810 00:09:23.347 12:37:21 -- nvmf/common.sh@297 -- # x722=() 00:09:23.347 12:37:21 -- nvmf/common.sh@297 -- # local -ga x722 00:09:23.347 12:37:21 -- nvmf/common.sh@298 -- # mlx=() 00:09:23.347 12:37:21 -- nvmf/common.sh@298 -- # local -ga mlx 00:09:23.347 12:37:21 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:23.347 12:37:21 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:23.347 12:37:21 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:23.347 12:37:21 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:23.347 12:37:21 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:23.347 12:37:21 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:23.347 12:37:21 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:23.347 12:37:21 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:23.347 12:37:21 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:23.347 12:37:21 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:23.347 12:37:21 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:23.347 12:37:21 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:23.347 12:37:21 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:23.347 12:37:21 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:23.348 12:37:21 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:23.348 12:37:21 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:23.348 12:37:21 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:23.348 12:37:21 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:23.348 12:37:21 -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:09:23.348 Found 0000:82:00.0 (0x8086 - 0x159b) 00:09:23.348 12:37:21 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:23.348 12:37:21 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:23.348 12:37:21 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:23.348 12:37:21 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:23.348 12:37:21 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:23.348 12:37:21 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:23.348 12:37:21 -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:09:23.348 Found 0000:82:00.1 (0x8086 - 0x159b) 00:09:23.348 12:37:21 -- nvmf/common.sh@342 -- # 
[[ ice == unknown ]] 00:09:23.348 12:37:21 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:23.348 12:37:21 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:23.348 12:37:21 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:23.348 12:37:21 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:23.348 12:37:21 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:23.348 12:37:21 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:23.348 12:37:21 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:23.348 12:37:21 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:23.348 12:37:21 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:23.348 12:37:21 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:23.348 12:37:21 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:23.348 12:37:21 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:09:23.348 Found net devices under 0000:82:00.0: cvl_0_0 00:09:23.348 12:37:21 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:23.348 12:37:21 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:23.348 12:37:21 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:23.348 12:37:21 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:23.348 12:37:21 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:23.348 12:37:21 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:09:23.348 Found net devices under 0000:82:00.1: cvl_0_1 00:09:23.348 12:37:21 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:23.348 12:37:21 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:09:23.348 12:37:21 -- nvmf/common.sh@403 -- # is_hw=yes 00:09:23.348 12:37:21 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:09:23.348 12:37:21 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:09:23.348 12:37:21 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:09:23.348 12:37:21 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:23.348 12:37:21 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:23.348 12:37:21 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:23.348 12:37:21 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:23.348 12:37:21 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:23.348 12:37:21 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:23.348 12:37:21 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:23.348 12:37:21 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:23.348 12:37:21 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:23.348 12:37:21 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:23.348 12:37:22 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:23.348 12:37:22 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:23.348 12:37:22 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:23.348 12:37:22 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:23.348 12:37:22 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:23.348 12:37:22 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:23.348 12:37:22 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:23.348 12:37:22 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:23.348 12:37:22 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 
-i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:09:23.348 12:37:22 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:09:23.348 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:09:23.348 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.121 ms
00:09:23.348
00:09:23.348 --- 10.0.0.2 ping statistics ---
00:09:23.348 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:23.348 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms
00:09:23.348 12:37:22 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:09:23.348 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:09:23.348 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms
00:09:23.348
00:09:23.348 --- 10.0.0.1 ping statistics ---
00:09:23.348 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:23.348 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms
00:09:23.348 12:37:22 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:09:23.348 12:37:22 -- nvmf/common.sh@411 -- # return 0
00:09:23.348 12:37:22 -- nvmf/common.sh@439 -- # '[' '' == iso ']'
00:09:23.348 12:37:22 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:09:23.348 12:37:22 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]]
00:09:23.348 12:37:22 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]]
00:09:23.348 12:37:22 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:09:23.348 12:37:22 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']'
00:09:23.348 12:37:22 -- nvmf/common.sh@463 -- # modprobe nvme-tcp
00:09:23.348 12:37:22 -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF
00:09:23.348 12:37:22 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt
00:09:23.348 12:37:22 -- common/autotest_common.sh@710 -- # xtrace_disable
00:09:23.348 12:37:22 -- common/autotest_common.sh@10 -- # set +x
00:09:23.348 12:37:22 -- nvmf/common.sh@470 -- # nvmfpid=1119984
00:09:23.348 12:37:22 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:09:23.348 12:37:22 -- nvmf/common.sh@471 -- # waitforlisten 1119984
00:09:23.348 12:37:22 -- common/autotest_common.sh@817 -- # '[' -z 1119984 ']'
00:09:23.348 12:37:22 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:23.348 12:37:22 -- common/autotest_common.sh@822 -- # local max_retries=100
00:09:23.348 12:37:22 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:23.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:23.348 12:37:22 -- common/autotest_common.sh@826 -- # xtrace_disable
00:09:23.348 12:37:22 -- common/autotest_common.sh@10 -- # set +x
00:09:23.348 [2024-04-16 12:37:22.210989] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization...
00:09:23.348 [2024-04-16 12:37:22.211073] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:09:23.348 EAL: No free 2048 kB hugepages reported on node 1
00:09:23.348 [2024-04-16 12:37:22.286116] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4
00:09:23.348 [2024-04-16 12:37:22.393831] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
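This is the nvmf_tcp_init topology in action: one port of the E810 pair stays in the root namespace as the initiator (cvl_0_1, 10.0.0.1) while the other is moved into a private network namespace as the target (cvl_0_0, 10.0.0.2), so one machine can play both roles. A minimal standalone sketch of the same setup, assuming those interface names and root privileges:

ip netns add cvl_0_0_ns_spdk                   # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk      # move the target port out of the root ns
ip addr add 10.0.0.1/24 dev cvl_0_1            # initiator address stays in the root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
ping -c 1 10.0.0.2                             # root ns -> namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # namespace -> root ns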
00:09:23.348 [2024-04-16 12:37:22.393895] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:23.348 [2024-04-16 12:37:22.393909] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:23.348 [2024-04-16 12:37:22.393921] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:23.348 [2024-04-16 12:37:22.393931] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:23.348 [2024-04-16 12:37:22.393984] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:23.348 [2024-04-16 12:37:22.394039] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:23.348 [2024-04-16 12:37:22.394105] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:23.348 [2024-04-16 12:37:22.394107] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:23.606 12:37:22 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:23.606 12:37:22 -- common/autotest_common.sh@850 -- # return 0 00:09:23.606 12:37:22 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:09:23.606 12:37:22 -- common/autotest_common.sh@716 -- # xtrace_disable 00:09:23.606 12:37:22 -- common/autotest_common.sh@10 -- # set +x 00:09:23.606 12:37:22 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:23.606 12:37:22 -- target/ns_masking.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:23.863 [2024-04-16 12:37:22.821204] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:23.863 12:37:22 -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64 00:09:23.863 12:37:22 -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:09:23.863 12:37:22 -- target/ns_masking.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:09:24.121 Malloc1 00:09:24.121 12:37:23 -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:09:24.378 Malloc2 00:09:24.378 12:37:23 -- target/ns_masking.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:24.635 12:37:23 -- target/ns_masking.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:09:24.893 12:37:23 -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:25.154 [2024-04-16 12:37:24.095380] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:25.154 12:37:24 -- target/ns_masking.sh@61 -- # connect 00:09:25.154 12:37:24 -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 658aec9c-85ff-4f5c-aa6f-3a157b8fdfa7 -a 10.0.0.2 -s 4420 -i 4 00:09:25.444 12:37:24 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 00:09:25.444 12:37:24 -- common/autotest_common.sh@1184 -- # local i=0 00:09:25.444 12:37:24 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:09:25.444 12:37:24 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 
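Everything the masking test needs on the target is provisioned over JSON-RPC before the host connects, as traced above. Condensed into a plain sequence (rpc.py here stands for the full /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py path):

rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc1     # 64 MB backing bdev, 512-byte blocks
rpc.py bdev_malloc_create 64 512 -b Malloc2
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# host side; -q sets the host NQN that nvmf_ns_add_host/nvmf_ns_remove_host later
# match against, and -i 4 requests four I/O queues
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    -I 658aec9c-85ff-4f5c-aa6f-3a157b8fdfa7 -a 10.0.0.2 -s 4420 -i 4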
00:09:25.444 12:37:24 -- common/autotest_common.sh@1191 -- # sleep 2 00:09:27.341 12:37:26 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:09:27.341 12:37:26 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:09:27.341 12:37:26 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:09:27.341 12:37:26 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:09:27.341 12:37:26 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:09:27.341 12:37:26 -- common/autotest_common.sh@1194 -- # return 0 00:09:27.341 12:37:26 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:09:27.341 12:37:26 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:09:27.341 12:37:26 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:09:27.341 12:37:26 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:09:27.341 12:37:26 -- target/ns_masking.sh@62 -- # ns_is_visible 0x1 00:09:27.341 12:37:26 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:27.341 12:37:26 -- target/ns_masking.sh@39 -- # grep 0x1 00:09:27.341 [ 0]:0x1 00:09:27.341 12:37:26 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:27.341 12:37:26 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:27.341 12:37:26 -- target/ns_masking.sh@40 -- # nguid=677c119065d347a291f618357d1453a3 00:09:27.341 12:37:26 -- target/ns_masking.sh@41 -- # [[ 677c119065d347a291f618357d1453a3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:27.341 12:37:26 -- target/ns_masking.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:09:27.599 12:37:26 -- target/ns_masking.sh@66 -- # ns_is_visible 0x1 00:09:27.599 12:37:26 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:27.599 12:37:26 -- target/ns_masking.sh@39 -- # grep 0x1 00:09:27.599 [ 0]:0x1 00:09:27.599 12:37:26 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:27.599 12:37:26 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:27.856 12:37:26 -- target/ns_masking.sh@40 -- # nguid=677c119065d347a291f618357d1453a3 00:09:27.856 12:37:26 -- target/ns_masking.sh@41 -- # [[ 677c119065d347a291f618357d1453a3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:27.856 12:37:26 -- target/ns_masking.sh@67 -- # ns_is_visible 0x2 00:09:27.856 12:37:26 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:27.856 12:37:26 -- target/ns_masking.sh@39 -- # grep 0x2 00:09:27.856 [ 1]:0x2 00:09:27.856 12:37:26 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:27.856 12:37:26 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:27.856 12:37:26 -- target/ns_masking.sh@40 -- # nguid=dba985f0c0714ef58bff0e868d1a3bdd 00:09:27.856 12:37:26 -- target/ns_masking.sh@41 -- # [[ dba985f0c0714ef58bff0e868d1a3bdd != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:27.856 12:37:26 -- target/ns_masking.sh@69 -- # disconnect 00:09:27.856 12:37:26 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:28.114 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:28.114 12:37:27 -- target/ns_masking.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:28.372 12:37:27 -- target/ns_masking.sh@74 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:09:28.629 12:37:27 -- target/ns_masking.sh@77 -- # connect 1 00:09:28.629 12:37:27 -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 658aec9c-85ff-4f5c-aa6f-3a157b8fdfa7 -a 10.0.0.2 -s 4420 -i 4 00:09:28.629 12:37:27 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1 00:09:28.629 12:37:27 -- common/autotest_common.sh@1184 -- # local i=0 00:09:28.629 12:37:27 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:09:28.629 12:37:27 -- common/autotest_common.sh@1186 -- # [[ -n 1 ]] 00:09:28.629 12:37:27 -- common/autotest_common.sh@1187 -- # nvme_device_counter=1 00:09:28.629 12:37:27 -- common/autotest_common.sh@1191 -- # sleep 2 00:09:31.155 12:37:29 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:09:31.155 12:37:29 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:09:31.155 12:37:29 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:09:31.155 12:37:29 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:09:31.155 12:37:29 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:09:31.155 12:37:29 -- common/autotest_common.sh@1194 -- # return 0 00:09:31.155 12:37:29 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:09:31.155 12:37:29 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:09:31.155 12:37:29 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:09:31.155 12:37:29 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:09:31.155 12:37:29 -- target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1 00:09:31.155 12:37:29 -- common/autotest_common.sh@638 -- # local es=0 00:09:31.155 12:37:29 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:09:31.155 12:37:29 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:09:31.155 12:37:29 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:31.155 12:37:29 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:09:31.155 12:37:29 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:31.155 12:37:29 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:09:31.155 12:37:29 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:31.155 12:37:29 -- target/ns_masking.sh@39 -- # grep 0x1 00:09:31.155 12:37:29 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:31.155 12:37:29 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:31.155 12:37:29 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:09:31.155 12:37:29 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:31.155 12:37:29 -- common/autotest_common.sh@641 -- # es=1 00:09:31.155 12:37:29 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:09:31.155 12:37:29 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:09:31.155 12:37:29 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:09:31.156 12:37:29 -- target/ns_masking.sh@79 -- # ns_is_visible 0x2 00:09:31.156 12:37:29 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:31.156 12:37:29 -- target/ns_masking.sh@39 -- # grep 0x2 00:09:31.156 [ 0]:0x2 00:09:31.156 12:37:29 -- target/ns_masking.sh@40 -- # nvme id-ns 
/dev/nvme0 -n 0x2 -o json 00:09:31.156 12:37:29 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:31.156 12:37:29 -- target/ns_masking.sh@40 -- # nguid=dba985f0c0714ef58bff0e868d1a3bdd 00:09:31.156 12:37:29 -- target/ns_masking.sh@41 -- # [[ dba985f0c0714ef58bff0e868d1a3bdd != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:31.156 12:37:29 -- target/ns_masking.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:31.156 12:37:30 -- target/ns_masking.sh@83 -- # ns_is_visible 0x1 00:09:31.156 12:37:30 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:31.156 12:37:30 -- target/ns_masking.sh@39 -- # grep 0x1 00:09:31.156 [ 0]:0x1 00:09:31.156 12:37:30 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:31.156 12:37:30 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:31.156 12:37:30 -- target/ns_masking.sh@40 -- # nguid=677c119065d347a291f618357d1453a3 00:09:31.156 12:37:30 -- target/ns_masking.sh@41 -- # [[ 677c119065d347a291f618357d1453a3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:31.156 12:37:30 -- target/ns_masking.sh@84 -- # ns_is_visible 0x2 00:09:31.156 12:37:30 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:31.156 12:37:30 -- target/ns_masking.sh@39 -- # grep 0x2 00:09:31.156 [ 1]:0x2 00:09:31.156 12:37:30 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:31.156 12:37:30 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:31.156 12:37:30 -- target/ns_masking.sh@40 -- # nguid=dba985f0c0714ef58bff0e868d1a3bdd 00:09:31.156 12:37:30 -- target/ns_masking.sh@41 -- # [[ dba985f0c0714ef58bff0e868d1a3bdd != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:31.156 12:37:30 -- target/ns_masking.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:31.413 12:37:30 -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1 00:09:31.413 12:37:30 -- common/autotest_common.sh@638 -- # local es=0 00:09:31.413 12:37:30 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:09:31.413 12:37:30 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:09:31.413 12:37:30 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:31.413 12:37:30 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:09:31.413 12:37:30 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:31.413 12:37:30 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:09:31.413 12:37:30 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:31.413 12:37:30 -- target/ns_masking.sh@39 -- # grep 0x1 00:09:31.671 12:37:30 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:31.671 12:37:30 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:31.671 12:37:30 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:09:31.671 12:37:30 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:31.671 12:37:30 -- common/autotest_common.sh@641 -- # es=1 00:09:31.671 12:37:30 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:09:31.671 12:37:30 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:09:31.671 12:37:30 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:09:31.671 12:37:30 -- 
target/ns_masking.sh@89 -- # ns_is_visible 0x2 00:09:31.671 12:37:30 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:31.671 12:37:30 -- target/ns_masking.sh@39 -- # grep 0x2 00:09:31.671 [ 0]:0x2 00:09:31.671 12:37:30 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:31.671 12:37:30 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:31.671 12:37:30 -- target/ns_masking.sh@40 -- # nguid=dba985f0c0714ef58bff0e868d1a3bdd 00:09:31.671 12:37:30 -- target/ns_masking.sh@41 -- # [[ dba985f0c0714ef58bff0e868d1a3bdd != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:31.671 12:37:30 -- target/ns_masking.sh@91 -- # disconnect 00:09:31.671 12:37:30 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:31.671 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:31.671 12:37:30 -- target/ns_masking.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:31.929 12:37:30 -- target/ns_masking.sh@95 -- # connect 2 00:09:31.929 12:37:30 -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 658aec9c-85ff-4f5c-aa6f-3a157b8fdfa7 -a 10.0.0.2 -s 4420 -i 4 00:09:32.186 12:37:31 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2 00:09:32.186 12:37:31 -- common/autotest_common.sh@1184 -- # local i=0 00:09:32.186 12:37:31 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:09:32.186 12:37:31 -- common/autotest_common.sh@1186 -- # [[ -n 2 ]] 00:09:32.186 12:37:31 -- common/autotest_common.sh@1187 -- # nvme_device_counter=2 00:09:32.186 12:37:31 -- common/autotest_common.sh@1191 -- # sleep 2 00:09:34.084 12:37:33 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:09:34.084 12:37:33 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:09:34.084 12:37:33 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:09:34.342 12:37:33 -- common/autotest_common.sh@1193 -- # nvme_devices=2 00:09:34.342 12:37:33 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:09:34.342 12:37:33 -- common/autotest_common.sh@1194 -- # return 0 00:09:34.342 12:37:33 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:09:34.342 12:37:33 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:09:34.342 12:37:33 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:09:34.342 12:37:33 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:09:34.342 12:37:33 -- target/ns_masking.sh@96 -- # ns_is_visible 0x1 00:09:34.342 12:37:33 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:34.342 12:37:33 -- target/ns_masking.sh@39 -- # grep 0x1 00:09:34.342 [ 0]:0x1 00:09:34.342 12:37:33 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:34.342 12:37:33 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:34.342 12:37:33 -- target/ns_masking.sh@40 -- # nguid=677c119065d347a291f618357d1453a3 00:09:34.342 12:37:33 -- target/ns_masking.sh@41 -- # [[ 677c119065d347a291f618357d1453a3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:34.342 12:37:33 -- target/ns_masking.sh@97 -- # ns_is_visible 0x2 00:09:34.342 12:37:33 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:34.342 12:37:33 -- target/ns_masking.sh@39 -- # grep 0x2 00:09:34.342 [ 1]:0x2 
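ns_is_visible, exercised repeatedly here, asks two questions of the controller: does the NSID show up in nvme list-ns output, and does Identify Namespace return a non-zero NGUID. A sketch of the check, assuming the controller enumerated as /dev/nvme0 and jq available:

nvme list-ns /dev/nvme0 | grep 0x1      # NSID 1 in the active namespace list?
nguid=$(nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid)
[[ $nguid != "00000000000000000000000000000000" ]] && echo "nsid 1 visible (nguid $nguid)"
# visibility is flipped per host from the target side:
#   rpc.py nvmf_ns_add_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
#   rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1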
00:09:34.342 12:37:33 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:34.342 12:37:33 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:34.342 12:37:33 -- target/ns_masking.sh@40 -- # nguid=dba985f0c0714ef58bff0e868d1a3bdd 00:09:34.342 12:37:33 -- target/ns_masking.sh@41 -- # [[ dba985f0c0714ef58bff0e868d1a3bdd != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:34.342 12:37:33 -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:34.600 12:37:33 -- target/ns_masking.sh@101 -- # NOT ns_is_visible 0x1 00:09:34.600 12:37:33 -- common/autotest_common.sh@638 -- # local es=0 00:09:34.600 12:37:33 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:09:34.600 12:37:33 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:09:34.600 12:37:33 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:34.600 12:37:33 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:09:34.600 12:37:33 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:34.600 12:37:33 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:09:34.600 12:37:33 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:34.600 12:37:33 -- target/ns_masking.sh@39 -- # grep 0x1 00:09:34.600 12:37:33 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:34.600 12:37:33 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:34.600 12:37:33 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:09:34.600 12:37:33 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:34.600 12:37:33 -- common/autotest_common.sh@641 -- # es=1 00:09:34.600 12:37:33 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:09:34.600 12:37:33 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:09:34.600 12:37:33 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:09:34.600 12:37:33 -- target/ns_masking.sh@102 -- # ns_is_visible 0x2 00:09:34.600 12:37:33 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:34.600 12:37:33 -- target/ns_masking.sh@39 -- # grep 0x2 00:09:34.600 [ 0]:0x2 00:09:34.600 12:37:33 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:34.600 12:37:33 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:34.600 12:37:33 -- target/ns_masking.sh@40 -- # nguid=dba985f0c0714ef58bff0e868d1a3bdd 00:09:34.600 12:37:33 -- target/ns_masking.sh@41 -- # [[ dba985f0c0714ef58bff0e868d1a3bdd != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:34.600 12:37:33 -- target/ns_masking.sh@105 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:09:34.600 12:37:33 -- common/autotest_common.sh@638 -- # local es=0 00:09:34.600 12:37:33 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:09:34.600 12:37:33 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:34.600 12:37:33 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:34.600 12:37:33 -- common/autotest_common.sh@630 -- # type -t 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:09:34.600 12:37:33 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in
00:09:34.600 12:37:33 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:09:34.600 12:37:33 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in
00:09:34.600 12:37:33 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:09:34.600 12:37:33 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]]
00:09:34.600 12:37:33 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1
00:09:34.858 [2024-04-16 12:37:33.874501] nvmf_rpc.c:1770:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2
00:09:34.858 request:
00:09:34.858 {
00:09:34.858 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:09:34.858 "nsid": 2,
00:09:34.858 "host": "nqn.2016-06.io.spdk:host1",
00:09:34.858 "method": "nvmf_ns_remove_host",
00:09:34.858 "req_id": 1
00:09:34.858 }
00:09:34.858 Got JSON-RPC error response
00:09:34.858 response:
00:09:34.858 {
00:09:34.858 "code": -32602,
00:09:34.858 "message": "Invalid parameters"
00:09:34.858 }
00:09:34.858 12:37:33 -- common/autotest_common.sh@641 -- # es=1
00:09:34.858 12:37:33 -- common/autotest_common.sh@649 -- # (( es > 128 ))
00:09:34.858 12:37:33 -- common/autotest_common.sh@660 -- # [[ -n '' ]]
00:09:34.858 12:37:33 -- common/autotest_common.sh@665 -- # (( !es == 0 ))
00:09:34.858 12:37:33 -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1
00:09:34.858 12:37:33 -- common/autotest_common.sh@638 -- # local es=0
00:09:34.858 12:37:33 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1
00:09:34.858 12:37:33 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible
00:09:34.858 12:37:33 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in
00:09:34.858 12:37:33 -- common/autotest_common.sh@630 -- # type -t ns_is_visible
00:09:34.858 12:37:33 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in
00:09:34.858 12:37:33 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1
00:09:34.858 12:37:33 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0
00:09:34.858 12:37:33 -- target/ns_masking.sh@39 -- # grep 0x1
00:09:34.858 12:37:33 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:09:34.858 12:37:33 -- target/ns_masking.sh@40 -- # jq -r .nguid
00:09:35.116 12:37:33 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000
00:09:35.116 12:37:33 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:09:35.116 12:37:33 -- common/autotest_common.sh@641 -- # es=1
00:09:35.116 12:37:33 -- common/autotest_common.sh@649 -- # (( es > 128 ))
00:09:35.116 12:37:33 -- common/autotest_common.sh@660 -- # [[ -n '' ]]
00:09:35.116 12:37:33 -- common/autotest_common.sh@665 -- # (( !es == 0 ))
00:09:35.116 12:37:33 -- target/ns_masking.sh@107 -- # ns_is_visible 0x2
00:09:35.116 12:37:33 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0
00:09:35.116 12:37:33 -- target/ns_masking.sh@39 -- # grep 0x2
00:09:35.116 [ 0]:0x2
00:09:35.116 12:37:33 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:09:35.116 12:37:33 --
target/ns_masking.sh@40 -- # jq -r .nguid 00:09:35.116 12:37:33 -- target/ns_masking.sh@40 -- # nguid=dba985f0c0714ef58bff0e868d1a3bdd 00:09:35.116 12:37:33 -- target/ns_masking.sh@41 -- # [[ dba985f0c0714ef58bff0e868d1a3bdd != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:35.116 12:37:33 -- target/ns_masking.sh@108 -- # disconnect 00:09:35.116 12:37:33 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:35.116 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:35.116 12:37:34 -- target/ns_masking.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:35.373 12:37:34 -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:09:35.373 12:37:34 -- target/ns_masking.sh@114 -- # nvmftestfini 00:09:35.373 12:37:34 -- nvmf/common.sh@477 -- # nvmfcleanup 00:09:35.374 12:37:34 -- nvmf/common.sh@117 -- # sync 00:09:35.374 12:37:34 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:35.374 12:37:34 -- nvmf/common.sh@120 -- # set +e 00:09:35.374 12:37:34 -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:35.374 12:37:34 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:35.374 rmmod nvme_tcp 00:09:35.374 rmmod nvme_fabrics 00:09:35.374 rmmod nvme_keyring 00:09:35.374 12:37:34 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:35.374 12:37:34 -- nvmf/common.sh@124 -- # set -e 00:09:35.374 12:37:34 -- nvmf/common.sh@125 -- # return 0 00:09:35.374 12:37:34 -- nvmf/common.sh@478 -- # '[' -n 1119984 ']' 00:09:35.374 12:37:34 -- nvmf/common.sh@479 -- # killprocess 1119984 00:09:35.374 12:37:34 -- common/autotest_common.sh@936 -- # '[' -z 1119984 ']' 00:09:35.374 12:37:34 -- common/autotest_common.sh@940 -- # kill -0 1119984 00:09:35.374 12:37:34 -- common/autotest_common.sh@941 -- # uname 00:09:35.374 12:37:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:35.374 12:37:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1119984 00:09:35.374 12:37:34 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:35.374 12:37:34 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:35.374 12:37:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1119984' 00:09:35.374 killing process with pid 1119984 00:09:35.374 12:37:34 -- common/autotest_common.sh@955 -- # kill 1119984 00:09:35.374 12:37:34 -- common/autotest_common.sh@960 -- # wait 1119984 00:09:35.631 12:37:34 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:09:35.631 12:37:34 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:09:35.631 12:37:34 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:09:35.631 12:37:34 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:35.631 12:37:34 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:35.631 12:37:34 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:35.631 12:37:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:35.631 12:37:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:38.167 12:37:36 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:38.167 00:09:38.167 real 0m17.339s 00:09:38.167 user 0m52.299s 00:09:38.167 sys 0m4.242s 00:09:38.167 12:37:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:38.167 12:37:36 -- common/autotest_common.sh@10 -- # set +x 00:09:38.167 ************************************ 00:09:38.167 END TEST nvmf_ns_masking 00:09:38.167 
************************************ 00:09:38.167 12:37:36 -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:09:38.167 12:37:36 -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:09:38.167 12:37:36 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:38.167 12:37:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:38.167 12:37:36 -- common/autotest_common.sh@10 -- # set +x 00:09:38.167 ************************************ 00:09:38.167 START TEST nvmf_nvme_cli 00:09:38.167 ************************************ 00:09:38.167 12:37:36 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:09:38.167 * Looking for test storage... 00:09:38.167 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:38.167 12:37:36 -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:38.167 12:37:36 -- nvmf/common.sh@7 -- # uname -s 00:09:38.167 12:37:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:38.167 12:37:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:38.167 12:37:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:38.167 12:37:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:38.167 12:37:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:38.167 12:37:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:38.167 12:37:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:38.167 12:37:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:38.167 12:37:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:38.167 12:37:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:38.167 12:37:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:09:38.167 12:37:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:09:38.167 12:37:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:38.167 12:37:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:38.167 12:37:36 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:38.167 12:37:36 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:38.167 12:37:36 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:38.167 12:37:36 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:38.167 12:37:36 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:38.167 12:37:36 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:38.168 12:37:36 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:38.168 12:37:36 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:38.168 12:37:36 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:38.168 12:37:36 -- paths/export.sh@5 -- # export PATH 00:09:38.168 12:37:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:38.168 12:37:36 -- nvmf/common.sh@47 -- # : 0 00:09:38.168 12:37:36 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:38.168 12:37:36 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:38.168 12:37:36 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:38.168 12:37:36 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:38.168 12:37:36 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:38.168 12:37:36 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:38.168 12:37:36 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:38.168 12:37:36 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:38.168 12:37:36 -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:38.168 12:37:36 -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:38.168 12:37:36 -- target/nvme_cli.sh@14 -- # devs=() 00:09:38.168 12:37:36 -- target/nvme_cli.sh@16 -- # nvmftestinit 00:09:38.168 12:37:36 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:09:38.168 12:37:36 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:38.168 12:37:36 -- nvmf/common.sh@437 -- # prepare_net_devs 00:09:38.168 12:37:36 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:09:38.168 12:37:36 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:09:38.168 12:37:36 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:38.168 12:37:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:38.168 12:37:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:38.168 12:37:36 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:09:38.168 12:37:36 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:09:38.168 12:37:36 -- nvmf/common.sh@285 -- # xtrace_disable 00:09:38.168 12:37:36 -- common/autotest_common.sh@10 -- # set +x 00:09:40.714 12:37:39 -- 
nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:09:40.714 12:37:39 -- nvmf/common.sh@291 -- # pci_devs=() 00:09:40.714 12:37:39 -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:40.714 12:37:39 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:40.714 12:37:39 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:40.714 12:37:39 -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:40.714 12:37:39 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:40.714 12:37:39 -- nvmf/common.sh@295 -- # net_devs=() 00:09:40.714 12:37:39 -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:40.714 12:37:39 -- nvmf/common.sh@296 -- # e810=() 00:09:40.714 12:37:39 -- nvmf/common.sh@296 -- # local -ga e810 00:09:40.714 12:37:39 -- nvmf/common.sh@297 -- # x722=() 00:09:40.714 12:37:39 -- nvmf/common.sh@297 -- # local -ga x722 00:09:40.714 12:37:39 -- nvmf/common.sh@298 -- # mlx=() 00:09:40.714 12:37:39 -- nvmf/common.sh@298 -- # local -ga mlx 00:09:40.714 12:37:39 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:40.714 12:37:39 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:40.714 12:37:39 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:40.714 12:37:39 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:40.714 12:37:39 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:40.714 12:37:39 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:40.714 12:37:39 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:40.714 12:37:39 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:40.714 12:37:39 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:40.714 12:37:39 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:40.714 12:37:39 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:40.714 12:37:39 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:40.714 12:37:39 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:40.714 12:37:39 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:40.714 12:37:39 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:40.714 12:37:39 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:40.714 12:37:39 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:40.714 12:37:39 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:40.714 12:37:39 -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:09:40.714 Found 0000:82:00.0 (0x8086 - 0x159b) 00:09:40.714 12:37:39 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:40.714 12:37:39 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:40.714 12:37:39 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:40.714 12:37:39 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:40.714 12:37:39 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:40.714 12:37:39 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:40.714 12:37:39 -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:09:40.714 Found 0000:82:00.1 (0x8086 - 0x159b) 00:09:40.714 12:37:39 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:40.714 12:37:39 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:40.714 12:37:39 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:40.714 12:37:39 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:40.714 12:37:39 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 
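gather_supported_nvmf_pci_devs, traced above, buckets NICs by PCI vendor:device ID (0x8086:0x159b is one of the E810 'ice' IDs the script checks for) and then resolves each function to its kernel netdev through sysfs. Roughly, as a sketch under the same assumptions:

for pci in /sys/bus/pci/devices/*; do
    [[ $(<"$pci/vendor") == 0x8086 && $(<"$pci/device") == 0x159b ]] || continue
    echo "Found ${pci##*/} ($(<"$pci/vendor") - $(<"$pci/device"))"
    ls "$pci/net" 2>/dev/null       # netdev name(s), cvl_0_0 / cvl_0_1 in this run
done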
00:09:40.714 12:37:39 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:40.714 12:37:39 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:40.714 12:37:39 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:40.714 12:37:39 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:40.714 12:37:39 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:40.714 12:37:39 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:40.714 12:37:39 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:40.714 12:37:39 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:09:40.714 Found net devices under 0000:82:00.0: cvl_0_0 00:09:40.714 12:37:39 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:40.714 12:37:39 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:40.714 12:37:39 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:40.714 12:37:39 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:40.714 12:37:39 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:40.714 12:37:39 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:09:40.714 Found net devices under 0000:82:00.1: cvl_0_1 00:09:40.714 12:37:39 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:40.714 12:37:39 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:09:40.714 12:37:39 -- nvmf/common.sh@403 -- # is_hw=yes 00:09:40.714 12:37:39 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:09:40.714 12:37:39 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:09:40.714 12:37:39 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:09:40.714 12:37:39 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:40.714 12:37:39 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:40.714 12:37:39 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:40.714 12:37:39 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:40.714 12:37:39 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:40.714 12:37:39 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:40.714 12:37:39 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:40.714 12:37:39 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:40.714 12:37:39 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:40.714 12:37:39 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:40.714 12:37:39 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:40.714 12:37:39 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:40.714 12:37:39 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:40.714 12:37:39 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:40.714 12:37:39 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:40.714 12:37:39 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:40.714 12:37:39 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:40.714 12:37:39 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:40.714 12:37:39 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:40.714 12:37:39 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:40.714 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:40.714 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.274 ms 00:09:40.714 00:09:40.714 --- 10.0.0.2 ping statistics --- 00:09:40.714 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:40.714 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:09:40.714 12:37:39 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:40.714 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:40.714 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.189 ms 00:09:40.714 00:09:40.714 --- 10.0.0.1 ping statistics --- 00:09:40.714 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:40.714 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:09:40.714 12:37:39 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:40.714 12:37:39 -- nvmf/common.sh@411 -- # return 0 00:09:40.714 12:37:39 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:09:40.714 12:37:39 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:40.714 12:37:39 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:09:40.714 12:37:39 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:09:40.714 12:37:39 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:40.714 12:37:39 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:09:40.714 12:37:39 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:09:40.714 12:37:39 -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:09:40.714 12:37:39 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:09:40.714 12:37:39 -- common/autotest_common.sh@710 -- # xtrace_disable 00:09:40.714 12:37:39 -- common/autotest_common.sh@10 -- # set +x 00:09:40.714 12:37:39 -- nvmf/common.sh@470 -- # nvmfpid=1123888 00:09:40.714 12:37:39 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:40.714 12:37:39 -- nvmf/common.sh@471 -- # waitforlisten 1123888 00:09:40.715 12:37:39 -- common/autotest_common.sh@817 -- # '[' -z 1123888 ']' 00:09:40.715 12:37:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:40.715 12:37:39 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:40.715 12:37:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:40.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:40.715 12:37:39 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:40.715 12:37:39 -- common/autotest_common.sh@10 -- # set +x 00:09:40.715 [2024-04-16 12:37:39.646383] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 00:09:40.715 [2024-04-16 12:37:39.646464] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:40.715 EAL: No free 2048 kB hugepages reported on node 1 00:09:40.715 [2024-04-16 12:37:39.727428] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:40.973 [2024-04-16 12:37:39.846147] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:40.973 [2024-04-16 12:37:39.846222] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
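nvmfappstart, whose EAL banner appears above, amounts to launching nvmf_tgt inside the target namespace and blocking until its RPC socket answers. A simplified stand-in (the real waitforlisten also enforces a retry limit before giving up):

ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# /var/tmp/spdk.sock is a filesystem-bound unix socket, so it is reachable from
# the root namespace even though the process runs inside cvl_0_0_ns_spdk
until [[ -S /var/tmp/spdk.sock ]]; do sleep 0.5; done
trap 'kill $nvmfpid' SIGINT SIGTERM EXIT    # crude cleanup; the harness uses nvmftestfini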
00:09:40.973 [2024-04-16 12:37:39.846239] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:40.973 [2024-04-16 12:37:39.846253] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:40.973 [2024-04-16 12:37:39.846265] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:40.973 [2024-04-16 12:37:39.846352] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:40.973 [2024-04-16 12:37:39.846410] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:40.973 [2024-04-16 12:37:39.846462] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:40.973 [2024-04-16 12:37:39.846465] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:41.538 12:37:40 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:41.538 12:37:40 -- common/autotest_common.sh@850 -- # return 0 00:09:41.538 12:37:40 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:09:41.538 12:37:40 -- common/autotest_common.sh@716 -- # xtrace_disable 00:09:41.538 12:37:40 -- common/autotest_common.sh@10 -- # set +x 00:09:41.796 12:37:40 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:41.796 12:37:40 -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:41.796 12:37:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:41.796 12:37:40 -- common/autotest_common.sh@10 -- # set +x 00:09:41.796 [2024-04-16 12:37:40.616641] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:41.796 12:37:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:41.796 12:37:40 -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:41.796 12:37:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:41.796 12:37:40 -- common/autotest_common.sh@10 -- # set +x 00:09:41.796 Malloc0 00:09:41.796 12:37:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:41.796 12:37:40 -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:09:41.796 12:37:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:41.796 12:37:40 -- common/autotest_common.sh@10 -- # set +x 00:09:41.796 Malloc1 00:09:41.796 12:37:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:41.796 12:37:40 -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:09:41.796 12:37:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:41.796 12:37:40 -- common/autotest_common.sh@10 -- # set +x 00:09:41.796 12:37:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:41.796 12:37:40 -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:41.796 12:37:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:41.796 12:37:40 -- common/autotest_common.sh@10 -- # set +x 00:09:41.796 12:37:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:41.796 12:37:40 -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:41.796 12:37:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:41.796 12:37:40 -- common/autotest_common.sh@10 -- # set +x 00:09:41.796 12:37:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:41.796 12:37:40 -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t 
tcp -a 10.0.0.2 -s 4420
00:09:41.796 12:37:40 -- common/autotest_common.sh@549 -- # xtrace_disable
00:09:41.796 12:37:40 -- common/autotest_common.sh@10 -- # set +x
00:09:41.796 [2024-04-16 12:37:40.700922] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:09:41.796 12:37:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:09:41.796 12:37:40 -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:09:41.796 12:37:40 -- common/autotest_common.sh@549 -- # xtrace_disable
00:09:41.796 12:37:40 -- common/autotest_common.sh@10 -- # set +x
00:09:41.796 12:37:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:09:41.796 12:37:40 -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -a 10.0.0.2 -s 4420
00:09:41.796
00:09:41.796 Discovery Log Number of Records 2, Generation counter 2
00:09:41.796 =====Discovery Log Entry 0======
00:09:41.796 trtype: tcp
00:09:41.796 adrfam: ipv4
00:09:41.796 subtype: current discovery subsystem
00:09:41.796 treq: not required
00:09:41.796 portid: 0
00:09:41.796 trsvcid: 4420
00:09:41.796 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:09:41.796 traddr: 10.0.0.2
00:09:41.796 eflags: explicit discovery connections, duplicate discovery information
00:09:41.796 sectype: none
00:09:41.796 =====Discovery Log Entry 1======
00:09:41.796 trtype: tcp
00:09:41.796 adrfam: ipv4
00:09:41.796 subtype: nvme subsystem
00:09:41.796 treq: not required
00:09:41.796 portid: 0
00:09:41.796 trsvcid: 4420
00:09:41.796 subnqn: nqn.2016-06.io.spdk:cnode1
00:09:41.796 traddr: 10.0.0.2
00:09:41.796 eflags: none
00:09:41.796 sectype: none
00:09:41.796 12:37:40 -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs))
00:09:41.796 12:37:40 -- target/nvme_cli.sh@31 -- # get_nvme_devs
00:09:41.796 12:37:40 -- nvmf/common.sh@511 -- # local dev _
00:09:41.796 12:37:40 -- nvmf/common.sh@513 -- # read -r dev _
00:09:41.796 12:37:40 -- nvmf/common.sh@510 -- # nvme list
00:09:41.797 12:37:40 -- nvmf/common.sh@514 -- # [[ Node == /dev/nvme* ]]
00:09:41.797 12:37:40 -- nvmf/common.sh@513 -- # read -r dev _
00:09:41.797 12:37:40 -- nvmf/common.sh@514 -- # [[ --------------------- == /dev/nvme* ]]
00:09:41.797 12:37:40 -- nvmf/common.sh@513 -- # read -r dev _
00:09:41.797 12:37:40 -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0
00:09:41.797 12:37:40 -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:09:42.730 12:37:41 -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2
00:09:42.730 12:37:41 -- common/autotest_common.sh@1184 -- # local i=0
00:09:42.730 12:37:41 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0
00:09:42.730 12:37:41 -- common/autotest_common.sh@1186 -- # [[ -n 2 ]]
00:09:42.730 12:37:41 -- common/autotest_common.sh@1187 -- # nvme_device_counter=2
00:09:42.730 12:37:41 -- common/autotest_common.sh@1191 -- # sleep 2
00:09:44.626 12:37:43 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 ))
00:09:44.626 12:37:43 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL
00:09:44.626 12:37:43 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME
00:09:44.626 12:37:43 -- common/autotest_common.sh@1193 -- # nvme_devices=2
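waitforserial, traced immediately above, is the settle loop that gives udev time to surface the namespaces as block devices carrying the subsystem serial. A standalone restatement with the same retry budget (the helper body below is a hypothetical condensation; the commands are the ones traced):

waitforserial() {
    local serial=$1 expected=${2:-1} i=0 found
    sleep 2                                     # let the kernel create the nodes
    while (( i++ <= 15 )); do
        found=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
        (( found == expected )) && return 0     # all expected devices are up
        sleep 1
    done
    return 1
}
waitforserial SPDKISFASTANDAWESOME 2    # Malloc0 and Malloc1 -> /dev/nvme0n1, /dev/nvme0n2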
00:09:44.626 12:37:43 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:09:44.626 12:37:43 -- common/autotest_common.sh@1194 -- # return 0 00:09:44.626 12:37:43 -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:09:44.626 12:37:43 -- nvmf/common.sh@511 -- # local dev _ 00:09:44.626 12:37:43 -- nvmf/common.sh@513 -- # read -r dev _ 00:09:44.626 12:37:43 -- nvmf/common.sh@510 -- # nvme list 00:09:44.626 12:37:43 -- nvmf/common.sh@514 -- # [[ Node == /dev/nvme* ]] 00:09:44.626 12:37:43 -- nvmf/common.sh@513 -- # read -r dev _ 00:09:44.626 12:37:43 -- nvmf/common.sh@514 -- # [[ --------------------- == /dev/nvme* ]] 00:09:44.626 12:37:43 -- nvmf/common.sh@513 -- # read -r dev _ 00:09:44.626 12:37:43 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:09:44.626 12:37:43 -- nvmf/common.sh@515 -- # echo /dev/nvme0n2 00:09:44.627 12:37:43 -- nvmf/common.sh@513 -- # read -r dev _ 00:09:44.627 12:37:43 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:09:44.627 12:37:43 -- nvmf/common.sh@515 -- # echo /dev/nvme0n1 00:09:44.627 12:37:43 -- nvmf/common.sh@513 -- # read -r dev _ 00:09:44.627 12:37:43 -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:09:44.627 /dev/nvme0n1 ]] 00:09:44.627 12:37:43 -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:09:44.627 12:37:43 -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:09:44.627 12:37:43 -- nvmf/common.sh@511 -- # local dev _ 00:09:44.627 12:37:43 -- nvmf/common.sh@513 -- # read -r dev _ 00:09:44.627 12:37:43 -- nvmf/common.sh@510 -- # nvme list 00:09:44.884 12:37:43 -- nvmf/common.sh@514 -- # [[ Node == /dev/nvme* ]] 00:09:44.884 12:37:43 -- nvmf/common.sh@513 -- # read -r dev _ 00:09:44.884 12:37:43 -- nvmf/common.sh@514 -- # [[ --------------------- == /dev/nvme* ]] 00:09:44.884 12:37:43 -- nvmf/common.sh@513 -- # read -r dev _ 00:09:44.884 12:37:43 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:09:44.884 12:37:43 -- nvmf/common.sh@515 -- # echo /dev/nvme0n2 00:09:44.884 12:37:43 -- nvmf/common.sh@513 -- # read -r dev _ 00:09:44.884 12:37:43 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:09:44.884 12:37:43 -- nvmf/common.sh@515 -- # echo /dev/nvme0n1 00:09:44.884 12:37:43 -- nvmf/common.sh@513 -- # read -r dev _ 00:09:44.884 12:37:43 -- target/nvme_cli.sh@59 -- # nvme_num=2 00:09:44.884 12:37:43 -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:45.142 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:45.142 12:37:43 -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:45.142 12:37:43 -- common/autotest_common.sh@1205 -- # local i=0 00:09:45.142 12:37:43 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:09:45.142 12:37:43 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:45.142 12:37:43 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:09:45.142 12:37:43 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:45.142 12:37:44 -- common/autotest_common.sh@1217 -- # return 0 00:09:45.142 12:37:44 -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:09:45.142 12:37:44 -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:45.142 12:37:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:45.142 12:37:44 -- common/autotest_common.sh@10 -- # set +x 00:09:45.142 12:37:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:45.142 12:37:44 -- 
target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:09:45.142 12:37:44 -- target/nvme_cli.sh@70 -- # nvmftestfini 00:09:45.142 12:37:44 -- nvmf/common.sh@477 -- # nvmfcleanup 00:09:45.142 12:37:44 -- nvmf/common.sh@117 -- # sync 00:09:45.142 12:37:44 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:45.142 12:37:44 -- nvmf/common.sh@120 -- # set +e 00:09:45.142 12:37:44 -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:45.142 12:37:44 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:45.142 rmmod nvme_tcp 00:09:45.142 rmmod nvme_fabrics 00:09:45.142 rmmod nvme_keyring 00:09:45.142 12:37:44 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:45.142 12:37:44 -- nvmf/common.sh@124 -- # set -e 00:09:45.142 12:37:44 -- nvmf/common.sh@125 -- # return 0 00:09:45.142 12:37:44 -- nvmf/common.sh@478 -- # '[' -n 1123888 ']' 00:09:45.142 12:37:44 -- nvmf/common.sh@479 -- # killprocess 1123888 00:09:45.142 12:37:44 -- common/autotest_common.sh@936 -- # '[' -z 1123888 ']' 00:09:45.142 12:37:44 -- common/autotest_common.sh@940 -- # kill -0 1123888 00:09:45.142 12:37:44 -- common/autotest_common.sh@941 -- # uname 00:09:45.142 12:37:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:45.142 12:37:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1123888 00:09:45.142 12:37:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:45.142 12:37:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:45.142 12:37:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1123888' 00:09:45.142 killing process with pid 1123888 00:09:45.142 12:37:44 -- common/autotest_common.sh@955 -- # kill 1123888 00:09:45.142 12:37:44 -- common/autotest_common.sh@960 -- # wait 1123888 00:09:45.402 12:37:44 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:09:45.402 12:37:44 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:09:45.402 12:37:44 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:09:45.402 12:37:44 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:45.402 12:37:44 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:45.402 12:37:44 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:45.402 12:37:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:45.402 12:37:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:47.937 12:37:46 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:47.937 00:09:47.937 real 0m9.581s 00:09:47.937 user 0m18.788s 00:09:47.937 sys 0m2.699s 00:09:47.937 12:37:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:47.937 12:37:46 -- common/autotest_common.sh@10 -- # set +x 00:09:47.937 ************************************ 00:09:47.937 END TEST nvmf_nvme_cli 00:09:47.937 ************************************ 00:09:47.937 12:37:46 -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:09:47.937 12:37:46 -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:09:47.937 12:37:46 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:47.937 12:37:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:47.937 12:37:46 -- common/autotest_common.sh@10 -- # set +x 00:09:47.937 ************************************ 00:09:47.937 START TEST nvmf_vfio_user 00:09:47.937 ************************************ 00:09:47.937 12:37:46 -- common/autotest_common.sh@1111 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:09:47.937 * Looking for test storage... 00:09:47.937 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:47.937 12:37:46 -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:47.937 12:37:46 -- nvmf/common.sh@7 -- # uname -s 00:09:47.937 12:37:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:47.937 12:37:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:47.937 12:37:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:47.937 12:37:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:47.937 12:37:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:47.937 12:37:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:47.937 12:37:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:47.937 12:37:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:47.937 12:37:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:47.937 12:37:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:47.938 12:37:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:09:47.938 12:37:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:09:47.938 12:37:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:47.938 12:37:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:47.938 12:37:46 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:47.938 12:37:46 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:47.938 12:37:46 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:47.938 12:37:46 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:47.938 12:37:46 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:47.938 12:37:46 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:47.938 12:37:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.938 12:37:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.938 12:37:46 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.938 12:37:46 -- paths/export.sh@5 -- # export PATH 00:09:47.938 12:37:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.938 12:37:46 -- nvmf/common.sh@47 -- # : 0 00:09:47.938 12:37:46 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:47.938 12:37:46 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:47.938 12:37:46 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:47.938 12:37:46 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:47.938 12:37:46 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:47.938 12:37:46 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:47.938 12:37:46 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:47.938 12:37:46 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:47.938 12:37:46 -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:09:47.938 12:37:46 -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:09:47.938 12:37:46 -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:09:47.938 12:37:46 -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:47.938 12:37:46 -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:09:47.938 12:37:46 -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:09:47.938 12:37:46 -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:09:47.938 12:37:46 -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:09:47.938 12:37:46 -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:09:47.938 12:37:46 -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:09:47.938 12:37:46 -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1124830 00:09:47.938 12:37:46 -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:09:47.938 12:37:46 -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1124830' 00:09:47.938 Process pid: 1124830 00:09:47.938 12:37:46 -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:09:47.938 12:37:46 -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1124830 00:09:47.938 12:37:46 -- common/autotest_common.sh@817 -- # '[' -z 1124830 ']' 00:09:47.938 12:37:46 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:47.938 12:37:46 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:47.938 12:37:46 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:47.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:47.938 12:37:46 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:47.938 12:37:46 -- common/autotest_common.sh@10 -- # set +x 00:09:47.938 [2024-04-16 12:37:46.697847] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 00:09:47.938 [2024-04-16 12:37:46.697927] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:47.938 EAL: No free 2048 kB hugepages reported on node 1 00:09:47.938 [2024-04-16 12:37:46.764297] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:47.938 [2024-04-16 12:37:46.871368] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:47.938 [2024-04-16 12:37:46.871421] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:47.938 [2024-04-16 12:37:46.871451] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:47.938 [2024-04-16 12:37:46.871464] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:47.938 [2024-04-16 12:37:46.871474] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:47.938 [2024-04-16 12:37:46.871540] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:47.938 [2024-04-16 12:37:46.871600] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:47.938 [2024-04-16 12:37:46.871666] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:47.938 [2024-04-16 12:37:46.871670] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:48.688 12:37:47 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:48.688 12:37:47 -- common/autotest_common.sh@850 -- # return 0 00:09:48.688 12:37:47 -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:09:49.620 12:37:48 -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:09:49.879 12:37:48 -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:09:49.879 12:37:48 -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:09:49.879 12:37:48 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:09:49.879 12:37:48 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:09:49.879 12:37:48 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:09:50.137 Malloc1 00:09:50.394 12:37:49 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:09:50.651 12:37:49 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:09:50.909 12:37:49 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:09:51.166 12:37:49 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:09:51.166 12:37:49 -- 
target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:09:51.166 12:37:49 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:09:51.424 Malloc2 00:09:51.424 12:37:50 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:09:51.681 12:37:50 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:09:51.938 12:37:50 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:09:52.198 12:37:51 -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:09:52.198 12:37:51 -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:09:52.198 12:37:51 -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:09:52.198 12:37:51 -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:09:52.198 12:37:51 -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:09:52.198 12:37:51 -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:09:52.198 [2024-04-16 12:37:51.068246] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 00:09:52.198 [2024-04-16 12:37:51.068285] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1125382 ] 00:09:52.198 EAL: No free 2048 kB hugepages reported on node 1 00:09:52.198 [2024-04-16 12:37:51.100807] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:09:52.198 [2024-04-16 12:37:51.110010] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:09:52.198 [2024-04-16 12:37:51.110038] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fc2c878f000 00:09:52.198 [2024-04-16 12:37:51.111005] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:52.198 [2024-04-16 12:37:51.111999] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:52.198 [2024-04-16 12:37:51.113006] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:52.198 [2024-04-16 12:37:51.114013] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:09:52.198 [2024-04-16 12:37:51.115014] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:09:52.198 [2024-04-16 12:37:51.116019] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 
00:09:52.198 [2024-04-16 12:37:51.117022] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:09:52.198 [2024-04-16 12:37:51.118029] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:52.198 [2024-04-16 12:37:51.119038] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:09:52.198 [2024-04-16 12:37:51.119061] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fc2c8784000 00:09:52.198 [2024-04-16 12:37:51.120201] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:09:52.198 [2024-04-16 12:37:51.139968] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:09:52.198 [2024-04-16 12:37:51.140006] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:09:52.198 [2024-04-16 12:37:51.142179] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:09:52.198 [2024-04-16 12:37:51.142232] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:09:52.198 [2024-04-16 12:37:51.142319] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:09:52.198 [2024-04-16 12:37:51.142347] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:09:52.198 [2024-04-16 12:37:51.142358] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:09:52.198 [2024-04-16 12:37:51.143174] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:09:52.198 [2024-04-16 12:37:51.143193] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:09:52.198 [2024-04-16 12:37:51.143206] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:09:52.198 [2024-04-16 12:37:51.144178] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:09:52.199 [2024-04-16 12:37:51.144198] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:09:52.199 [2024-04-16 12:37:51.144211] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:09:52.199 [2024-04-16 12:37:51.145181] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:09:52.199 [2024-04-16 12:37:51.145199] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:09:52.199 [2024-04-16 12:37:51.146186] 
nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:09:52.199 [2024-04-16 12:37:51.146205] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:09:52.199 [2024-04-16 12:37:51.146214] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:09:52.199 [2024-04-16 12:37:51.146225] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:09:52.199 [2024-04-16 12:37:51.146339] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:09:52.199 [2024-04-16 12:37:51.146347] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:09:52.199 [2024-04-16 12:37:51.146356] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:09:52.199 [2024-04-16 12:37:51.147194] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:09:52.199 [2024-04-16 12:37:51.148203] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:09:52.199 [2024-04-16 12:37:51.149205] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:09:52.199 [2024-04-16 12:37:51.150200] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:09:52.199 [2024-04-16 12:37:51.150325] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:09:52.199 [2024-04-16 12:37:51.151221] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:09:52.199 [2024-04-16 12:37:51.151239] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:09:52.199 [2024-04-16 12:37:51.151248] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:09:52.199 [2024-04-16 12:37:51.151273] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:09:52.199 [2024-04-16 12:37:51.151287] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:09:52.199 [2024-04-16 12:37:51.151312] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:09:52.199 [2024-04-16 12:37:51.151321] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:52.199 [2024-04-16 12:37:51.151339] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:52.199 [2024-04-16 
12:37:51.151404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:09:52.199 [2024-04-16 12:37:51.151419] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:09:52.199 [2024-04-16 12:37:51.151427] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:09:52.199 [2024-04-16 12:37:51.151435] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:09:52.199 [2024-04-16 12:37:51.151443] nvme_ctrlr.c:2002:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:09:52.199 [2024-04-16 12:37:51.151451] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:09:52.199 [2024-04-16 12:37:51.151459] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:09:52.199 [2024-04-16 12:37:51.151467] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:09:52.199 [2024-04-16 12:37:51.151479] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:09:52.199 [2024-04-16 12:37:51.151500] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:09:52.199 [2024-04-16 12:37:51.151516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:09:52.199 [2024-04-16 12:37:51.151537] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:09:52.199 [2024-04-16 12:37:51.151574] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:09:52.199 [2024-04-16 12:37:51.151589] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:09:52.199 [2024-04-16 12:37:51.151601] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:09:52.199 [2024-04-16 12:37:51.151611] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:09:52.199 [2024-04-16 12:37:51.151627] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:09:52.199 [2024-04-16 12:37:51.151643] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:09:52.199 [2024-04-16 12:37:51.151656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:09:52.199 [2024-04-16 12:37:51.151666] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:09:52.199 [2024-04-16 12:37:51.151675] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:09:52.199 [2024-04-16 12:37:51.151690] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:09:52.199 [2024-04-16 12:37:51.151701] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:09:52.199 [2024-04-16 12:37:51.151715] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:09:52.199 [2024-04-16 12:37:51.151728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:09:52.199 [2024-04-16 12:37:51.151781] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:09:52.199 [2024-04-16 12:37:51.151797] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:09:52.199 [2024-04-16 12:37:51.151810] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:09:52.199 [2024-04-16 12:37:51.151819] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:09:52.199 [2024-04-16 12:37:51.151829] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:09:52.199 [2024-04-16 12:37:51.151843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:09:52.199 [2024-04-16 12:37:51.151860] nvme_ctrlr.c:4557:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:09:52.199 [2024-04-16 12:37:51.151894] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:09:52.199 [2024-04-16 12:37:51.151909] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:09:52.199 [2024-04-16 12:37:51.151924] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:09:52.199 [2024-04-16 12:37:51.151933] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:52.199 [2024-04-16 12:37:51.151942] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:52.199 [2024-04-16 12:37:51.151960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:09:52.199 [2024-04-16 12:37:51.151981] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:09:52.199 [2024-04-16 12:37:51.151995] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:09:52.199 [2024-04-16 12:37:51.152007] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 
virt_addr:0x2000002fb000 len:4096 00:09:52.199 [2024-04-16 12:37:51.152015] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:52.199 [2024-04-16 12:37:51.152025] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:52.199 [2024-04-16 12:37:51.152036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:09:52.199 [2024-04-16 12:37:51.152050] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:09:52.199 [2024-04-16 12:37:51.152061] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:09:52.199 [2024-04-16 12:37:51.152074] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:09:52.199 [2024-04-16 12:37:51.152084] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:09:52.199 [2024-04-16 12:37:51.152093] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:09:52.199 [2024-04-16 12:37:51.152101] nvme_ctrlr.c:2990:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:09:52.199 [2024-04-16 12:37:51.152108] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:09:52.199 [2024-04-16 12:37:51.152117] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:09:52.199 [2024-04-16 12:37:51.152141] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:09:52.200 [2024-04-16 12:37:51.152159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:09:52.200 [2024-04-16 12:37:51.152178] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:09:52.200 [2024-04-16 12:37:51.152193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:09:52.200 [2024-04-16 12:37:51.152209] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:09:52.200 [2024-04-16 12:37:51.152223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:09:52.200 [2024-04-16 12:37:51.152238] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:09:52.200 [2024-04-16 12:37:51.152250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:09:52.200 [2024-04-16 12:37:51.152272] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:09:52.200 [2024-04-16 12:37:51.152282] 
nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:09:52.200 [2024-04-16 12:37:51.152288] nvme_pcie_common.c:1235:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:09:52.200 [2024-04-16 12:37:51.152294] nvme_pcie_common.c:1251:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:09:52.200 [2024-04-16 12:37:51.152303] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:09:52.200 [2024-04-16 12:37:51.152314] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:09:52.200 [2024-04-16 12:37:51.152322] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:09:52.200 [2024-04-16 12:37:51.152331] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:09:52.200 [2024-04-16 12:37:51.152342] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:09:52.200 [2024-04-16 12:37:51.152350] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:52.200 [2024-04-16 12:37:51.152358] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:52.200 [2024-04-16 12:37:51.152370] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:09:52.200 [2024-04-16 12:37:51.152379] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:09:52.200 [2024-04-16 12:37:51.152387] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:09:52.200 [2024-04-16 12:37:51.152398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:09:52.200 [2024-04-16 12:37:51.152419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:09:52.200 [2024-04-16 12:37:51.152434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:09:52.200 [2024-04-16 12:37:51.152446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:09:52.200 ===================================================== 00:09:52.200 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:09:52.200 ===================================================== 00:09:52.200 Controller Capabilities/Features 00:09:52.200 ================================ 00:09:52.200 Vendor ID: 4e58 00:09:52.200 Subsystem Vendor ID: 4e58 00:09:52.200 Serial Number: SPDK1 00:09:52.200 Model Number: SPDK bdev Controller 00:09:52.200 Firmware Version: 24.05 00:09:52.200 Recommended Arb Burst: 6 00:09:52.200 IEEE OUI Identifier: 8d 6b 50 00:09:52.200 Multi-path I/O 00:09:52.200 May have multiple subsystem ports: Yes 00:09:52.200 May have multiple controllers: Yes 00:09:52.200 Associated with SR-IOV VF: No 00:09:52.200 Max Data Transfer Size: 131072 00:09:52.200 Max Number of Namespaces: 32 00:09:52.200 Max Number of I/O Queues: 127 00:09:52.200 NVMe 
Specification Version (VS): 1.3 00:09:52.200 NVMe Specification Version (Identify): 1.3 00:09:52.200 Maximum Queue Entries: 256 00:09:52.200 Contiguous Queues Required: Yes 00:09:52.200 Arbitration Mechanisms Supported 00:09:52.200 Weighted Round Robin: Not Supported 00:09:52.200 Vendor Specific: Not Supported 00:09:52.200 Reset Timeout: 15000 ms 00:09:52.200 Doorbell Stride: 4 bytes 00:09:52.200 NVM Subsystem Reset: Not Supported 00:09:52.200 Command Sets Supported 00:09:52.200 NVM Command Set: Supported 00:09:52.200 Boot Partition: Not Supported 00:09:52.200 Memory Page Size Minimum: 4096 bytes 00:09:52.200 Memory Page Size Maximum: 4096 bytes 00:09:52.200 Persistent Memory Region: Not Supported 00:09:52.200 Optional Asynchronous Events Supported 00:09:52.200 Namespace Attribute Notices: Supported 00:09:52.200 Firmware Activation Notices: Not Supported 00:09:52.200 ANA Change Notices: Not Supported 00:09:52.200 PLE Aggregate Log Change Notices: Not Supported 00:09:52.200 LBA Status Info Alert Notices: Not Supported 00:09:52.200 EGE Aggregate Log Change Notices: Not Supported 00:09:52.200 Normal NVM Subsystem Shutdown event: Not Supported 00:09:52.200 Zone Descriptor Change Notices: Not Supported 00:09:52.200 Discovery Log Change Notices: Not Supported 00:09:52.200 Controller Attributes 00:09:52.200 128-bit Host Identifier: Supported 00:09:52.200 Non-Operational Permissive Mode: Not Supported 00:09:52.200 NVM Sets: Not Supported 00:09:52.200 Read Recovery Levels: Not Supported 00:09:52.200 Endurance Groups: Not Supported 00:09:52.200 Predictable Latency Mode: Not Supported 00:09:52.200 Traffic Based Keep ALive: Not Supported 00:09:52.200 Namespace Granularity: Not Supported 00:09:52.200 SQ Associations: Not Supported 00:09:52.200 UUID List: Not Supported 00:09:52.200 Multi-Domain Subsystem: Not Supported 00:09:52.200 Fixed Capacity Management: Not Supported 00:09:52.200 Variable Capacity Management: Not Supported 00:09:52.200 Delete Endurance Group: Not Supported 00:09:52.200 Delete NVM Set: Not Supported 00:09:52.200 Extended LBA Formats Supported: Not Supported 00:09:52.200 Flexible Data Placement Supported: Not Supported 00:09:52.200 00:09:52.200 Controller Memory Buffer Support 00:09:52.200 ================================ 00:09:52.200 Supported: No 00:09:52.200 00:09:52.200 Persistent Memory Region Support 00:09:52.200 ================================ 00:09:52.200 Supported: No 00:09:52.200 00:09:52.200 Admin Command Set Attributes 00:09:52.200 ============================ 00:09:52.200 Security Send/Receive: Not Supported 00:09:52.200 Format NVM: Not Supported 00:09:52.200 Firmware Activate/Download: Not Supported 00:09:52.200 Namespace Management: Not Supported 00:09:52.200 Device Self-Test: Not Supported 00:09:52.200 Directives: Not Supported 00:09:52.200 NVMe-MI: Not Supported 00:09:52.200 Virtualization Management: Not Supported 00:09:52.200 Doorbell Buffer Config: Not Supported 00:09:52.200 Get LBA Status Capability: Not Supported 00:09:52.200 Command & Feature Lockdown Capability: Not Supported 00:09:52.200 Abort Command Limit: 4 00:09:52.200 Async Event Request Limit: 4 00:09:52.200 Number of Firmware Slots: N/A 00:09:52.200 Firmware Slot 1 Read-Only: N/A 00:09:52.200 Firmware Activation Without Reset: N/A 00:09:52.200 Multiple Update Detection Support: N/A 00:09:52.200 Firmware Update Granularity: No Information Provided 00:09:52.200 Per-Namespace SMART Log: No 00:09:52.200 Asymmetric Namespace Access Log Page: Not Supported 00:09:52.200 Subsystem NQN: 
nqn.2019-07.io.spdk:cnode1 00:09:52.200 Command Effects Log Page: Supported 00:09:52.200 Get Log Page Extended Data: Supported 00:09:52.200 Telemetry Log Pages: Not Supported 00:09:52.200 Persistent Event Log Pages: Not Supported 00:09:52.200 Supported Log Pages Log Page: May Support 00:09:52.200 Commands Supported & Effects Log Page: Not Supported 00:09:52.200 Feature Identifiers & Effects Log Page:May Support 00:09:52.200 NVMe-MI Commands & Effects Log Page: May Support 00:09:52.200 Data Area 4 for Telemetry Log: Not Supported 00:09:52.200 Error Log Page Entries Supported: 128 00:09:52.200 Keep Alive: Supported 00:09:52.200 Keep Alive Granularity: 10000 ms 00:09:52.200 00:09:52.200 NVM Command Set Attributes 00:09:52.200 ========================== 00:09:52.200 Submission Queue Entry Size 00:09:52.200 Max: 64 00:09:52.200 Min: 64 00:09:52.200 Completion Queue Entry Size 00:09:52.200 Max: 16 00:09:52.200 Min: 16 00:09:52.200 Number of Namespaces: 32 00:09:52.200 Compare Command: Supported 00:09:52.200 Write Uncorrectable Command: Not Supported 00:09:52.200 Dataset Management Command: Supported 00:09:52.200 Write Zeroes Command: Supported 00:09:52.200 Set Features Save Field: Not Supported 00:09:52.200 Reservations: Not Supported 00:09:52.200 Timestamp: Not Supported 00:09:52.200 Copy: Supported 00:09:52.200 Volatile Write Cache: Present 00:09:52.200 Atomic Write Unit (Normal): 1 00:09:52.200 Atomic Write Unit (PFail): 1 00:09:52.200 Atomic Compare & Write Unit: 1 00:09:52.200 Fused Compare & Write: Supported 00:09:52.200 Scatter-Gather List 00:09:52.200 SGL Command Set: Supported (Dword aligned) 00:09:52.200 SGL Keyed: Not Supported 00:09:52.200 SGL Bit Bucket Descriptor: Not Supported 00:09:52.200 SGL Metadata Pointer: Not Supported 00:09:52.201 Oversized SGL: Not Supported 00:09:52.201 SGL Metadata Address: Not Supported 00:09:52.201 SGL Offset: Not Supported 00:09:52.201 Transport SGL Data Block: Not Supported 00:09:52.201 Replay Protected Memory Block: Not Supported 00:09:52.201 00:09:52.201 Firmware Slot Information 00:09:52.201 ========================= 00:09:52.201 Active slot: 1 00:09:52.201 Slot 1 Firmware Revision: 24.05 00:09:52.201 00:09:52.201 00:09:52.201 Commands Supported and Effects 00:09:52.201 ============================== 00:09:52.201 Admin Commands 00:09:52.201 -------------- 00:09:52.201 Get Log Page (02h): Supported 00:09:52.201 Identify (06h): Supported 00:09:52.201 Abort (08h): Supported 00:09:52.201 Set Features (09h): Supported 00:09:52.201 Get Features (0Ah): Supported 00:09:52.201 Asynchronous Event Request (0Ch): Supported 00:09:52.201 Keep Alive (18h): Supported 00:09:52.201 I/O Commands 00:09:52.201 ------------ 00:09:52.201 Flush (00h): Supported LBA-Change 00:09:52.201 Write (01h): Supported LBA-Change 00:09:52.201 Read (02h): Supported 00:09:52.201 Compare (05h): Supported 00:09:52.201 Write Zeroes (08h): Supported LBA-Change 00:09:52.201 Dataset Management (09h): Supported LBA-Change 00:09:52.201 Copy (19h): Supported LBA-Change 00:09:52.201 Unknown (79h): Supported LBA-Change 00:09:52.201 Unknown (7Ah): Supported 00:09:52.201 00:09:52.201 Error Log 00:09:52.201 ========= 00:09:52.201 00:09:52.201 Arbitration 00:09:52.201 =========== 00:09:52.201 Arbitration Burst: 1 00:09:52.201 00:09:52.201 Power Management 00:09:52.201 ================ 00:09:52.201 Number of Power States: 1 00:09:52.201 Current Power State: Power State #0 00:09:52.201 Power State #0: 00:09:52.201 Max Power: 0.00 W 00:09:52.201 Non-Operational State: Operational 00:09:52.201 Entry 
Latency: Not Reported 00:09:52.201 Exit Latency: Not Reported 00:09:52.201 Relative Read Throughput: 0 00:09:52.201 Relative Read Latency: 0 00:09:52.201 Relative Write Throughput: 0 00:09:52.201 Relative Write Latency: 0 00:09:52.201 Idle Power: Not Reported 00:09:52.201 Active Power: Not Reported 00:09:52.201 Non-Operational Permissive Mode: Not Supported 00:09:52.201 00:09:52.201 Health Information 00:09:52.201 ================== 00:09:52.201 Critical Warnings: 00:09:52.201 Available Spare Space: OK 00:09:52.201 Temperature: OK 00:09:52.201 Device Reliability: OK 00:09:52.201 Read Only: No 00:09:52.201 Volatile Memory Backup: OK [2024-04-16 12:37:51.152598] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:09:52.201 [2024-04-16 12:37:51.152631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:09:52.201 [2024-04-16 12:37:51.152671] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:09:52.201 [2024-04-16 12:37:51.152688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:52.201 [2024-04-16 12:37:51.152699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:52.201 [2024-04-16 12:37:51.152710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:52.201 [2024-04-16 12:37:51.152720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:52.201 [2024-04-16 12:37:51.153231] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:09:52.201 [2024-04-16 12:37:51.153251] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:09:52.201 [2024-04-16 12:37:51.154226] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:09:52.201 [2024-04-16 12:37:51.154311] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:09:52.201 [2024-04-16 12:37:51.154326] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:09:52.201 [2024-04-16 12:37:51.155240] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:09:52.201 [2024-04-16 12:37:51.155262] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:09:52.201 [2024-04-16 12:37:51.155315] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:09:52.201 [2024-04-16 12:37:51.159575] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:09:52.201 Current Temperature: 0 Kelvin (-273 Celsius) 00:09:52.201 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:09:52.201 Available Spare: 0% 00:09:52.201 Available Spare Threshold: 0% 00:09:52.201 Life Percentage Used: 0%
00:09:52.201 Data Units Read: 0 00:09:52.201 Data Units Written: 0 00:09:52.201 Host Read Commands: 0 00:09:52.201 Host Write Commands: 0 00:09:52.201 Controller Busy Time: 0 minutes 00:09:52.201 Power Cycles: 0 00:09:52.201 Power On Hours: 0 hours 00:09:52.201 Unsafe Shutdowns: 0 00:09:52.201 Unrecoverable Media Errors: 0 00:09:52.201 Lifetime Error Log Entries: 0 00:09:52.201 Warning Temperature Time: 0 minutes 00:09:52.201 Critical Temperature Time: 0 minutes 00:09:52.201 00:09:52.201 Number of Queues 00:09:52.201 ================ 00:09:52.201 Number of I/O Submission Queues: 127 00:09:52.201 Number of I/O Completion Queues: 127 00:09:52.201 00:09:52.201 Active Namespaces 00:09:52.201 ================= 00:09:52.201 Namespace ID:1 00:09:52.201 Error Recovery Timeout: Unlimited 00:09:52.201 Command Set Identifier: NVM (00h) 00:09:52.201 Deallocate: Supported 00:09:52.201 Deallocated/Unwritten Error: Not Supported 00:09:52.201 Deallocated Read Value: Unknown 00:09:52.201 Deallocate in Write Zeroes: Not Supported 00:09:52.201 Deallocated Guard Field: 0xFFFF 00:09:52.201 Flush: Supported 00:09:52.201 Reservation: Supported 00:09:52.201 Namespace Sharing Capabilities: Multiple Controllers 00:09:52.201 Size (in LBAs): 131072 (0GiB) 00:09:52.201 Capacity (in LBAs): 131072 (0GiB) 00:09:52.201 Utilization (in LBAs): 131072 (0GiB) 00:09:52.201 NGUID: 546F23316BF1433B8D9C7333C741F1C9 00:09:52.201 UUID: 546f2331-6bf1-433b-8d9c-7333c741f1c9 00:09:52.201 Thin Provisioning: Not Supported 00:09:52.201 Per-NS Atomic Units: Yes 00:09:52.201 Atomic Boundary Size (Normal): 0 00:09:52.201 Atomic Boundary Size (PFail): 0 00:09:52.201 Atomic Boundary Offset: 0 00:09:52.201 Maximum Single Source Range Length: 65535 00:09:52.201 Maximum Copy Length: 65535 00:09:52.201 Maximum Source Range Count: 1 00:09:52.201 NGUID/EUI64 Never Reused: No 00:09:52.201 Namespace Write Protected: No 00:09:52.201 Number of LBA Formats: 1 00:09:52.201 Current LBA Format: LBA Format #00 00:09:52.201 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:52.201 00:09:52.201 12:37:51 -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:09:52.201 EAL: No free 2048 kB hugepages reported on node 1 00:09:52.459 [2024-04-16 12:37:51.389389] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:09:57.724 [2024-04-16 12:37:56.408807] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:09:57.724 Initializing NVMe Controllers 00:09:57.724 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:09:57.724 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:09:57.724 Initialization complete. Launching workers. 
00:09:57.724 ======================================================== 00:09:57.724 Latency(us) 00:09:57.724 Device Information : IOPS MiB/s Average min max 00:09:57.724 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 32761.00 127.97 3906.83 1215.87 9637.85 00:09:57.724 ======================================================== 00:09:57.724 Total : 32761.00 127.97 3906.83 1215.87 9637.85 00:09:57.724 00:09:57.725 12:37:56 -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:09:57.725 EAL: No free 2048 kB hugepages reported on node 1 00:09:57.725 [2024-04-16 12:37:56.644940] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:02.989 [2024-04-16 12:38:01.677777] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:02.989 Initializing NVMe Controllers 00:10:02.989 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:10:02.989 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:10:02.989 Initialization complete. Launching workers. 00:10:02.989 ======================================================== 00:10:02.989 Latency(us) 00:10:02.989 Device Information : IOPS MiB/s Average min max 00:10:02.989 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 15968.10 62.38 8015.24 6961.14 15999.59 00:10:02.989 ======================================================== 00:10:02.989 Total : 15968.10 62.38 8015.24 6961.14 15999.59 00:10:02.989 00:10:02.989 12:38:01 -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:10:02.989 EAL: No free 2048 kB hugepages reported on node 1 00:10:02.989 [2024-04-16 12:38:01.896813] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:08.302 [2024-04-16 12:38:06.970946] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:08.302 Initializing NVMe Controllers 00:10:08.302 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:10:08.302 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:10:08.302 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:10:08.302 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:10:08.302 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:10:08.302 Initialization complete. Launching workers. 
00:10:08.302 Starting thread on core 2 00:10:08.302 Starting thread on core 3 00:10:08.302 Starting thread on core 1 00:10:08.302 12:38:07 -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:10:08.302 EAL: No free 2048 kB hugepages reported on node 1 00:10:08.302 [2024-04-16 12:38:07.283048] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:12.488 [2024-04-16 12:38:11.297147] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:12.488 Initializing NVMe Controllers 00:10:12.488 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:10:12.488 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:10:12.488 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:10:12.488 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:10:12.488 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:10:12.488 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:10:12.488 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:10:12.488 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:10:12.488 Initialization complete. Launching workers. 00:10:12.488 Starting thread on core 1 with urgent priority queue 00:10:12.488 Starting thread on core 2 with urgent priority queue 00:10:12.488 Starting thread on core 3 with urgent priority queue 00:10:12.488 Starting thread on core 0 with urgent priority queue 00:10:12.488 SPDK bdev Controller (SPDK1 ) core 0: 3214.33 IO/s 31.11 secs/100000 ios 00:10:12.488 SPDK bdev Controller (SPDK1 ) core 1: 3239.00 IO/s 30.87 secs/100000 ios 00:10:12.488 SPDK bdev Controller (SPDK1 ) core 2: 2779.67 IO/s 35.98 secs/100000 ios 00:10:12.488 SPDK bdev Controller (SPDK1 ) core 3: 2836.00 IO/s 35.26 secs/100000 ios 00:10:12.488 ======================================================== 00:10:12.488 00:10:12.488 12:38:11 -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:10:12.488 EAL: No free 2048 kB hugepages reported on node 1 00:10:12.745 [2024-04-16 12:38:11.596134] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:12.745 [2024-04-16 12:38:11.631728] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:12.745 Initializing NVMe Controllers 00:10:12.745 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:10:12.745 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:10:12.745 Namespace ID: 1 size: 0GB 00:10:12.745 Initialization complete. 00:10:12.745 INFO: using host memory buffer for IO 00:10:12.745 Hello world! 
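For anyone replaying this suite by hand, the vfio-user target setup that the nvmf_vfio_user.sh trace above drives one rpc.py call at a time condenses to the short bash sequence below. This is a minimal sketch, not the test script itself: it assumes an nvmf_tgt process is already running, it reuses the socket path, NQN, serial number, and malloc bdev sizes from this run, and the $rpc shorthand is introduced here for readability only.

  # Minimal sketch: stand up one vfio-user controller (assumes a running nvmf_tgt)
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py  # $rpc is shorthand, not from the trace
  $rpc nvmf_create_transport -t VFIOUSER                                # register the VFIOUSER transport once
  mkdir -p /var/run/vfio-user/domain/vfio-user1/1                       # directory backing the listener socket
  $rpc bdev_malloc_create 64 512 -b Malloc1                             # 64 MB malloc bdev, 512-byte blocks
  $rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1     # -a: allow any host; serial SPDK1
  $rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1         # expose the bdev as namespace 1
  $rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER \
      -a /var/run/vfio-user/domain/vfio-user1/1 -s 0                    # -s 0: trsvcid placeholder, as in the trace

Each example binary exercised above (spdk_nvme_identify, spdk_nvme_perf, reconnect, arbitration, hello_world, overhead) then attaches with the same transport ID string used throughout this log: -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'.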
00:10:12.745 12:38:11 -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:10:12.745 EAL: No free 2048 kB hugepages reported on node 1 00:10:13.003 [2024-04-16 12:38:11.934035] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:13.936 Initializing NVMe Controllers 00:10:13.936 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:10:13.936 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:10:13.936 Initialization complete. Launching workers. 00:10:13.936 submit (in ns) avg, min, max = 8502.5, 3486.7, 4018350.0 00:10:13.936 complete (in ns) avg, min, max = 23715.2, 2038.9, 4015587.8 00:10:13.936 00:10:13.936 Submit histogram 00:10:13.936 ================ 00:10:13.936 Range in us Cumulative Count 00:10:13.936 3.484 - 3.508: 0.3936% ( 52) 00:10:13.936 3.508 - 3.532: 1.3623% ( 128) 00:10:13.936 3.532 - 3.556: 4.3064% ( 389) 00:10:13.937 3.556 - 3.579: 9.6647% ( 708) 00:10:13.937 3.579 - 3.603: 19.5565% ( 1307) 00:10:13.937 3.603 - 3.627: 28.7974% ( 1221) 00:10:13.937 3.627 - 3.650: 38.1291% ( 1233) 00:10:13.937 3.650 - 3.674: 45.1979% ( 934) 00:10:13.937 3.674 - 3.698: 51.6612% ( 854) 00:10:13.937 3.698 - 3.721: 56.8153% ( 681) 00:10:13.937 3.721 - 3.745: 60.5691% ( 496) 00:10:13.937 3.745 - 3.769: 63.5889% ( 399) 00:10:13.937 3.769 - 3.793: 66.3362% ( 363) 00:10:13.937 3.793 - 3.816: 69.4392% ( 410) 00:10:13.937 3.816 - 3.840: 73.0568% ( 478) 00:10:13.937 3.840 - 3.864: 77.2270% ( 551) 00:10:13.937 3.864 - 3.887: 80.8068% ( 473) 00:10:13.937 3.887 - 3.911: 83.6979% ( 382) 00:10:13.937 3.911 - 3.935: 85.9078% ( 292) 00:10:13.937 3.935 - 3.959: 87.5728% ( 220) 00:10:13.937 3.959 - 3.982: 88.9276% ( 179) 00:10:13.937 3.982 - 4.006: 90.0855% ( 153) 00:10:13.937 4.006 - 4.030: 91.1148% ( 136) 00:10:13.937 4.030 - 4.053: 91.9625% ( 112) 00:10:13.937 4.053 - 4.077: 92.7420% ( 103) 00:10:13.937 4.077 - 4.101: 93.7107% ( 128) 00:10:13.937 4.101 - 4.124: 94.5508% ( 111) 00:10:13.937 4.124 - 4.148: 95.1790% ( 83) 00:10:13.937 4.148 - 4.172: 95.4817% ( 40) 00:10:13.937 4.172 - 4.196: 95.8072% ( 43) 00:10:13.937 4.196 - 4.219: 96.0115% ( 27) 00:10:13.937 4.219 - 4.243: 96.1553% ( 19) 00:10:13.937 4.243 - 4.267: 96.2537% ( 13) 00:10:13.937 4.267 - 4.290: 96.3748% ( 16) 00:10:13.937 4.290 - 4.314: 96.5489% ( 23) 00:10:13.937 4.314 - 4.338: 96.6548% ( 14) 00:10:13.937 4.338 - 4.361: 96.7305% ( 10) 00:10:13.937 4.361 - 4.385: 96.7835% ( 7) 00:10:13.937 4.385 - 4.409: 96.8213% ( 5) 00:10:13.937 4.409 - 4.433: 96.8743% ( 7) 00:10:13.937 4.433 - 4.456: 96.9500% ( 10) 00:10:13.937 4.456 - 4.480: 96.9878% ( 5) 00:10:13.937 4.504 - 4.527: 96.9954% ( 1) 00:10:13.937 4.527 - 4.551: 97.0105% ( 2) 00:10:13.937 4.551 - 4.575: 97.0332% ( 3) 00:10:13.937 4.575 - 4.599: 97.0484% ( 2) 00:10:13.937 4.599 - 4.622: 97.0711% ( 3) 00:10:13.937 4.622 - 4.646: 97.0862% ( 2) 00:10:13.937 4.646 - 4.670: 97.0938% ( 1) 00:10:13.937 4.670 - 4.693: 97.1316% ( 5) 00:10:13.937 4.693 - 4.717: 97.1695% ( 5) 00:10:13.937 4.717 - 4.741: 97.2300% ( 8) 00:10:13.937 4.741 - 4.764: 97.3133% ( 11) 00:10:13.937 4.764 - 4.788: 97.3738% ( 8) 00:10:13.937 4.788 - 4.812: 97.4041% ( 4) 00:10:13.937 4.812 - 4.836: 97.4268% ( 3) 00:10:13.937 4.836 - 4.859: 97.4495% ( 3) 00:10:13.937 4.859 - 4.883: 97.4646% ( 2) 00:10:13.937 4.883 - 4.907: 97.4949% ( 4) 00:10:13.937 4.907 - 4.930: 97.5403% ( 6) 00:10:13.937 
4.930 - 4.954: 97.5554% ( 2) 00:10:13.937 4.954 - 4.978: 97.5781% ( 3) 00:10:13.937 4.978 - 5.001: 97.6160% ( 5) 00:10:13.937 5.001 - 5.025: 97.6311% ( 2) 00:10:13.937 5.025 - 5.049: 97.6463% ( 2) 00:10:13.937 5.049 - 5.073: 97.6765% ( 4) 00:10:13.937 5.073 - 5.096: 97.6992% ( 3) 00:10:13.937 5.096 - 5.120: 97.7295% ( 4) 00:10:13.937 5.120 - 5.144: 97.7446% ( 2) 00:10:13.937 5.144 - 5.167: 97.7598% ( 2) 00:10:13.937 5.167 - 5.191: 97.7749% ( 2) 00:10:13.937 5.215 - 5.239: 97.7825% ( 1) 00:10:13.937 5.333 - 5.357: 97.7901% ( 1) 00:10:13.937 5.428 - 5.452: 97.7976% ( 1) 00:10:13.937 5.452 - 5.476: 97.8052% ( 1) 00:10:13.937 5.476 - 5.499: 97.8128% ( 1) 00:10:13.937 5.547 - 5.570: 97.8203% ( 1) 00:10:13.937 5.641 - 5.665: 97.8279% ( 1) 00:10:13.937 5.831 - 5.855: 97.8355% ( 1) 00:10:13.937 5.879 - 5.902: 97.8506% ( 2) 00:10:13.937 5.902 - 5.926: 97.8582% ( 1) 00:10:13.937 5.950 - 5.973: 97.8733% ( 2) 00:10:13.937 5.973 - 5.997: 97.8809% ( 1) 00:10:13.937 6.116 - 6.163: 97.9036% ( 3) 00:10:13.937 6.258 - 6.305: 97.9111% ( 1) 00:10:13.937 6.305 - 6.353: 97.9187% ( 1) 00:10:13.937 6.447 - 6.495: 97.9414% ( 3) 00:10:13.937 6.542 - 6.590: 97.9490% ( 1) 00:10:13.937 6.590 - 6.637: 97.9566% ( 1) 00:10:13.937 6.637 - 6.684: 97.9641% ( 1) 00:10:13.937 6.732 - 6.779: 97.9717% ( 1) 00:10:13.937 6.827 - 6.874: 97.9868% ( 2) 00:10:13.937 7.016 - 7.064: 97.9944% ( 1) 00:10:13.937 7.301 - 7.348: 98.0095% ( 2) 00:10:13.937 7.396 - 7.443: 98.0171% ( 1) 00:10:13.937 7.490 - 7.538: 98.0322% ( 2) 00:10:13.937 7.538 - 7.585: 98.0474% ( 2) 00:10:13.937 7.585 - 7.633: 98.0625% ( 2) 00:10:13.937 7.633 - 7.680: 98.0701% ( 1) 00:10:13.937 7.775 - 7.822: 98.0777% ( 1) 00:10:13.937 7.870 - 7.917: 98.0852% ( 1) 00:10:13.937 7.917 - 7.964: 98.1004% ( 2) 00:10:13.937 7.964 - 8.012: 98.1079% ( 1) 00:10:13.937 8.012 - 8.059: 98.1155% ( 1) 00:10:13.937 8.059 - 8.107: 98.1231% ( 1) 00:10:13.937 8.107 - 8.154: 98.1306% ( 1) 00:10:13.937 8.154 - 8.201: 98.1382% ( 1) 00:10:13.937 8.201 - 8.249: 98.1685% ( 4) 00:10:13.937 8.296 - 8.344: 98.1760% ( 1) 00:10:13.937 8.391 - 8.439: 98.1912% ( 2) 00:10:13.937 8.486 - 8.533: 98.2063% ( 2) 00:10:13.937 8.533 - 8.581: 98.2139% ( 1) 00:10:13.937 8.628 - 8.676: 98.2366% ( 3) 00:10:13.937 8.676 - 8.723: 98.2442% ( 1) 00:10:13.937 8.770 - 8.818: 98.2517% ( 1) 00:10:13.937 8.818 - 8.865: 98.2669% ( 2) 00:10:13.937 8.913 - 8.960: 98.2744% ( 1) 00:10:13.937 8.960 - 9.007: 98.2820% ( 1) 00:10:13.937 9.007 - 9.055: 98.3047% ( 3) 00:10:13.937 9.055 - 9.102: 98.3123% ( 1) 00:10:13.937 9.102 - 9.150: 98.3274% ( 2) 00:10:13.937 9.150 - 9.197: 98.3350% ( 1) 00:10:13.937 9.434 - 9.481: 98.3577% ( 3) 00:10:13.937 9.481 - 9.529: 98.3728% ( 2) 00:10:13.937 9.529 - 9.576: 98.3804% ( 1) 00:10:13.937 9.576 - 9.624: 98.3880% ( 1) 00:10:13.937 9.624 - 9.671: 98.3955% ( 1) 00:10:13.937 9.719 - 9.766: 98.4031% ( 1) 00:10:13.937 9.813 - 9.861: 98.4182% ( 2) 00:10:13.937 9.956 - 10.003: 98.4258% ( 1) 00:10:13.937 10.003 - 10.050: 98.4409% ( 2) 00:10:13.937 10.050 - 10.098: 98.4485% ( 1) 00:10:13.937 10.098 - 10.145: 98.4561% ( 1) 00:10:13.937 10.240 - 10.287: 98.4636% ( 1) 00:10:13.937 10.382 - 10.430: 98.4788% ( 2) 00:10:13.937 10.430 - 10.477: 98.4863% ( 1) 00:10:13.937 10.477 - 10.524: 98.5090% ( 3) 00:10:13.937 10.524 - 10.572: 98.5166% ( 1) 00:10:13.937 10.619 - 10.667: 98.5242% ( 1) 00:10:13.937 10.667 - 10.714: 98.5317% ( 1) 00:10:13.937 10.714 - 10.761: 98.5393% ( 1) 00:10:13.937 10.999 - 11.046: 98.5620% ( 3) 00:10:13.937 11.046 - 11.093: 98.5696% ( 1) 00:10:13.937 11.093 - 11.141: 98.5772% ( 1) 
00:10:13.937 11.236 - 11.283: 98.5847% ( 1) 00:10:13.937 11.425 - 11.473: 98.5999% ( 2) 00:10:13.937 11.473 - 11.520: 98.6150% ( 2) 00:10:13.937 11.567 - 11.615: 98.6226% ( 1) 00:10:13.937 11.615 - 11.662: 98.6377% ( 2) 00:10:13.937 11.757 - 11.804: 98.6453% ( 1) 00:10:13.937 11.899 - 11.947: 98.6528% ( 1) 00:10:13.937 12.041 - 12.089: 98.6604% ( 1) 00:10:13.937 12.089 - 12.136: 98.6680% ( 1) 00:10:13.937 12.231 - 12.326: 98.6907% ( 3) 00:10:13.937 12.326 - 12.421: 98.7134% ( 3) 00:10:13.937 12.421 - 12.516: 98.7361% ( 3) 00:10:13.937 12.516 - 12.610: 98.7512% ( 2) 00:10:13.937 12.610 - 12.705: 98.7588% ( 1) 00:10:13.937 12.705 - 12.800: 98.7664% ( 1) 00:10:13.937 12.800 - 12.895: 98.7739% ( 1) 00:10:13.937 12.990 - 13.084: 98.7815% ( 1) 00:10:13.937 13.084 - 13.179: 98.7966% ( 2) 00:10:13.937 13.274 - 13.369: 98.8269% ( 4) 00:10:13.937 13.464 - 13.559: 98.8420% ( 2) 00:10:13.937 13.843 - 13.938: 98.8496% ( 1) 00:10:13.937 13.938 - 14.033: 98.8648% ( 2) 00:10:13.937 14.127 - 14.222: 98.8723% ( 1) 00:10:13.937 14.222 - 14.317: 98.8950% ( 3) 00:10:13.937 14.507 - 14.601: 98.9026% ( 1) 00:10:13.937 14.601 - 14.696: 98.9102% ( 1) 00:10:13.937 14.981 - 15.076: 98.9177% ( 1) 00:10:13.937 15.076 - 15.170: 98.9253% ( 1) 00:10:13.937 15.834 - 15.929: 98.9329% ( 1) 00:10:13.937 17.351 - 17.446: 98.9556% ( 3) 00:10:13.937 17.446 - 17.541: 98.9783% ( 3) 00:10:13.937 17.541 - 17.636: 99.0313% ( 7) 00:10:13.937 17.636 - 17.730: 99.1221% ( 12) 00:10:13.937 17.730 - 17.825: 99.1978% ( 10) 00:10:13.937 17.825 - 17.920: 99.2356% ( 5) 00:10:13.937 17.920 - 18.015: 99.2659% ( 4) 00:10:13.937 18.015 - 18.110: 99.3189% ( 7) 00:10:13.937 18.110 - 18.204: 99.3870% ( 9) 00:10:13.937 18.204 - 18.299: 99.4097% ( 3) 00:10:13.937 18.299 - 18.394: 99.4399% ( 4) 00:10:13.937 18.394 - 18.489: 99.5156% ( 10) 00:10:13.937 18.489 - 18.584: 99.5686% ( 7) 00:10:13.937 18.584 - 18.679: 99.6064% ( 5) 00:10:13.937 18.679 - 18.773: 99.6519% ( 6) 00:10:13.937 18.773 - 18.868: 99.6670% ( 2) 00:10:13.937 18.868 - 18.963: 99.6897% ( 3) 00:10:13.938 18.963 - 19.058: 99.7048% ( 2) 00:10:13.938 19.058 - 19.153: 99.7275% ( 3) 00:10:13.938 19.153 - 19.247: 99.7502% ( 3) 00:10:13.938 19.247 - 19.342: 99.7578% ( 1) 00:10:13.938 19.342 - 19.437: 99.7654% ( 1) 00:10:13.938 19.437 - 19.532: 99.7730% ( 1) 00:10:13.938 19.721 - 19.816: 99.7805% ( 1) 00:10:13.938 20.101 - 20.196: 99.7881% ( 1) 00:10:13.938 20.954 - 21.049: 99.7957% ( 1) 00:10:13.938 22.092 - 22.187: 99.8032% ( 1) 00:10:13.938 22.281 - 22.376: 99.8108% ( 1) 00:10:13.938 22.945 - 23.040: 99.8184% ( 1) 00:10:13.938 24.083 - 24.178: 99.8259% ( 1) 00:10:13.938 24.273 - 24.462: 99.8335% ( 1) 00:10:13.938 24.841 - 25.031: 99.8411% ( 1) 00:10:13.938 25.031 - 25.221: 99.8486% ( 1) 00:10:13.938 25.221 - 25.410: 99.8562% ( 1) 00:10:13.938 25.600 - 25.790: 99.8638% ( 1) 00:10:13.938 28.065 - 28.255: 99.8713% ( 1) 00:10:13.938 28.824 - 29.013: 99.8789% ( 1) 00:10:13.938 29.393 - 29.582: 99.8865% ( 1) 00:10:13.938 3980.705 - 4004.978: 99.9697% ( 11) 00:10:13.938 4004.978 - 4029.250: 100.0000% ( 4) 00:10:13.938 00:10:13.938 Complete histogram 00:10:13.938 ================== 00:10:13.938 Range in us Cumulative Count 00:10:13.938 2.039 - 2.050: 5.8881% ( 778) 00:10:13.938 2.050 - 2.062: 12.5861% ( 885) 00:10:13.938 2.062 - 2.074: 14.5009% ( 253) 00:10:13.938 2.074 - 2.086: 45.2433% ( 4062) 00:10:13.938 2.086 - 2.098: 58.7149% ( 1780) 00:10:13.938 2.098 - 2.110: 61.7044% ( 395) 00:10:13.938 2.110 - 2.121: 65.7610% ( 536) 00:10:13.938 2.121 - 2.133: 66.6086% ( 112) 00:10:13.938 2.133 - 2.145: 
68.9245% ( 306) 00:10:13.938 2.145 - 2.157: 77.7189% ( 1162) 00:10:13.938 2.157 - 2.169: 80.5722% ( 377) 00:10:13.938 2.169 - 2.181: 81.4425% ( 115) 00:10:13.938 2.181 - 2.193: 82.7972% ( 179) 00:10:13.938 2.193 - 2.204: 83.5465% ( 99) 00:10:13.938 2.204 - 2.216: 84.8861% ( 177) 00:10:13.938 2.216 - 2.228: 89.5633% ( 618) 00:10:13.938 2.228 - 2.240: 92.0608% ( 330) 00:10:13.938 2.240 - 2.252: 93.1204% ( 140) 00:10:13.938 2.252 - 2.264: 93.5594% ( 58) 00:10:13.938 2.264 - 2.276: 93.8318% ( 36) 00:10:13.938 2.276 - 2.287: 94.1270% ( 39) 00:10:13.938 2.287 - 2.299: 94.3313% ( 27) 00:10:13.938 2.299 - 2.311: 94.7098% ( 50) 00:10:13.938 2.311 - 2.323: 95.2395% ( 70) 00:10:13.938 2.323 - 2.335: 95.3455% ( 14) 00:10:13.938 2.335 - 2.347: 95.4363% ( 12) 00:10:13.938 2.347 - 2.359: 95.6407% ( 27) 00:10:13.938 2.359 - 2.370: 95.8828% ( 32) 00:10:13.938 2.370 - 2.382: 96.2688% ( 51) 00:10:13.938 2.382 - 2.394: 96.8819% ( 81) 00:10:13.938 2.394 - 2.406: 97.2451% ( 48) 00:10:13.938 2.406 - 2.418: 97.5327% ( 38) 00:10:13.938 2.418 - 2.430: 97.7068% ( 23) 00:10:13.938 2.430 - 2.441: 97.8960% ( 25) 00:10:13.938 2.441 - 2.453: 97.9717% ( 10) 00:10:13.938 2.453 - 2.465: 98.0928% ( 16) 00:10:13.938 2.465 - 2.477: 98.1987% ( 14) 00:10:13.938 2.477 - 2.489: 98.2290% ( 4) 00:10:13.938 [2024-04-16 12:38:12.956196] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:13.938 2.489 - 2.501: 98.2971% ( 9) 00:10:13.938 2.501 - 2.513: 98.3350% ( 5) 00:10:13.938 2.513 - 2.524: 98.3577% ( 3) 00:10:13.938 2.524 - 2.536: 98.3804% ( 3) 00:10:13.938 2.536 - 2.548: 98.4107% ( 4) 00:10:13.938 2.548 - 2.560: 98.4334% ( 3) 00:10:13.938 2.560 - 2.572: 98.4409% ( 1) 00:10:13.938 2.572 - 2.584: 98.4485% ( 1) 00:10:13.938 2.596 - 2.607: 98.4561% ( 1) 00:10:13.938 2.631 - 2.643: 98.4636% ( 1) 00:10:13.938 2.655 - 2.667: 98.4788% ( 2) 00:10:13.938 2.667 - 2.679: 98.4863% ( 1) 00:10:13.938 2.690 - 2.702: 98.4939% ( 1) 00:10:13.938 2.726 - 2.738: 98.5015% ( 1) 00:10:13.938 2.738 - 2.750: 98.5166% ( 2) 00:10:13.938 2.785 - 2.797: 98.5242% ( 1) 00:10:13.938 2.844 - 2.856: 98.5317% ( 1) 00:10:13.938 2.939 - 2.951: 98.5393% ( 1) 00:10:13.938 3.200 - 3.224: 98.5469% ( 1) 00:10:13.938 3.224 - 3.247: 98.5545% ( 1) 00:10:13.938 3.247 - 3.271: 98.5620% ( 1) 00:10:13.938 3.271 - 3.295: 98.5772% ( 2) 00:10:13.938 3.295 - 3.319: 98.5847% ( 1) 00:10:13.938 3.319 - 3.342: 98.5923% ( 1) 00:10:13.938 3.342 - 3.366: 98.5999% ( 1) 00:10:13.938 3.366 - 3.390: 98.6150% ( 2) 00:10:13.938 3.390 - 3.413: 98.6377% ( 3) 00:10:13.938 3.413 - 3.437: 98.6528% ( 2) 00:10:13.938 3.437 - 3.461: 98.6604% ( 1) 00:10:13.938 3.461 - 3.484: 98.6680% ( 1) 00:10:13.938 3.484 - 3.508: 98.6831% ( 2) 00:10:13.938 3.508 - 3.532: 98.6983% ( 2) 00:10:13.938 3.579 - 3.603: 98.7058% ( 1) 00:10:13.938 3.603 - 3.627: 98.7210% ( 2) 00:10:13.938 3.627 - 3.650: 98.7285% ( 1) 00:10:13.938 3.650 - 3.674: 98.7361% ( 1) 00:10:13.938 3.721 - 3.745: 98.7512% ( 2) 00:10:13.938 3.769 - 3.793: 98.7588% ( 1) 00:10:13.938 3.816 - 3.840: 98.7664% ( 1) 00:10:13.938 3.840 - 3.864: 98.7815% ( 2) 00:10:13.938 5.499 - 5.523: 98.7891% ( 1) 00:10:13.938 6.353 - 6.400: 98.7966% ( 1) 00:10:13.938 6.447 - 6.495: 98.8042% ( 1) 00:10:13.938 6.637 - 6.684: 98.8118% ( 1) 00:10:13.938 6.921 - 6.969: 98.8193% ( 1) 00:10:13.938 7.016 - 7.064: 98.8345% ( 2) 00:10:13.938 7.206 - 7.253: 98.8420% ( 1) 00:10:13.938 7.301 - 7.348: 98.8496% ( 1) 00:10:13.938 7.348 - 7.396: 98.8648% ( 2) 00:10:13.938 7.396 - 7.443: 98.8723% ( 1) 00:10:13.938 7.443 - 7.490: 98.8799% ( 
1) 00:10:13.938 7.490 - 7.538: 98.8875% ( 1) 00:10:13.938 7.538 - 7.585: 98.9026% ( 2) 00:10:13.938 8.201 - 8.249: 98.9102% ( 1) 00:10:13.938 10.287 - 10.335: 98.9177% ( 1) 00:10:13.938 10.714 - 10.761: 98.9253% ( 1) 00:10:13.938 15.550 - 15.644: 98.9480% ( 3) 00:10:13.938 15.644 - 15.739: 98.9556% ( 1) 00:10:13.938 15.834 - 15.929: 98.9707% ( 2) 00:10:13.938 15.929 - 16.024: 98.9934% ( 3) 00:10:13.938 16.024 - 16.119: 99.0010% ( 1) 00:10:13.938 16.119 - 16.213: 99.0161% ( 2) 00:10:13.938 16.213 - 16.308: 99.0540% ( 5) 00:10:13.938 16.308 - 16.403: 99.0842% ( 4) 00:10:13.938 16.403 - 16.498: 99.1145% ( 4) 00:10:13.938 16.498 - 16.593: 99.1448% ( 4) 00:10:13.938 16.593 - 16.687: 99.2129% ( 9) 00:10:13.938 16.687 - 16.782: 99.2507% ( 5) 00:10:13.938 16.782 - 16.877: 99.3189% ( 9) 00:10:13.938 16.877 - 16.972: 99.3416% ( 3) 00:10:13.938 16.972 - 17.067: 99.3643% ( 3) 00:10:13.938 17.067 - 17.161: 99.3794% ( 2) 00:10:13.938 17.351 - 17.446: 99.3870% ( 1) 00:10:13.938 17.730 - 17.825: 99.3945% ( 1) 00:10:13.938 17.825 - 17.920: 99.4021% ( 1) 00:10:13.938 18.204 - 18.299: 99.4097% ( 1) 00:10:13.938 18.299 - 18.394: 99.4172% ( 1) 00:10:13.938 18.394 - 18.489: 99.4248% ( 1) 00:10:13.938 18.679 - 18.773: 99.4324% ( 1) 00:10:13.938 20.575 - 20.670: 99.4399% ( 1) 00:10:13.938 22.187 - 22.281: 99.4475% ( 1) 00:10:13.938 22.661 - 22.756: 99.4551% ( 1) 00:10:13.938 23.040 - 23.135: 99.4627% ( 1) 00:10:13.938 3980.705 - 4004.978: 99.8562% ( 52) 00:10:13.938 4004.978 - 4029.250: 100.0000% ( 19) 00:10:13.938 00:10:14.196 12:38:13 -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:10:14.196 12:38:13 -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:10:14.196 12:38:13 -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:10:14.196 12:38:13 -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:10:14.196 12:38:13 -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:10:14.453 [2024-04-16 12:38:13.275949] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:10:14.453 [ 00:10:14.453 { 00:10:14.453 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:14.453 "subtype": "Discovery", 00:10:14.453 "listen_addresses": [], 00:10:14.453 "allow_any_host": true, 00:10:14.453 "hosts": [] 00:10:14.453 }, 00:10:14.453 { 00:10:14.453 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:10:14.453 "subtype": "NVMe", 00:10:14.453 "listen_addresses": [ 00:10:14.453 { 00:10:14.453 "transport": "VFIOUSER", 00:10:14.453 "trtype": "VFIOUSER", 00:10:14.453 "adrfam": "IPv4", 00:10:14.453 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:10:14.453 "trsvcid": "0" 00:10:14.453 } 00:10:14.453 ], 00:10:14.453 "allow_any_host": true, 00:10:14.453 "hosts": [], 00:10:14.453 "serial_number": "SPDK1", 00:10:14.453 "model_number": "SPDK bdev Controller", 00:10:14.453 "max_namespaces": 32, 00:10:14.453 "min_cntlid": 1, 00:10:14.453 "max_cntlid": 65519, 00:10:14.453 "namespaces": [ 00:10:14.453 { 00:10:14.453 "nsid": 1, 00:10:14.453 "bdev_name": "Malloc1", 00:10:14.453 "name": "Malloc1", 00:10:14.453 "nguid": "546F23316BF1433B8D9C7333C741F1C9", 00:10:14.453 "uuid": "546f2331-6bf1-433b-8d9c-7333c741f1c9" 00:10:14.453 } 00:10:14.453 ] 00:10:14.453 }, 00:10:14.453 { 00:10:14.453 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:10:14.453 
"subtype": "NVMe", 00:10:14.453 "listen_addresses": [ 00:10:14.453 { 00:10:14.453 "transport": "VFIOUSER", 00:10:14.453 "trtype": "VFIOUSER", 00:10:14.453 "adrfam": "IPv4", 00:10:14.453 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:10:14.453 "trsvcid": "0" 00:10:14.453 } 00:10:14.453 ], 00:10:14.453 "allow_any_host": true, 00:10:14.453 "hosts": [], 00:10:14.453 "serial_number": "SPDK2", 00:10:14.453 "model_number": "SPDK bdev Controller", 00:10:14.453 "max_namespaces": 32, 00:10:14.453 "min_cntlid": 1, 00:10:14.453 "max_cntlid": 65519, 00:10:14.453 "namespaces": [ 00:10:14.453 { 00:10:14.453 "nsid": 1, 00:10:14.453 "bdev_name": "Malloc2", 00:10:14.454 "name": "Malloc2", 00:10:14.454 "nguid": "DFE3FBFD4A7D42F99740B2971F134720", 00:10:14.454 "uuid": "dfe3fbfd-4a7d-42f9-9740-b2971f134720" 00:10:14.454 } 00:10:14.454 ] 00:10:14.454 } 00:10:14.454 ] 00:10:14.454 12:38:13 -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:10:14.454 12:38:13 -- target/nvmf_vfio_user.sh@34 -- # aerpid=1127915 00:10:14.454 12:38:13 -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:10:14.454 12:38:13 -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:10:14.454 12:38:13 -- common/autotest_common.sh@1251 -- # local i=0 00:10:14.454 12:38:13 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:10:14.454 12:38:13 -- common/autotest_common.sh@1258 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:10:14.454 12:38:13 -- common/autotest_common.sh@1262 -- # return 0 00:10:14.454 12:38:13 -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:10:14.454 12:38:13 -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:10:14.454 EAL: No free 2048 kB hugepages reported on node 1 00:10:14.454 [2024-04-16 12:38:13.466453] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:14.711 Malloc3 00:10:14.711 12:38:13 -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:10:14.968 [2024-04-16 12:38:13.806043] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:14.968 12:38:13 -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:10:14.968 Asynchronous Event Request test 00:10:14.968 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:10:14.968 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:10:14.968 Registering asynchronous event callbacks... 00:10:14.968 Starting namespace attribute notice tests for all controllers... 00:10:14.968 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:10:14.968 aer_cb - Changed Namespace 00:10:14.968 Cleaning up... 
00:10:15.228 [ 00:10:15.228 { 00:10:15.228 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:15.228 "subtype": "Discovery", 00:10:15.228 "listen_addresses": [], 00:10:15.228 "allow_any_host": true, 00:10:15.228 "hosts": [] 00:10:15.228 }, 00:10:15.228 { 00:10:15.228 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:10:15.228 "subtype": "NVMe", 00:10:15.228 "listen_addresses": [ 00:10:15.228 { 00:10:15.228 "transport": "VFIOUSER", 00:10:15.228 "trtype": "VFIOUSER", 00:10:15.228 "adrfam": "IPv4", 00:10:15.228 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:10:15.228 "trsvcid": "0" 00:10:15.228 } 00:10:15.228 ], 00:10:15.228 "allow_any_host": true, 00:10:15.228 "hosts": [], 00:10:15.228 "serial_number": "SPDK1", 00:10:15.228 "model_number": "SPDK bdev Controller", 00:10:15.228 "max_namespaces": 32, 00:10:15.228 "min_cntlid": 1, 00:10:15.228 "max_cntlid": 65519, 00:10:15.228 "namespaces": [ 00:10:15.228 { 00:10:15.228 "nsid": 1, 00:10:15.228 "bdev_name": "Malloc1", 00:10:15.228 "name": "Malloc1", 00:10:15.228 "nguid": "546F23316BF1433B8D9C7333C741F1C9", 00:10:15.228 "uuid": "546f2331-6bf1-433b-8d9c-7333c741f1c9" 00:10:15.228 }, 00:10:15.228 { 00:10:15.228 "nsid": 2, 00:10:15.228 "bdev_name": "Malloc3", 00:10:15.228 "name": "Malloc3", 00:10:15.228 "nguid": "BB8BD17FDB65421CAEDF7A93D55310D0", 00:10:15.228 "uuid": "bb8bd17f-db65-421c-aedf-7a93d55310d0" 00:10:15.228 } 00:10:15.228 ] 00:10:15.228 }, 00:10:15.228 { 00:10:15.228 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:10:15.228 "subtype": "NVMe", 00:10:15.228 "listen_addresses": [ 00:10:15.228 { 00:10:15.228 "transport": "VFIOUSER", 00:10:15.228 "trtype": "VFIOUSER", 00:10:15.228 "adrfam": "IPv4", 00:10:15.228 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:10:15.228 "trsvcid": "0" 00:10:15.228 } 00:10:15.228 ], 00:10:15.228 "allow_any_host": true, 00:10:15.228 "hosts": [], 00:10:15.228 "serial_number": "SPDK2", 00:10:15.228 "model_number": "SPDK bdev Controller", 00:10:15.228 "max_namespaces": 32, 00:10:15.228 "min_cntlid": 1, 00:10:15.228 "max_cntlid": 65519, 00:10:15.228 "namespaces": [ 00:10:15.228 { 00:10:15.228 "nsid": 1, 00:10:15.228 "bdev_name": "Malloc2", 00:10:15.228 "name": "Malloc2", 00:10:15.228 "nguid": "DFE3FBFD4A7D42F99740B2971F134720", 00:10:15.228 "uuid": "dfe3fbfd-4a7d-42f9-9740-b2971f134720" 00:10:15.228 } 00:10:15.228 ] 00:10:15.228 } 00:10:15.228 ] 00:10:15.228 12:38:14 -- target/nvmf_vfio_user.sh@44 -- # wait 1127915 00:10:15.228 12:38:14 -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:10:15.228 12:38:14 -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:10:15.228 12:38:14 -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:10:15.229 12:38:14 -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:10:15.229 [2024-04-16 12:38:14.075029] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 
00:10:15.229 [2024-04-16 12:38:14.075066] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1128051 ] 00:10:15.229 EAL: No free 2048 kB hugepages reported on node 1 00:10:15.229 [2024-04-16 12:38:14.108658] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:10:15.229 [2024-04-16 12:38:14.118610] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:10:15.229 [2024-04-16 12:38:14.118641] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7ff916d14000 00:10:15.229 [2024-04-16 12:38:14.119613] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:15.229 [2024-04-16 12:38:14.120628] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:15.229 [2024-04-16 12:38:14.121623] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:15.229 [2024-04-16 12:38:14.122631] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:10:15.229 [2024-04-16 12:38:14.123635] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:10:15.229 [2024-04-16 12:38:14.124655] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:15.229 [2024-04-16 12:38:14.125669] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:10:15.229 [2024-04-16 12:38:14.126674] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:15.229 [2024-04-16 12:38:14.127685] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:10:15.229 [2024-04-16 12:38:14.127710] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7ff916d09000 00:10:15.229 [2024-04-16 12:38:14.128835] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:10:15.229 [2024-04-16 12:38:14.144245] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:10:15.229 [2024-04-16 12:38:14.144279] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:10:15.229 [2024-04-16 12:38:14.149396] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:10:15.229 [2024-04-16 12:38:14.149446] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:10:15.229 [2024-04-16 12:38:14.149528] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq 
(no timeout) 00:10:15.229 [2024-04-16 12:38:14.149581] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:10:15.229 [2024-04-16 12:38:14.149594] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:10:15.229 [2024-04-16 12:38:14.150403] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:10:15.229 [2024-04-16 12:38:14.150423] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:10:15.229 [2024-04-16 12:38:14.150435] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:10:15.229 [2024-04-16 12:38:14.151406] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:10:15.229 [2024-04-16 12:38:14.151426] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:10:15.229 [2024-04-16 12:38:14.151440] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:10:15.229 [2024-04-16 12:38:14.152409] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:10:15.229 [2024-04-16 12:38:14.152429] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:10:15.229 [2024-04-16 12:38:14.153410] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:10:15.229 [2024-04-16 12:38:14.153429] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:10:15.229 [2024-04-16 12:38:14.153439] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:10:15.229 [2024-04-16 12:38:14.153450] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:10:15.229 [2024-04-16 12:38:14.153560] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:10:15.229 [2024-04-16 12:38:14.153576] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:10:15.229 [2024-04-16 12:38:14.153585] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:10:15.229 [2024-04-16 12:38:14.154420] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:10:15.229 [2024-04-16 12:38:14.155425] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:10:15.229 [2024-04-16 12:38:14.156438] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr 
/var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:10:15.229 [2024-04-16 12:38:14.157436] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:15.229 [2024-04-16 12:38:14.157518] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:10:15.229 [2024-04-16 12:38:14.158456] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:10:15.229 [2024-04-16 12:38:14.158476] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:10:15.229 [2024-04-16 12:38:14.158485] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:10:15.229 [2024-04-16 12:38:14.158513] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:10:15.229 [2024-04-16 12:38:14.158530] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:10:15.229 [2024-04-16 12:38:14.158572] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:10:15.229 [2024-04-16 12:38:14.158585] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:15.229 [2024-04-16 12:38:14.158602] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:15.229 [2024-04-16 12:38:14.166577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:10:15.229 [2024-04-16 12:38:14.166599] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:10:15.229 [2024-04-16 12:38:14.166608] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:10:15.229 [2024-04-16 12:38:14.166631] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:10:15.229 [2024-04-16 12:38:14.166639] nvme_ctrlr.c:2002:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:10:15.229 [2024-04-16 12:38:14.166648] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:10:15.229 [2024-04-16 12:38:14.166656] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:10:15.229 [2024-04-16 12:38:14.166665] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:10:15.229 [2024-04-16 12:38:14.166678] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:10:15.229 [2024-04-16 12:38:14.166694] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:10:15.229 [2024-04-16 12:38:14.174574] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:10:15.229 [2024-04-16 12:38:14.174604] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:15.229 [2024-04-16 12:38:14.174619] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:15.229 [2024-04-16 12:38:14.174631] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:15.229 [2024-04-16 12:38:14.174643] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:15.229 [2024-04-16 12:38:14.174652] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:10:15.229 [2024-04-16 12:38:14.174668] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:10:15.229 [2024-04-16 12:38:14.174682] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:10:15.229 [2024-04-16 12:38:14.182576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:10:15.229 [2024-04-16 12:38:14.182593] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:10:15.229 [2024-04-16 12:38:14.182607] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:10:15.229 [2024-04-16 12:38:14.182623] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:10:15.230 [2024-04-16 12:38:14.182634] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:10:15.230 [2024-04-16 12:38:14.182648] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:10:15.230 [2024-04-16 12:38:14.190576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:10:15.230 [2024-04-16 12:38:14.190636] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:10:15.230 [2024-04-16 12:38:14.190651] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:10:15.230 [2024-04-16 12:38:14.190664] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:10:15.230 [2024-04-16 12:38:14.190673] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:10:15.230 [2024-04-16 12:38:14.190683] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:10:15.230 
[2024-04-16 12:38:14.198576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:10:15.230 [2024-04-16 12:38:14.198598] nvme_ctrlr.c:4557:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:10:15.230 [2024-04-16 12:38:14.198615] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:10:15.230 [2024-04-16 12:38:14.198629] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:10:15.230 [2024-04-16 12:38:14.198642] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:10:15.230 [2024-04-16 12:38:14.198651] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:15.230 [2024-04-16 12:38:14.198660] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:15.230 [2024-04-16 12:38:14.206575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:10:15.230 [2024-04-16 12:38:14.206602] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:10:15.230 [2024-04-16 12:38:14.206618] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:10:15.230 [2024-04-16 12:38:14.206631] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:10:15.230 [2024-04-16 12:38:14.206640] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:15.230 [2024-04-16 12:38:14.206650] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:15.230 [2024-04-16 12:38:14.214575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:10:15.230 [2024-04-16 12:38:14.214597] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:10:15.230 [2024-04-16 12:38:14.214610] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:10:15.230 [2024-04-16 12:38:14.214628] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:10:15.230 [2024-04-16 12:38:14.214639] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:10:15.230 [2024-04-16 12:38:14.214647] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:10:15.230 [2024-04-16 12:38:14.214656] nvme_ctrlr.c:2990:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:10:15.230 [2024-04-16 12:38:14.214664] 
nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:10:15.230 [2024-04-16 12:38:14.214672] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:10:15.230 [2024-04-16 12:38:14.214696] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:10:15.230 [2024-04-16 12:38:14.222587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:10:15.230 [2024-04-16 12:38:14.222615] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:10:15.230 [2024-04-16 12:38:14.230573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:10:15.230 [2024-04-16 12:38:14.230598] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:10:15.230 [2024-04-16 12:38:14.238576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:10:15.230 [2024-04-16 12:38:14.238600] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:10:15.230 [2024-04-16 12:38:14.246572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:10:15.230 [2024-04-16 12:38:14.246599] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:10:15.230 [2024-04-16 12:38:14.246609] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:10:15.230 [2024-04-16 12:38:14.246616] nvme_pcie_common.c:1235:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:10:15.230 [2024-04-16 12:38:14.246622] nvme_pcie_common.c:1251:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:10:15.230 [2024-04-16 12:38:14.246631] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:10:15.230 [2024-04-16 12:38:14.246643] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:10:15.230 [2024-04-16 12:38:14.246652] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:10:15.230 [2024-04-16 12:38:14.246661] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:10:15.230 [2024-04-16 12:38:14.246672] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:10:15.230 [2024-04-16 12:38:14.246680] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:15.230 [2024-04-16 12:38:14.246689] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:15.230 [2024-04-16 12:38:14.246701] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:10:15.230 [2024-04-16 12:38:14.246709] 
nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:10:15.230 [2024-04-16 12:38:14.246725] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:10:15.230 [2024-04-16 12:38:14.254576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:10:15.230 [2024-04-16 12:38:14.254604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:10:15.230 [2024-04-16 12:38:14.254620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:10:15.230 [2024-04-16 12:38:14.254632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:10:15.230 ===================================================== 00:10:15.230 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:10:15.230 ===================================================== 00:10:15.230 Controller Capabilities/Features 00:10:15.230 ================================ 00:10:15.230 Vendor ID: 4e58 00:10:15.230 Subsystem Vendor ID: 4e58 00:10:15.230 Serial Number: SPDK2 00:10:15.230 Model Number: SPDK bdev Controller 00:10:15.230 Firmware Version: 24.05 00:10:15.230 Recommended Arb Burst: 6 00:10:15.230 IEEE OUI Identifier: 8d 6b 50 00:10:15.230 Multi-path I/O 00:10:15.230 May have multiple subsystem ports: Yes 00:10:15.230 May have multiple controllers: Yes 00:10:15.230 Associated with SR-IOV VF: No 00:10:15.230 Max Data Transfer Size: 131072 00:10:15.230 Max Number of Namespaces: 32 00:10:15.230 Max Number of I/O Queues: 127 00:10:15.230 NVMe Specification Version (VS): 1.3 00:10:15.230 NVMe Specification Version (Identify): 1.3 00:10:15.230 Maximum Queue Entries: 256 00:10:15.230 Contiguous Queues Required: Yes 00:10:15.230 Arbitration Mechanisms Supported 00:10:15.230 Weighted Round Robin: Not Supported 00:10:15.230 Vendor Specific: Not Supported 00:10:15.230 Reset Timeout: 15000 ms 00:10:15.230 Doorbell Stride: 4 bytes 00:10:15.230 NVM Subsystem Reset: Not Supported 00:10:15.230 Command Sets Supported 00:10:15.230 NVM Command Set: Supported 00:10:15.230 Boot Partition: Not Supported 00:10:15.230 Memory Page Size Minimum: 4096 bytes 00:10:15.230 Memory Page Size Maximum: 4096 bytes 00:10:15.230 Persistent Memory Region: Not Supported 00:10:15.230 Optional Asynchronous Events Supported 00:10:15.230 Namespace Attribute Notices: Supported 00:10:15.230 Firmware Activation Notices: Not Supported 00:10:15.230 ANA Change Notices: Not Supported 00:10:15.230 PLE Aggregate Log Change Notices: Not Supported 00:10:15.230 LBA Status Info Alert Notices: Not Supported 00:10:15.230 EGE Aggregate Log Change Notices: Not Supported 00:10:15.230 Normal NVM Subsystem Shutdown event: Not Supported 00:10:15.230 Zone Descriptor Change Notices: Not Supported 00:10:15.230 Discovery Log Change Notices: Not Supported 00:10:15.230 Controller Attributes 00:10:15.230 128-bit Host Identifier: Supported 00:10:15.230 Non-Operational Permissive Mode: Not Supported 00:10:15.230 NVM Sets: Not Supported 00:10:15.230 Read Recovery Levels: Not Supported 00:10:15.230 Endurance Groups: Not Supported 00:10:15.230 Predictable Latency Mode: Not Supported 00:10:15.230 Traffic Based Keep ALive: Not Supported 00:10:15.230 Namespace Granularity: Not Supported 
00:10:15.230 SQ Associations: Not Supported 00:10:15.231 UUID List: Not Supported 00:10:15.231 Multi-Domain Subsystem: Not Supported 00:10:15.231 Fixed Capacity Management: Not Supported 00:10:15.231 Variable Capacity Management: Not Supported 00:10:15.231 Delete Endurance Group: Not Supported 00:10:15.231 Delete NVM Set: Not Supported 00:10:15.231 Extended LBA Formats Supported: Not Supported 00:10:15.231 Flexible Data Placement Supported: Not Supported 00:10:15.231 00:10:15.231 Controller Memory Buffer Support 00:10:15.231 ================================ 00:10:15.231 Supported: No 00:10:15.231 00:10:15.231 Persistent Memory Region Support 00:10:15.231 ================================ 00:10:15.231 Supported: No 00:10:15.231 00:10:15.231 Admin Command Set Attributes 00:10:15.231 ============================ 00:10:15.231 Security Send/Receive: Not Supported 00:10:15.231 Format NVM: Not Supported 00:10:15.231 Firmware Activate/Download: Not Supported 00:10:15.231 Namespace Management: Not Supported 00:10:15.231 Device Self-Test: Not Supported 00:10:15.231 Directives: Not Supported 00:10:15.231 NVMe-MI: Not Supported 00:10:15.231 Virtualization Management: Not Supported 00:10:15.231 Doorbell Buffer Config: Not Supported 00:10:15.231 Get LBA Status Capability: Not Supported 00:10:15.231 Command & Feature Lockdown Capability: Not Supported 00:10:15.231 Abort Command Limit: 4 00:10:15.231 Async Event Request Limit: 4 00:10:15.231 Number of Firmware Slots: N/A 00:10:15.231 Firmware Slot 1 Read-Only: N/A 00:10:15.231 Firmware Activation Without Reset: N/A 00:10:15.231 Multiple Update Detection Support: N/A 00:10:15.231 Firmware Update Granularity: No Information Provided 00:10:15.231 Per-Namespace SMART Log: No 00:10:15.231 Asymmetric Namespace Access Log Page: Not Supported 00:10:15.231 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:10:15.231 Command Effects Log Page: Supported 00:10:15.231 Get Log Page Extended Data: Supported 00:10:15.231 Telemetry Log Pages: Not Supported 00:10:15.231 Persistent Event Log Pages: Not Supported 00:10:15.231 Supported Log Pages Log Page: May Support 00:10:15.231 Commands Supported & Effects Log Page: Not Supported 00:10:15.231 Feature Identifiers & Effects Log Page:May Support 00:10:15.231 NVMe-MI Commands & Effects Log Page: May Support 00:10:15.231 Data Area 4 for Telemetry Log: Not Supported 00:10:15.231 Error Log Page Entries Supported: 128 00:10:15.231 Keep Alive: Supported 00:10:15.231 Keep Alive Granularity: 10000 ms 00:10:15.231 00:10:15.231 NVM Command Set Attributes 00:10:15.231 ========================== 00:10:15.231 Submission Queue Entry Size 00:10:15.231 Max: 64 00:10:15.231 Min: 64 00:10:15.231 Completion Queue Entry Size 00:10:15.231 Max: 16 00:10:15.231 Min: 16 00:10:15.231 Number of Namespaces: 32 00:10:15.231 Compare Command: Supported 00:10:15.231 Write Uncorrectable Command: Not Supported 00:10:15.231 Dataset Management Command: Supported 00:10:15.231 Write Zeroes Command: Supported 00:10:15.231 Set Features Save Field: Not Supported 00:10:15.231 Reservations: Not Supported 00:10:15.231 Timestamp: Not Supported 00:10:15.231 Copy: Supported 00:10:15.231 Volatile Write Cache: Present 00:10:15.231 Atomic Write Unit (Normal): 1 00:10:15.231 Atomic Write Unit (PFail): 1 00:10:15.231 Atomic Compare & Write Unit: 1 00:10:15.231 Fused Compare & Write: Supported 00:10:15.231 Scatter-Gather List 00:10:15.231 SGL Command Set: Supported (Dword aligned) 00:10:15.231 SGL Keyed: Not Supported 00:10:15.231 SGL Bit Bucket Descriptor: Not Supported 00:10:15.231 
SGL Metadata Pointer: Not Supported 00:10:15.231 Oversized SGL: Not Supported 00:10:15.231 SGL Metadata Address: Not Supported 00:10:15.231 SGL Offset: Not Supported 00:10:15.231 Transport SGL Data Block: Not Supported 00:10:15.231 Replay Protected Memory Block: Not Supported 00:10:15.231 00:10:15.231 Firmware Slot Information 00:10:15.231 ========================= 00:10:15.231 Active slot: 1 00:10:15.231 Slot 1 Firmware Revision: 24.05 00:10:15.231 00:10:15.231 00:10:15.231 Commands Supported and Effects 00:10:15.231 ============================== 00:10:15.231 Admin Commands 00:10:15.231 -------------- 00:10:15.231 Get Log Page (02h): Supported 00:10:15.231 Identify (06h): Supported 00:10:15.231 Abort (08h): Supported 00:10:15.231 Set Features (09h): Supported 00:10:15.231 Get Features (0Ah): Supported 00:10:15.231 Asynchronous Event Request (0Ch): Supported 00:10:15.231 Keep Alive (18h): Supported 00:10:15.231 I/O Commands 00:10:15.231 ------------ 00:10:15.231 Flush (00h): Supported LBA-Change 00:10:15.231 Write (01h): Supported LBA-Change 00:10:15.231 Read (02h): Supported 00:10:15.231 Compare (05h): Supported 00:10:15.231 Write Zeroes (08h): Supported LBA-Change 00:10:15.231 Dataset Management (09h): Supported LBA-Change 00:10:15.231 Copy (19h): Supported LBA-Change 00:10:15.231 Unknown (79h): Supported LBA-Change 00:10:15.231 Unknown (7Ah): Supported 00:10:15.231 00:10:15.231 Error Log 00:10:15.231 ========= 00:10:15.231 00:10:15.231 Arbitration 00:10:15.231 =========== 00:10:15.231 Arbitration Burst: 1 00:10:15.231 00:10:15.231 Power Management 00:10:15.231 ================ 00:10:15.231 Number of Power States: 1 00:10:15.231 Current Power State: Power State #0 00:10:15.231 Power State #0: 00:10:15.231 Max Power: 0.00 W 00:10:15.231 Non-Operational State: Operational 00:10:15.231 Entry Latency: Not Reported 00:10:15.231 Exit Latency: Not Reported 00:10:15.231 Relative Read Throughput: 0 00:10:15.231 Relative Read Latency: 0 00:10:15.231 Relative Write Throughput: 0 00:10:15.231 Relative Write Latency: 0 00:10:15.231 Idle Power: Not Reported 00:10:15.231 Active Power: Not Reported 00:10:15.231 Non-Operational Permissive Mode: Not Supported 00:10:15.231 00:10:15.231 Health Information 00:10:15.231 ================== 00:10:15.231 Critical Warnings: 00:10:15.231 Available Spare Space: OK 00:10:15.231 Temperature: OK 00:10:15.231 Device Reliability: OK 00:10:15.231 Read Only: No 00:10:15.231 Volatile Memory Backup: OK 00:10:15.231 [2024-04-16 12:38:14.254763] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:10:15.231 [2024-04-16 12:38:14.262575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:10:15.231 [2024-04-16 12:38:14.262621] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:10:15.231 [2024-04-16 12:38:14.262637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:15.231 [2024-04-16 12:38:14.262648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:15.231 [2024-04-16 12:38:14.262658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:15.231 [2024-04-16 12:38:14.262668]
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:15.231 [2024-04-16 12:38:14.262749] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:10:15.231 [2024-04-16 12:38:14.262770] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:10:15.231 [2024-04-16 12:38:14.263755] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:15.231 [2024-04-16 12:38:14.263840] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:10:15.231 [2024-04-16 12:38:14.263869] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:10:15.231 [2024-04-16 12:38:14.264765] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:10:15.231 [2024-04-16 12:38:14.264790] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:10:15.231 [2024-04-16 12:38:14.264843] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:10:15.231 [2024-04-16 12:38:14.266071] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:10:15.489 Current Temperature: 0 Kelvin (-273 Celsius) 00:10:15.489 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:10:15.489 Available Spare: 0% 00:10:15.489 Available Spare Threshold: 0% 00:10:15.489 Life Percentage Used: 0% 00:10:15.489 Data Units Read: 0 00:10:15.489 Data Units Written: 0 00:10:15.489 Host Read Commands: 0 00:10:15.489 Host Write Commands: 0 00:10:15.489 Controller Busy Time: 0 minutes 00:10:15.489 Power Cycles: 0 00:10:15.489 Power On Hours: 0 hours 00:10:15.489 Unsafe Shutdowns: 0 00:10:15.489 Unrecoverable Media Errors: 0 00:10:15.489 Lifetime Error Log Entries: 0 00:10:15.489 Warning Temperature Time: 0 minutes 00:10:15.489 Critical Temperature Time: 0 minutes 00:10:15.489 00:10:15.489 Number of Queues 00:10:15.489 ================ 00:10:15.489 Number of I/O Submission Queues: 127 00:10:15.489 Number of I/O Completion Queues: 127 00:10:15.489 00:10:15.489 Active Namespaces 00:10:15.489 ================= 00:10:15.489 Namespace ID:1 00:10:15.489 Error Recovery Timeout: Unlimited 00:10:15.489 Command Set Identifier: NVM (00h) 00:10:15.489 Deallocate: Supported 00:10:15.489 Deallocated/Unwritten Error: Not Supported 00:10:15.489 Deallocated Read Value: Unknown 00:10:15.489 Deallocate in Write Zeroes: Not Supported 00:10:15.489 Deallocated Guard Field: 0xFFFF 00:10:15.489 Flush: Supported 00:10:15.489 Reservation: Supported 00:10:15.489 Namespace Sharing Capabilities: Multiple Controllers 00:10:15.490 Size (in LBAs): 131072 (0GiB) 00:10:15.490 Capacity (in LBAs): 131072 (0GiB) 00:10:15.490 Utilization (in LBAs): 131072 (0GiB) 00:10:15.490 NGUID: DFE3FBFD4A7D42F99740B2971F134720 00:10:15.490 UUID: dfe3fbfd-4a7d-42f9-9740-b2971f134720 00:10:15.490 Thin Provisioning: Not Supported 00:10:15.490 Per-NS Atomic Units: Yes 00:10:15.490 Atomic Boundary Size (Normal): 0 00:10:15.490 Atomic Boundary Size (PFail): 0 00:10:15.490 Atomic Boundary Offset: 0 00:10:15.490 Maximum Single Source Range Length: 65535 
00:10:15.490 Maximum Copy Length: 65535 00:10:15.490 Maximum Source Range Count: 1 00:10:15.490 NGUID/EUI64 Never Reused: No 00:10:15.490 Namespace Write Protected: No 00:10:15.490 Number of LBA Formats: 1 00:10:15.490 Current LBA Format: LBA Format #00 00:10:15.490 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:15.490 00:10:15.490 12:38:14 -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:10:15.490 EAL: No free 2048 kB hugepages reported on node 1 00:10:15.490 [2024-04-16 12:38:14.493359] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:20.754 [2024-04-16 12:38:19.597937] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:20.754 Initializing NVMe Controllers 00:10:20.754 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:10:20.754 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:10:20.754 Initialization complete. Launching workers. 00:10:20.754 ======================================================== 00:10:20.754 Latency(us) 00:10:20.754 Device Information : IOPS MiB/s Average min max 00:10:20.754 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 33517.11 130.93 3818.28 1201.33 7521.62 00:10:20.754 ======================================================== 00:10:20.754 Total : 33517.11 130.93 3818.28 1201.33 7521.62 00:10:20.754 00:10:20.754 12:38:19 -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:10:20.754 EAL: No free 2048 kB hugepages reported on node 1 00:10:21.012 [2024-04-16 12:38:19.840616] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:26.279 [2024-04-16 12:38:24.859250] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:26.279 Initializing NVMe Controllers 00:10:26.279 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:10:26.279 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:10:26.279 Initialization complete. Launching workers. 
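For reference, the read pass above and the write pass that completes just below share a single spdk_nvme_perf invocation pattern; only the workload (-w) differs. A minimal sketch, with every flag copied from this log ($SPDK_DIR and $TR are editorial shorthand, not names used by the harness):

  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  TR='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'
  # -q 128: queue depth; -o 4096: 4 KiB I/O; -t 5: seconds; -c 0x2: core mask (core 1 only);
  # -s 256 and -g are passed exactly as the harness passes them
  $SPDK_DIR/build/bin/spdk_nvme_perf -r "$TR" -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2
  $SPDK_DIR/build/bin/spdk_nvme_perf -r "$TR" -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2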
00:10:26.279 ======================================================== 00:10:26.279 Latency(us) 00:10:26.279 Device Information : IOPS MiB/s Average min max 00:10:26.279 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 31905.94 124.63 4011.15 1217.34 9528.88 00:10:26.279 ======================================================== 00:10:26.279 Total : 31905.94 124.63 4011.15 1217.34 9528.88 00:10:26.279 00:10:26.279 12:38:24 -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:10:26.279 EAL: No free 2048 kB hugepages reported on node 1 00:10:26.279 [2024-04-16 12:38:25.082189] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:31.542 [2024-04-16 12:38:30.231703] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:31.542 Initializing NVMe Controllers 00:10:31.542 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:10:31.542 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:10:31.542 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:10:31.542 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:10:31.542 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:10:31.542 Initialization complete. Launching workers. 00:10:31.542 Starting thread on core 2 00:10:31.542 Starting thread on core 3 00:10:31.542 Starting thread on core 1 00:10:31.542 12:38:30 -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:10:31.542 EAL: No free 2048 kB hugepages reported on node 1 00:10:31.542 [2024-04-16 12:38:30.542891] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:34.828 [2024-04-16 12:38:33.606371] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:34.828 Initializing NVMe Controllers 00:10:34.828 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:10:34.828 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:10:34.828 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:10:34.828 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:10:34.828 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:10:34.828 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:10:34.828 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:10:34.828 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:10:34.828 Initialization complete. Launching workers. 
00:10:34.828 Starting thread on core 1 with urgent priority queue 00:10:34.828 Starting thread on core 2 with urgent priority queue 00:10:34.828 Starting thread on core 3 with urgent priority queue 00:10:34.828 Starting thread on core 0 with urgent priority queue 00:10:34.828 SPDK bdev Controller (SPDK2 ) core 0: 4207.67 IO/s 23.77 secs/100000 ios 00:10:34.829 SPDK bdev Controller (SPDK2 ) core 1: 4311.67 IO/s 23.19 secs/100000 ios 00:10:34.829 SPDK bdev Controller (SPDK2 ) core 2: 3111.33 IO/s 32.14 secs/100000 ios 00:10:34.829 SPDK bdev Controller (SPDK2 ) core 3: 3989.00 IO/s 25.07 secs/100000 ios 00:10:34.829 ======================================================== 00:10:34.829 00:10:34.829 12:38:33 -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:10:34.829 EAL: No free 2048 kB hugepages reported on node 1 00:10:35.087 [2024-04-16 12:38:33.923312] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:35.087 [2024-04-16 12:38:33.933386] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:35.087 Initializing NVMe Controllers 00:10:35.087 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:10:35.087 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:10:35.087 Namespace ID: 1 size: 0GB 00:10:35.087 Initialization complete. 00:10:35.087 INFO: using host memory buffer for IO 00:10:35.087 Hello world! 00:10:35.087 12:38:33 -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:10:35.087 EAL: No free 2048 kB hugepages reported on node 1 00:10:35.345 [2024-04-16 12:38:34.243895] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:36.277 Initializing NVMe Controllers 00:10:36.277 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:10:36.277 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:10:36.277 Initialization complete. Launching workers. 
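The reconnect, arbitration, and hello_world example runs above, and the overhead run just launched, all target the same vfio-user endpoint; the arbitration banner shows the short command line expanding to its full configuration (-q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1), and the overhead run produces the submit/complete latency histograms below. A sketch of the four invocations, flags copied verbatim from this log ($SPDK_DIR and $TR as in the earlier note, editorial shorthand only):

  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  TR='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'
  $SPDK_DIR/build/examples/reconnect -r "$TR" -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE   # mask 0xE: cores 1-3
  $SPDK_DIR/build/examples/arbitration -t 3 -r "$TR" -d 256 -g    # urgent-priority queue per core, per the banner above
  $SPDK_DIR/build/examples/hello_world -d 256 -g -r "$TR"
  $SPDK_DIR/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r "$TR"   # this run prints the histograms below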
00:10:36.277 submit (in ns) avg, min, max = 7159.0, 3460.0, 4019080.0 00:10:36.277 complete (in ns) avg, min, max = 24702.4, 2040.0, 4016133.3 00:10:36.277 00:10:36.277 Submit histogram 00:10:36.277 ================ 00:10:36.277 Range in us Cumulative Count 00:10:36.277 3.437 - 3.461: 0.0074% ( 1) 00:10:36.277 3.461 - 3.484: 0.0444% ( 5) 00:10:36.277 3.484 - 3.508: 0.4363% ( 53) 00:10:36.277 3.508 - 3.532: 2.0410% ( 217) 00:10:36.277 3.532 - 3.556: 4.7770% ( 370) 00:10:36.277 3.556 - 3.579: 10.5746% ( 784) 00:10:36.277 3.579 - 3.603: 18.0877% ( 1016) 00:10:36.277 3.603 - 3.627: 28.4035% ( 1395) 00:10:36.277 3.627 - 3.650: 39.0002% ( 1433) 00:10:36.277 3.650 - 3.674: 48.1106% ( 1232) 00:10:36.277 3.674 - 3.698: 54.8251% ( 908) 00:10:36.277 3.698 - 3.721: 60.0902% ( 712) 00:10:36.277 3.721 - 3.745: 64.7120% ( 625) 00:10:36.277 3.745 - 3.769: 68.4464% ( 505) 00:10:36.277 3.769 - 3.793: 71.8258% ( 457) 00:10:36.277 3.793 - 3.816: 74.6062% ( 376) 00:10:36.277 3.816 - 3.840: 77.6085% ( 406) 00:10:36.277 3.840 - 3.864: 81.3651% ( 508) 00:10:36.277 3.864 - 3.887: 84.7889% ( 463) 00:10:36.277 3.887 - 3.911: 87.2218% ( 329) 00:10:36.277 3.911 - 3.935: 89.0631% ( 249) 00:10:36.277 3.935 - 3.959: 90.6160% ( 210) 00:10:36.277 3.959 - 3.982: 91.9766% ( 184) 00:10:36.277 3.982 - 4.006: 93.1080% ( 153) 00:10:36.277 4.006 - 4.030: 93.7218% ( 83) 00:10:36.277 4.030 - 4.053: 94.1359% ( 56) 00:10:36.277 4.053 - 4.077: 94.6092% ( 64) 00:10:36.277 4.077 - 4.101: 95.0677% ( 62) 00:10:36.277 4.101 - 4.124: 95.4374% ( 50) 00:10:36.277 4.124 - 4.148: 95.8885% ( 61) 00:10:36.277 4.148 - 4.172: 96.1917% ( 41) 00:10:36.277 4.172 - 4.196: 96.4061% ( 29) 00:10:36.277 4.196 - 4.219: 96.5984% ( 26) 00:10:36.277 4.219 - 4.243: 96.6945% ( 13) 00:10:36.277 4.243 - 4.267: 96.7833% ( 12) 00:10:36.277 4.267 - 4.290: 96.8720% ( 12) 00:10:36.277 4.290 - 4.314: 96.9681% ( 13) 00:10:36.277 4.314 - 4.338: 97.0938% ( 17) 00:10:36.277 4.338 - 4.361: 97.2269% ( 18) 00:10:36.277 4.361 - 4.385: 97.2935% ( 9) 00:10:36.277 4.385 - 4.409: 97.3379% ( 6) 00:10:36.277 4.409 - 4.433: 97.3748% ( 5) 00:10:36.277 4.433 - 4.456: 97.4044% ( 4) 00:10:36.277 4.456 - 4.480: 97.4266% ( 3) 00:10:36.277 4.480 - 4.504: 97.4562% ( 4) 00:10:36.277 4.504 - 4.527: 97.4710% ( 2) 00:10:36.277 4.527 - 4.551: 97.4932% ( 3) 00:10:36.277 4.551 - 4.575: 97.5153% ( 3) 00:10:36.277 4.693 - 4.717: 97.5227% ( 1) 00:10:36.277 4.717 - 4.741: 97.5375% ( 2) 00:10:36.277 4.764 - 4.788: 97.5597% ( 3) 00:10:36.277 4.788 - 4.812: 97.5819% ( 3) 00:10:36.277 4.812 - 4.836: 97.6115% ( 4) 00:10:36.277 4.836 - 4.859: 97.6485% ( 5) 00:10:36.277 4.859 - 4.883: 97.6780% ( 4) 00:10:36.277 4.883 - 4.907: 97.6854% ( 1) 00:10:36.277 4.907 - 4.930: 97.7520% ( 9) 00:10:36.277 4.930 - 4.954: 97.8185% ( 9) 00:10:36.277 4.954 - 4.978: 97.8481% ( 4) 00:10:36.277 4.978 - 5.001: 97.8703% ( 3) 00:10:36.277 5.001 - 5.025: 97.9368% ( 9) 00:10:36.277 5.025 - 5.049: 97.9590% ( 3) 00:10:36.277 5.049 - 5.073: 98.0034% ( 6) 00:10:36.277 5.073 - 5.096: 98.0404% ( 5) 00:10:36.277 5.096 - 5.120: 98.1069% ( 9) 00:10:36.277 5.120 - 5.144: 98.1439% ( 5) 00:10:36.277 5.144 - 5.167: 98.1513% ( 1) 00:10:36.277 5.167 - 5.191: 98.1883% ( 5) 00:10:36.277 5.191 - 5.215: 98.2252% ( 5) 00:10:36.277 5.239 - 5.262: 98.2770% ( 7) 00:10:36.277 5.262 - 5.286: 98.2992% ( 3) 00:10:36.277 5.286 - 5.310: 98.3066% ( 1) 00:10:36.277 5.333 - 5.357: 98.3214% ( 2) 00:10:36.277 5.381 - 5.404: 98.3288% ( 1) 00:10:36.277 5.428 - 5.452: 98.3362% ( 1) 00:10:36.277 5.499 - 5.523: 98.3436% ( 1) 00:10:36.277 5.665 - 5.689: 98.3510% ( 1) 
00:10:36.277 5.689 - 5.713: 98.3584% ( 1) 00:10:36.277 5.713 - 5.736: 98.3657% ( 1) 00:10:36.277 5.760 - 5.784: 98.3731% ( 1) 00:10:36.277 5.926 - 5.950: 98.3805% ( 1) 00:10:36.277 5.973 - 5.997: 98.3879% ( 1) 00:10:36.277 6.021 - 6.044: 98.3953% ( 1) 00:10:36.277 6.068 - 6.116: 98.4027% ( 1) 00:10:36.277 6.210 - 6.258: 98.4175% ( 2) 00:10:36.277 6.353 - 6.400: 98.4249% ( 1) 00:10:36.277 6.400 - 6.447: 98.4323% ( 1) 00:10:36.277 6.447 - 6.495: 98.4397% ( 1) 00:10:36.277 6.542 - 6.590: 98.4471% ( 1) 00:10:36.277 6.590 - 6.637: 98.4545% ( 1) 00:10:36.277 7.064 - 7.111: 98.4619% ( 1) 00:10:36.277 7.253 - 7.301: 98.4693% ( 1) 00:10:36.277 7.348 - 7.396: 98.4767% ( 1) 00:10:36.277 7.396 - 7.443: 98.4915% ( 2) 00:10:36.277 7.585 - 7.633: 98.4989% ( 1) 00:10:36.277 7.633 - 7.680: 98.5136% ( 2) 00:10:36.277 7.680 - 7.727: 98.5210% ( 1) 00:10:36.277 7.822 - 7.870: 98.5358% ( 2) 00:10:36.277 7.917 - 7.964: 98.5432% ( 1) 00:10:36.277 7.964 - 8.012: 98.5580% ( 2) 00:10:36.277 8.059 - 8.107: 98.5654% ( 1) 00:10:36.277 8.107 - 8.154: 98.5876% ( 3) 00:10:36.277 8.201 - 8.249: 98.6098% ( 3) 00:10:36.277 8.344 - 8.391: 98.6172% ( 1) 00:10:36.277 8.391 - 8.439: 98.6320% ( 2) 00:10:36.277 8.439 - 8.486: 98.6394% ( 1) 00:10:36.277 8.533 - 8.581: 98.6541% ( 2) 00:10:36.277 8.676 - 8.723: 98.6615% ( 1) 00:10:36.277 8.723 - 8.770: 98.6689% ( 1) 00:10:36.277 8.818 - 8.865: 98.6763% ( 1) 00:10:36.277 8.865 - 8.913: 98.6911% ( 2) 00:10:36.277 8.913 - 8.960: 98.6985% ( 1) 00:10:36.277 8.960 - 9.007: 98.7207% ( 3) 00:10:36.277 9.007 - 9.055: 98.7355% ( 2) 00:10:36.277 9.055 - 9.102: 98.7429% ( 1) 00:10:36.277 9.150 - 9.197: 98.7651% ( 3) 00:10:36.277 9.197 - 9.244: 98.7799% ( 2) 00:10:36.277 9.339 - 9.387: 98.7873% ( 1) 00:10:36.277 9.387 - 9.434: 98.7946% ( 1) 00:10:36.277 9.481 - 9.529: 98.8020% ( 1) 00:10:36.277 9.529 - 9.576: 98.8168% ( 2) 00:10:36.278 9.576 - 9.624: 98.8316% ( 2) 00:10:36.278 9.624 - 9.671: 98.8390% ( 1) 00:10:36.278 9.766 - 9.813: 98.8464% ( 1) 00:10:36.278 9.813 - 9.861: 98.8538% ( 1) 00:10:36.278 10.050 - 10.098: 98.8612% ( 1) 00:10:36.278 10.098 - 10.145: 98.8686% ( 1) 00:10:36.278 10.430 - 10.477: 98.8760% ( 1) 00:10:36.278 10.524 - 10.572: 98.8834% ( 1) 00:10:36.278 10.619 - 10.667: 98.8908% ( 1) 00:10:36.278 10.856 - 10.904: 98.8982% ( 1) 00:10:36.278 10.951 - 10.999: 98.9056% ( 1) 00:10:36.278 10.999 - 11.046: 98.9130% ( 1) 00:10:36.278 11.141 - 11.188: 98.9204% ( 1) 00:10:36.278 11.330 - 11.378: 98.9278% ( 1) 00:10:36.278 11.378 - 11.425: 98.9351% ( 1) 00:10:36.278 11.520 - 11.567: 98.9425% ( 1) 00:10:36.278 11.567 - 11.615: 98.9499% ( 1) 00:10:36.278 11.710 - 11.757: 98.9647% ( 2) 00:10:36.278 11.994 - 12.041: 98.9795% ( 2) 00:10:36.278 12.516 - 12.610: 98.9869% ( 1) 00:10:36.278 13.084 - 13.179: 99.0017% ( 2) 00:10:36.278 13.274 - 13.369: 99.0091% ( 1) 00:10:36.278 13.369 - 13.464: 99.0165% ( 1) 00:10:36.278 13.559 - 13.653: 99.0239% ( 1) 00:10:36.278 13.653 - 13.748: 99.0387% ( 2) 00:10:36.278 13.748 - 13.843: 99.0535% ( 2) 00:10:36.278 14.222 - 14.317: 99.0609% ( 1) 00:10:36.278 14.317 - 14.412: 99.0756% ( 2) 00:10:36.278 14.696 - 14.791: 99.0830% ( 1) 00:10:36.278 16.308 - 16.403: 99.0904% ( 1) 00:10:36.278 17.067 - 17.161: 99.0978% ( 1) 00:10:36.278 17.161 - 17.256: 99.1126% ( 2) 00:10:36.278 17.256 - 17.351: 99.1200% ( 1) 00:10:36.278 17.351 - 17.446: 99.1348% ( 2) 00:10:36.278 17.446 - 17.541: 99.1644% ( 4) 00:10:36.278 17.541 - 17.636: 99.1792% ( 2) 00:10:36.278 17.636 - 17.730: 99.2457% ( 9) 00:10:36.278 17.730 - 17.825: 99.3197% ( 10) 00:10:36.278 17.825 - 17.920: 99.3714% ( 
7) 00:10:36.278 17.920 - 18.015: 99.4158% ( 6) 00:10:36.278 18.015 - 18.110: 99.4380% ( 3) 00:10:36.278 18.110 - 18.204: 99.4898% ( 7) 00:10:36.278 18.204 - 18.299: 99.5267% ( 5) 00:10:36.278 18.299 - 18.394: 99.5489% ( 3) 00:10:36.278 18.394 - 18.489: 99.6155% ( 9) 00:10:36.278 18.489 - 18.584: 99.6598% ( 6) 00:10:36.278 18.584 - 18.679: 99.6820% ( 3) 00:10:36.278 18.679 - 18.773: 99.6968% ( 2) 00:10:36.278 18.773 - 18.868: 99.7190% ( 3) 00:10:36.278 18.868 - 18.963: 99.7486% ( 4) 00:10:36.278 18.963 - 19.058: 99.7634% ( 2) 00:10:36.278 19.058 - 19.153: 99.7708% ( 1) 00:10:36.278 19.153 - 19.247: 99.7929% ( 3) 00:10:36.278 19.342 - 19.437: 99.8151% ( 3) 00:10:36.278 19.816 - 19.911: 99.8299% ( 2) 00:10:36.278 20.385 - 20.480: 99.8373% ( 1) 00:10:36.278 20.575 - 20.670: 99.8447% ( 1) 00:10:36.278 22.092 - 22.187: 99.8521% ( 1) 00:10:36.278 22.187 - 22.281: 99.8595% ( 1) 00:10:36.278 22.945 - 23.040: 99.8669% ( 1) 00:10:36.278 23.988 - 24.083: 99.8743% ( 1) 00:10:36.278 25.031 - 25.221: 99.8817% ( 1) 00:10:36.278 25.221 - 25.410: 99.8891% ( 1) 00:10:36.278 27.686 - 27.876: 99.8965% ( 1) 00:10:36.278 28.255 - 28.444: 99.9039% ( 1) 00:10:36.278 29.013 - 29.203: 99.9113% ( 1) 00:10:36.278 29.961 - 30.151: 99.9187% ( 1) 00:10:36.278 3980.705 - 4004.978: 99.9630% ( 6) 00:10:36.278 4004.978 - 4029.250: 100.0000% ( 5) 00:10:36.278 00:10:36.278 Complete histogram 00:10:36.278 ================== 00:10:36.278 Range in us Cumulative Count 00:10:36.278 2.039 - 2.050: 4.0745% ( 551) 00:10:36.278 2.050 - 2.062: 10.7521% ( 903) 00:10:36.278 2.062 - 2.074: 12.3715% ( 219) 00:10:36.278 2.074 - 2.086: 40.5088% ( 3805) 00:10:36.278 2.086 - 2.098: 60.2529% ( 2670) 00:10:36.278 2.098 - 2.110: 63.0629% ( 380) 00:10:36.278 2.110 - 2.121: 67.0635% ( 541) 00:10:36.278 2.121 - 2.133: 68.3354% ( 172) 00:10:36.278 2.133 - 2.145: 70.6500% ( 313) 00:10:36.278 2.145 - 2.157: 83.1398% ( 1689) 00:10:36.278 2.157 - 2.169: 88.3310% ( 702) 00:10:36.278 2.169 - 2.181: 89.2184% ( 120) 00:10:36.278 2.181 - 2.193: 90.2389% ( 138) 00:10:36.278 2.193 - 2.204: 90.8156% ( 78) 00:10:36.278 2.204 - 2.216: 91.3703% ( 75) 00:10:36.278 2.216 - 2.228: 93.0785% ( 231) 00:10:36.278 2.228 - 2.240: 94.6979% ( 219) 00:10:36.278 2.240 - 2.252: 95.2969% ( 81) 00:10:36.278 2.252 - 2.264: 95.4818% ( 25) 00:10:36.278 2.264 - 2.276: 95.6001% ( 16) 00:10:36.278 2.276 - 2.287: 95.6814% ( 11) 00:10:36.278 2.287 - 2.299: 95.7924% ( 15) 00:10:36.278 2.299 - 2.311: 96.0881% ( 40) 00:10:36.278 2.311 - 2.323: 96.1991% ( 15) 00:10:36.278 2.323 - 2.335: 96.2730% ( 10) 00:10:36.278 2.335 - 2.347: 96.3174% ( 6) 00:10:36.278 2.347 - 2.359: 96.3839% ( 9) 00:10:36.278 2.359 - 2.370: 96.4949% ( 15) 00:10:36.278 2.370 - 2.382: 96.7463% ( 34) 00:10:36.278 2.382 - 2.394: 97.0421% ( 40) 00:10:36.278 2.394 - 2.406: 97.3527% ( 42) 00:10:36.278 2.406 - 2.418: 97.5375% ( 25) 00:10:36.278 2.418 - 2.430: 97.7298% ( 26) 00:10:36.278 2.430 - 2.441: 97.9590% ( 31) 00:10:36.278 2.441 - 2.453: 98.0478% ( 12) 00:10:36.278 2.453 - 2.465: 98.1735% ( 17) 00:10:36.278 2.465 - 2.477: 98.2622% ( 12) 00:10:36.278 2.477 - 2.489: 98.3288% ( 9) 00:10:36.278 2.489 - 2.501: 98.4027% ( 10) 00:10:36.278 2.501 - 2.513: 98.4175% ( 2) 00:10:36.278 2.513 - 2.524: 98.4249% ( 1) 00:10:36.278 2.524 - 2.536: 98.4693% ( 6) 00:10:36.278 2.536 - 2.548: 98.4841% ( 2) 00:10:36.278 2.548 - 2.560: 98.4989% ( 2) 00:10:36.278 2.572 - 2.584: 98.5136% ( 2) 00:10:36.278 2.596 - 2.607: 98.5210% ( 1) 00:10:36.278 2.607 - 2.619: 98.5284% ( 1) 00:10:36.278 2.631 - 2.643: 98.5432% ( 2) 00:10:36.278 2.655 - 2.667: 
98.5506% ( 1) 00:10:36.278 2.679 - 2.690: 98.5580% ( 1) 00:10:36.278 2.702 - 2.714: 98.5654% ( 1) 00:10:36.278 2.726 - 2.738: 98.5728% ( 1) 00:10:36.278 2.750 - 2.761: 98.5802% ( 1) 00:10:36.278 2.856 - 2.868: 98.5876% ( 1) 00:10:36.278 3.484 - 3.508: 98.5950% ( 1) 00:10:36.278 3.532 - 3.556: 98.6024% ( 1) 00:10:36.278 3.556 - 3.579: 98.6320% ( 4) 00:10:36.278 3.579 - 3.603: 98.6467% ( 2) 00:10:36.278 3.603 - 3.627: 98.6615% ( 2) 00:10:36.278 3.627 - 3.650: 98.6689% ( 1) 00:10:36.278 3.674 - 3.698: 98.6763% ( 1) 00:10:36.278 [2024-04-16 12:38:35.338421] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:36.536 3.698 - 3.721: 98.6911% ( 2) 00:10:36.536 3.721 - 3.745: 98.6985% ( 1) 00:10:36.536 3.840 - 3.864: 98.7059% ( 1) 00:10:36.536 3.887 - 3.911: 98.7207% ( 2) 00:10:36.536 3.911 - 3.935: 98.7281% ( 1) 00:10:36.536 3.959 - 3.982: 98.7355% ( 1) 00:10:36.536 3.982 - 4.006: 98.7429% ( 1) 00:10:36.536 4.077 - 4.101: 98.7577% ( 2) 00:10:36.536 4.148 - 4.172: 98.7651% ( 1) 00:10:36.536 4.290 - 4.314: 98.7725% ( 1) 00:10:36.536 4.480 - 4.504: 98.7799% ( 1) 00:10:36.536 5.855 - 5.879: 98.7946% ( 2) 00:10:36.536 6.021 - 6.044: 98.8020% ( 1) 00:10:36.536 6.116 - 6.163: 98.8168% ( 2) 00:10:36.536 6.258 - 6.305: 98.8242% ( 1) 00:10:36.536 6.400 - 6.447: 98.8316% ( 1) 00:10:36.536 6.495 - 6.542: 98.8464% ( 2) 00:10:36.536 6.590 - 6.637: 98.8538% ( 1) 00:10:36.536 6.684 - 6.732: 98.8612% ( 1) 00:10:36.536 6.732 - 6.779: 98.8686% ( 1) 00:10:36.536 6.827 - 6.874: 98.8834% ( 2) 00:10:36.536 6.921 - 6.969: 98.8908% ( 1) 00:10:36.536 7.111 - 7.159: 98.8982% ( 1) 00:10:36.536 7.253 - 7.301: 98.9056% ( 1) 00:10:36.536 7.443 - 7.490: 98.9130% ( 1) 00:10:36.536 7.633 - 7.680: 98.9278% ( 2) 00:10:36.536 7.775 - 7.822: 98.9351% ( 1) 00:10:36.536 8.391 - 8.439: 98.9425% ( 1) 00:10:36.536 8.533 - 8.581: 98.9499% ( 1) 00:10:36.536 9.861 - 9.908: 98.9573% ( 1) 00:10:36.536 11.994 - 12.041: 98.9647% ( 1) 00:10:36.536 12.326 - 12.421: 98.9721% ( 1) 00:10:36.536 12.610 - 12.705: 98.9795% ( 1) 00:10:36.536 15.360 - 15.455: 98.9869% ( 1) 00:10:36.536 15.644 - 15.739: 99.0017% ( 2) 00:10:36.536 15.739 - 15.834: 99.0239% ( 3) 00:10:36.536 15.929 - 16.024: 99.0535% ( 4) 00:10:36.536 16.024 - 16.119: 99.0756% ( 3) 00:10:36.536 16.119 - 16.213: 99.0978% ( 3) 00:10:36.536 16.213 - 16.308: 99.1200% ( 3) 00:10:36.536 16.308 - 16.403: 99.1422% ( 3) 00:10:36.536 16.403 - 16.498: 99.1570% ( 2) 00:10:36.536 16.498 - 16.593: 99.1792% ( 3) 00:10:36.536 16.593 - 16.687: 99.2014% ( 3) 00:10:36.536 16.687 - 16.782: 99.2531% ( 7) 00:10:36.536 16.782 - 16.877: 99.2975% ( 6) 00:10:36.536 16.877 - 16.972: 99.3271% ( 4) 00:10:36.536 16.972 - 17.067: 99.3493% ( 3) 00:10:36.536 17.161 - 17.256: 99.3567% ( 1) 00:10:36.536 17.256 - 17.351: 99.3640% ( 1) 00:10:36.536 17.446 - 17.541: 99.3714% ( 1) 00:10:36.536 17.730 - 17.825: 99.3862% ( 2) 00:10:36.536 18.015 - 18.110: 99.3936% ( 1) 00:10:36.536 18.299 - 18.394: 99.4010% ( 1) 00:10:36.536 18.489 - 18.584: 99.4084% ( 1) 00:10:36.536 18.584 - 18.679: 99.4158% ( 1) 00:10:36.536 20.385 - 20.480: 99.4232% ( 1) 00:10:36.536 26.169 - 26.359: 99.4306% ( 1) 00:10:36.536 27.117 - 27.307: 99.4380% ( 1) 00:10:36.536 3980.705 - 4004.978: 99.7634% ( 44) 00:10:36.536 4004.978 - 4029.250: 100.0000% ( 32) 00:10:36.536 00:10:36.536 12:38:35 -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:10:36.536 12:38:35 -- target/nvmf_vfio_user.sh@22 -- # local
traddr=/var/run/vfio-user/domain/vfio-user2/2 00:10:36.536 12:38:35 -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:10:36.536 12:38:35 -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:10:36.536 12:38:35 -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:10:36.794 [ 00:10:36.794 { 00:10:36.794 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:36.794 "subtype": "Discovery", 00:10:36.794 "listen_addresses": [], 00:10:36.794 "allow_any_host": true, 00:10:36.794 "hosts": [] 00:10:36.794 }, 00:10:36.794 { 00:10:36.794 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:10:36.794 "subtype": "NVMe", 00:10:36.794 "listen_addresses": [ 00:10:36.794 { 00:10:36.794 "transport": "VFIOUSER", 00:10:36.794 "trtype": "VFIOUSER", 00:10:36.794 "adrfam": "IPv4", 00:10:36.794 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:10:36.794 "trsvcid": "0" 00:10:36.794 } 00:10:36.794 ], 00:10:36.794 "allow_any_host": true, 00:10:36.794 "hosts": [], 00:10:36.794 "serial_number": "SPDK1", 00:10:36.794 "model_number": "SPDK bdev Controller", 00:10:36.794 "max_namespaces": 32, 00:10:36.794 "min_cntlid": 1, 00:10:36.794 "max_cntlid": 65519, 00:10:36.794 "namespaces": [ 00:10:36.794 { 00:10:36.794 "nsid": 1, 00:10:36.794 "bdev_name": "Malloc1", 00:10:36.794 "name": "Malloc1", 00:10:36.794 "nguid": "546F23316BF1433B8D9C7333C741F1C9", 00:10:36.794 "uuid": "546f2331-6bf1-433b-8d9c-7333c741f1c9" 00:10:36.794 }, 00:10:36.794 { 00:10:36.794 "nsid": 2, 00:10:36.794 "bdev_name": "Malloc3", 00:10:36.794 "name": "Malloc3", 00:10:36.794 "nguid": "BB8BD17FDB65421CAEDF7A93D55310D0", 00:10:36.794 "uuid": "bb8bd17f-db65-421c-aedf-7a93d55310d0" 00:10:36.794 } 00:10:36.794 ] 00:10:36.794 }, 00:10:36.794 { 00:10:36.794 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:10:36.794 "subtype": "NVMe", 00:10:36.794 "listen_addresses": [ 00:10:36.794 { 00:10:36.794 "transport": "VFIOUSER", 00:10:36.794 "trtype": "VFIOUSER", 00:10:36.794 "adrfam": "IPv4", 00:10:36.794 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:10:36.794 "trsvcid": "0" 00:10:36.794 } 00:10:36.794 ], 00:10:36.794 "allow_any_host": true, 00:10:36.794 "hosts": [], 00:10:36.794 "serial_number": "SPDK2", 00:10:36.794 "model_number": "SPDK bdev Controller", 00:10:36.794 "max_namespaces": 32, 00:10:36.794 "min_cntlid": 1, 00:10:36.794 "max_cntlid": 65519, 00:10:36.794 "namespaces": [ 00:10:36.794 { 00:10:36.794 "nsid": 1, 00:10:36.794 "bdev_name": "Malloc2", 00:10:36.794 "name": "Malloc2", 00:10:36.794 "nguid": "DFE3FBFD4A7D42F99740B2971F134720", 00:10:36.794 "uuid": "dfe3fbfd-4a7d-42f9-9740-b2971f134720" 00:10:36.794 } 00:10:36.794 ] 00:10:36.794 } 00:10:36.794 ] 00:10:36.794 12:38:35 -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:10:36.794 12:38:35 -- target/nvmf_vfio_user.sh@34 -- # aerpid=1130576 00:10:36.794 12:38:35 -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:10:36.794 12:38:35 -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:10:36.794 12:38:35 -- common/autotest_common.sh@1251 -- # local i=0 00:10:36.794 12:38:35 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:10:36.794 12:38:35 -- common/autotest_common.sh@1258 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:10:36.794 12:38:35 -- common/autotest_common.sh@1262 -- # return 0 00:10:36.794 12:38:35 -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:10:36.794 12:38:35 -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:10:36.794 EAL: No free 2048 kB hugepages reported on node 1 00:10:36.794 [2024-04-16 12:38:35.833116] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:37.052 Malloc4 00:10:37.052 12:38:35 -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:10:37.309 [2024-04-16 12:38:36.186851] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:37.309 12:38:36 -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:10:37.309 Asynchronous Event Request test 00:10:37.309 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:10:37.309 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:10:37.309 Registering asynchronous event callbacks... 00:10:37.309 Starting namespace attribute notice tests for all controllers... 00:10:37.309 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:10:37.309 aer_cb - Changed Namespace 00:10:37.309 Cleaning up... 00:10:37.566 [ 00:10:37.566 { 00:10:37.566 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:37.566 "subtype": "Discovery", 00:10:37.566 "listen_addresses": [], 00:10:37.566 "allow_any_host": true, 00:10:37.566 "hosts": [] 00:10:37.566 }, 00:10:37.566 { 00:10:37.566 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:10:37.566 "subtype": "NVMe", 00:10:37.566 "listen_addresses": [ 00:10:37.566 { 00:10:37.566 "transport": "VFIOUSER", 00:10:37.566 "trtype": "VFIOUSER", 00:10:37.566 "adrfam": "IPv4", 00:10:37.566 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:10:37.566 "trsvcid": "0" 00:10:37.566 } 00:10:37.566 ], 00:10:37.566 "allow_any_host": true, 00:10:37.566 "hosts": [], 00:10:37.566 "serial_number": "SPDK1", 00:10:37.566 "model_number": "SPDK bdev Controller", 00:10:37.566 "max_namespaces": 32, 00:10:37.566 "min_cntlid": 1, 00:10:37.566 "max_cntlid": 65519, 00:10:37.566 "namespaces": [ 00:10:37.566 { 00:10:37.566 "nsid": 1, 00:10:37.566 "bdev_name": "Malloc1", 00:10:37.566 "name": "Malloc1", 00:10:37.566 "nguid": "546F23316BF1433B8D9C7333C741F1C9", 00:10:37.566 "uuid": "546f2331-6bf1-433b-8d9c-7333c741f1c9" 00:10:37.566 }, 00:10:37.566 { 00:10:37.566 "nsid": 2, 00:10:37.566 "bdev_name": "Malloc3", 00:10:37.566 "name": "Malloc3", 00:10:37.566 "nguid": "BB8BD17FDB65421CAEDF7A93D55310D0", 00:10:37.566 "uuid": "bb8bd17f-db65-421c-aedf-7a93d55310d0" 00:10:37.566 } 00:10:37.566 ] 00:10:37.566 }, 00:10:37.566 { 00:10:37.566 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:10:37.566 "subtype": "NVMe", 00:10:37.566 "listen_addresses": [ 00:10:37.566 { 00:10:37.566 "transport": "VFIOUSER", 00:10:37.566 "trtype": "VFIOUSER", 00:10:37.566 "adrfam": "IPv4", 00:10:37.566 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:10:37.566 "trsvcid": "0" 00:10:37.566 } 00:10:37.566 ], 00:10:37.566 "allow_any_host": true, 00:10:37.566 "hosts": [], 00:10:37.566 "serial_number": "SPDK2", 00:10:37.566 "model_number": "SPDK bdev Controller", 00:10:37.566 "max_namespaces": 32, 00:10:37.566 "min_cntlid": 1, 
00:10:37.566 "max_cntlid": 65519, 00:10:37.566 "namespaces": [ 00:10:37.566 { 00:10:37.566 "nsid": 1, 00:10:37.566 "bdev_name": "Malloc2", 00:10:37.566 "name": "Malloc2", 00:10:37.566 "nguid": "DFE3FBFD4A7D42F99740B2971F134720", 00:10:37.566 "uuid": "dfe3fbfd-4a7d-42f9-9740-b2971f134720" 00:10:37.566 }, 00:10:37.566 { 00:10:37.566 "nsid": 2, 00:10:37.566 "bdev_name": "Malloc4", 00:10:37.566 "name": "Malloc4", 00:10:37.566 "nguid": "D9D88EF83C0C4E9AAFC73C5216C72771", 00:10:37.566 "uuid": "d9d88ef8-3c0c-4e9a-afc7-3c5216c72771" 00:10:37.566 } 00:10:37.566 ] 00:10:37.566 } 00:10:37.566 ] 00:10:37.566 12:38:36 -- target/nvmf_vfio_user.sh@44 -- # wait 1130576 00:10:37.566 12:38:36 -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:10:37.566 12:38:36 -- target/nvmf_vfio_user.sh@95 -- # killprocess 1124830 00:10:37.566 12:38:36 -- common/autotest_common.sh@936 -- # '[' -z 1124830 ']' 00:10:37.566 12:38:36 -- common/autotest_common.sh@940 -- # kill -0 1124830 00:10:37.566 12:38:36 -- common/autotest_common.sh@941 -- # uname 00:10:37.566 12:38:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:37.566 12:38:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1124830 00:10:37.566 12:38:36 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:37.566 12:38:36 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:37.566 12:38:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1124830' 00:10:37.566 killing process with pid 1124830 00:10:37.566 12:38:36 -- common/autotest_common.sh@955 -- # kill 1124830 00:10:37.566 [2024-04-16 12:38:36.469037] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:10:37.566 12:38:36 -- common/autotest_common.sh@960 -- # wait 1124830 00:10:37.825 12:38:36 -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:10:37.825 12:38:36 -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:37.825 12:38:36 -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:10:37.825 12:38:36 -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:10:37.825 12:38:36 -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:10:37.825 12:38:36 -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1130716 00:10:37.825 12:38:36 -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:10:37.825 12:38:36 -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1130716' 00:10:37.825 Process pid: 1130716 00:10:37.825 12:38:36 -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:10:37.825 12:38:36 -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1130716 00:10:37.825 12:38:36 -- common/autotest_common.sh@817 -- # '[' -z 1130716 ']' 00:10:37.825 12:38:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:37.825 12:38:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:37.825 12:38:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:37.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
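To recap the aer_vfio_user step above: a fourth malloc bdev was created, hot-added as namespace 2 of cnode2, and the aer utility observed the namespace-attribute-changed notice (aer_cb for log page 4) before nvmf_get_subsystems was re-queried, now listing Malloc2 and Malloc4 under cnode2. The equivalent rpc.py sequence, commands copied from this log (the jq filter is an editorial addition for inspecting the result):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $RPC bdev_malloc_create 64 512 --name Malloc4                        # 64 MB bdev, 512-byte blocks
  $RPC nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2   # hot-add as nsid 2; fires the AER
  # list the namespaces now attached to cnode2 (expect Malloc2 and Malloc4)
  $RPC nvmf_get_subsystems | jq -r '.[] | select(.nqn=="nqn.2019-07.io.spdk:cnode2") | .namespaces[].name'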
00:10:37.825 12:38:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:37.825 12:38:36 -- common/autotest_common.sh@10 -- # set +x 00:10:38.084 [2024-04-16 12:38:36.908327] thread.c:2927:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:10:38.084 [2024-04-16 12:38:36.909348] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 00:10:38.084 [2024-04-16 12:38:36.909402] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:38.084 EAL: No free 2048 kB hugepages reported on node 1 00:10:38.084 [2024-04-16 12:38:36.981728] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:38.084 [2024-04-16 12:38:37.088185] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:38.084 [2024-04-16 12:38:37.088241] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:38.084 [2024-04-16 12:38:37.088271] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:38.084 [2024-04-16 12:38:37.088294] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:38.084 [2024-04-16 12:38:37.088305] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:38.084 [2024-04-16 12:38:37.088443] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:38.084 [2024-04-16 12:38:37.088508] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:38.084 [2024-04-16 12:38:37.088588] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:38.084 [2024-04-16 12:38:37.088592] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:38.342 [2024-04-16 12:38:37.191247] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_0) to intr mode from intr mode. 00:10:38.342 [2024-04-16 12:38:37.191535] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_1) to intr mode from intr mode. 00:10:38.342 [2024-04-16 12:38:37.191789] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_2) to intr mode from intr mode. 00:10:38.342 [2024-04-16 12:38:37.192479] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:10:38.342 [2024-04-16 12:38:37.192606] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_3) to intr mode from intr mode. 
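The thread.c notices above confirm the restarted target is running with --interrupt-mode and that each poll group was placed in interrupt mode; the rpc.py calls below then rebuild both vfio-user devices on a VFIOUSER transport created with -M -I (flags forwarded verbatim by this test; their semantics are not expanded here). The per-device bring-up follows one pattern; a sketch with commands copied from this log (the loop is editorial shorthand for the harness's seq 1 2):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $RPC nvmf_create_transport -t VFIOUSER -M -I
  for i in 1 2; do
    mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
    $RPC bdev_malloc_create 64 512 -b Malloc$i                            # backing bdev
    $RPC nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i   # -a: allow any host, -s: serial
    $RPC nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
    $RPC nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i -t VFIOUSER \
         -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0                 # -a: socket dir, -s: trsvcid
  done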
00:10:38.342 12:38:37 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:38.342 12:38:37 -- common/autotest_common.sh@850 -- # return 0 00:10:38.342 12:38:37 -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:10:39.313 12:38:38 -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:10:39.571 12:38:38 -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:10:39.571 12:38:38 -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:10:39.571 12:38:38 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:10:39.571 12:38:38 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:10:39.571 12:38:38 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:10:39.830 Malloc1 00:10:39.830 12:38:38 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:10:40.089 12:38:38 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:10:40.347 12:38:39 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:10:40.604 12:38:39 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:10:40.604 12:38:39 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:10:40.604 12:38:39 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:10:40.861 Malloc2 00:10:40.861 12:38:39 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:10:41.119 12:38:39 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:10:41.376 12:38:40 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:10:41.634 12:38:40 -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:10:41.634 12:38:40 -- target/nvmf_vfio_user.sh@95 -- # killprocess 1130716 00:10:41.634 12:38:40 -- common/autotest_common.sh@936 -- # '[' -z 1130716 ']' 00:10:41.634 12:38:40 -- common/autotest_common.sh@940 -- # kill -0 1130716 00:10:41.634 12:38:40 -- common/autotest_common.sh@941 -- # uname 00:10:41.634 12:38:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:41.634 12:38:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1130716 00:10:41.634 12:38:40 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:41.634 12:38:40 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:41.634 12:38:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1130716' 00:10:41.634 killing process with pid 1130716 00:10:41.634 12:38:40 -- common/autotest_common.sh@955 -- # kill 1130716 00:10:41.634 12:38:40 -- common/autotest_common.sh@960 -- # wait 1130716 00:10:41.892 12:38:40 -- target/nvmf_vfio_user.sh@97 -- # rm -rf 
/var/run/vfio-user 00:10:41.892 12:38:40 -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:41.892 00:10:41.892 real 0m54.259s 00:10:41.892 user 3m34.260s 00:10:41.892 sys 0m4.483s 00:10:41.892 12:38:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:41.892 12:38:40 -- common/autotest_common.sh@10 -- # set +x 00:10:41.892 ************************************ 00:10:41.892 END TEST nvmf_vfio_user 00:10:41.892 ************************************ 00:10:41.892 12:38:40 -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:10:41.892 12:38:40 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:41.892 12:38:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:41.892 12:38:40 -- common/autotest_common.sh@10 -- # set +x 00:10:42.150 ************************************ 00:10:42.150 START TEST nvmf_vfio_user_nvme_compliance 00:10:42.150 ************************************ 00:10:42.150 12:38:40 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:10:42.150 * Looking for test storage... 00:10:42.150 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:10:42.150 12:38:41 -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:42.150 12:38:41 -- nvmf/common.sh@7 -- # uname -s 00:10:42.150 12:38:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:42.150 12:38:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:42.150 12:38:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:42.150 12:38:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:42.150 12:38:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:42.150 12:38:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:42.150 12:38:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:42.150 12:38:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:42.150 12:38:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:42.150 12:38:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:42.150 12:38:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:10:42.150 12:38:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:10:42.150 12:38:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:42.150 12:38:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:42.150 12:38:41 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:42.150 12:38:41 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:42.150 12:38:41 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:42.150 12:38:41 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:42.150 12:38:41 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:42.150 12:38:41 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:42.150 12:38:41 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.150 12:38:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.150 12:38:41 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.150 12:38:41 -- paths/export.sh@5 -- # export PATH 00:10:42.150 12:38:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.150 12:38:41 -- nvmf/common.sh@47 -- # : 0 00:10:42.150 12:38:41 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:42.150 12:38:41 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:42.150 12:38:41 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:42.150 12:38:41 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:42.150 12:38:41 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:42.150 12:38:41 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:42.150 12:38:41 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:42.150 12:38:41 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:42.150 12:38:41 -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:42.150 12:38:41 -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:42.150 12:38:41 -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:10:42.150 12:38:41 -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:10:42.150 12:38:41 -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:10:42.150 12:38:41 -- compliance/compliance.sh@20 -- # nvmfpid=1131324 00:10:42.150 12:38:41 -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 
-m 0x7 00:10:42.150 12:38:41 -- compliance/compliance.sh@21 -- # echo 'Process pid: 1131324' 00:10:42.150 Process pid: 1131324 00:10:42.150 12:38:41 -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:10:42.150 12:38:41 -- compliance/compliance.sh@24 -- # waitforlisten 1131324 00:10:42.150 12:38:41 -- common/autotest_common.sh@817 -- # '[' -z 1131324 ']' 00:10:42.150 12:38:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:42.150 12:38:41 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:42.150 12:38:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:42.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:42.151 12:38:41 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:42.151 12:38:41 -- common/autotest_common.sh@10 -- # set +x 00:10:42.151 [2024-04-16 12:38:41.074114] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 00:10:42.151 [2024-04-16 12:38:41.074207] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:42.151 EAL: No free 2048 kB hugepages reported on node 1 00:10:42.151 [2024-04-16 12:38:41.146325] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:42.409 [2024-04-16 12:38:41.253629] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:42.409 [2024-04-16 12:38:41.253686] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:42.409 [2024-04-16 12:38:41.253716] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:42.409 [2024-04-16 12:38:41.253729] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:42.409 [2024-04-16 12:38:41.253740] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
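A note on the core settings above: this nvmf_tgt instance was launched with -m 0x7, and mask 0x7 selects cores 0-2, consistent with "Total cores available: 3" and the three reactor notices below. A quick editorial sketch for decoding such a mask in the shell (not part of the harness):

  mask=0x7                                  # core mask as passed via -m
  for core in $(seq 0 63); do
    (( (mask >> core) & 1 )) && printf 'core %d\n' "$core"
  done                                      # 0x7 -> core 0, core 1, core 2

The rpc_cmd calls below then assemble the compliance target: a 64 MB malloc0 namespace on nqn.2021-09.io.spdk:cnode0 (created with -m 32, its namespace cap) with a VFIOUSER listener on /var/run/vfio-user, after which nvme_compliance drives the admin-command test matrix whose enable/disable notices follow.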
00:10:42.409 [2024-04-16 12:38:41.256585] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:42.409 [2024-04-16 12:38:41.256654] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:42.409 [2024-04-16 12:38:41.256658] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:42.409 12:38:41 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:42.409 12:38:41 -- common/autotest_common.sh@850 -- # return 0 00:10:42.409 12:38:41 -- compliance/compliance.sh@26 -- # sleep 1 00:10:43.342 12:38:42 -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:10:43.342 12:38:42 -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:10:43.342 12:38:42 -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:10:43.342 12:38:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:43.342 12:38:42 -- common/autotest_common.sh@10 -- # set +x 00:10:43.342 12:38:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:43.342 12:38:42 -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:10:43.342 12:38:42 -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:10:43.342 12:38:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:43.342 12:38:42 -- common/autotest_common.sh@10 -- # set +x 00:10:43.600 malloc0 00:10:43.600 12:38:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:43.600 12:38:42 -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:10:43.600 12:38:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:43.600 12:38:42 -- common/autotest_common.sh@10 -- # set +x 00:10:43.600 12:38:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:43.600 12:38:42 -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:10:43.600 12:38:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:43.600 12:38:42 -- common/autotest_common.sh@10 -- # set +x 00:10:43.600 12:38:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:43.600 12:38:42 -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:10:43.600 12:38:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:43.600 12:38:42 -- common/autotest_common.sh@10 -- # set +x 00:10:43.600 12:38:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:43.600 12:38:42 -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:10:43.600 EAL: No free 2048 kB hugepages reported on node 1 00:10:43.600 00:10:43.600 00:10:43.600 CUnit - A unit testing framework for C - Version 2.1-3 00:10:43.600 http://cunit.sourceforge.net/ 00:10:43.600 00:10:43.600 00:10:43.600 Suite: nvme_compliance 00:10:43.600 Test: admin_identify_ctrlr_verify_dptr ...[2024-04-16 12:38:42.614350] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:43.600 [2024-04-16 12:38:42.615809] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:10:43.600 [2024-04-16 12:38:42.615835] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:10:43.600 [2024-04-16 12:38:42.615848] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:10:43.600 
[2024-04-16 12:38:42.617374] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:43.600 passed 00:10:43.857 Test: admin_identify_ctrlr_verify_fused ...[2024-04-16 12:38:42.701988] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:43.857 [2024-04-16 12:38:42.707023] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:43.857 passed 00:10:43.857 Test: admin_identify_ns ...[2024-04-16 12:38:42.792124] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:43.857 [2024-04-16 12:38:42.852578] ctrlr.c:2656:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:10:43.857 [2024-04-16 12:38:42.861582] ctrlr.c:2656:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:10:43.857 [2024-04-16 12:38:42.882698] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:43.857 passed 00:10:44.115 Test: admin_get_features_mandatory_features ...[2024-04-16 12:38:42.965678] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:44.115 [2024-04-16 12:38:42.968698] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:44.115 passed 00:10:44.115 Test: admin_get_features_optional_features ...[2024-04-16 12:38:43.053250] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:44.115 [2024-04-16 12:38:43.056270] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:44.115 passed 00:10:44.115 Test: admin_set_features_number_of_queues ...[2024-04-16 12:38:43.141197] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:44.373 [2024-04-16 12:38:43.246704] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:44.373 passed 00:10:44.373 Test: admin_get_log_page_mandatory_logs ...[2024-04-16 12:38:43.327353] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:44.373 [2024-04-16 12:38:43.332394] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:44.373 passed 00:10:44.373 Test: admin_get_log_page_with_lpo ...[2024-04-16 12:38:43.415178] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:44.631 [2024-04-16 12:38:43.483592] ctrlr.c:2604:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:10:44.631 [2024-04-16 12:38:43.496669] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:44.631 passed 00:10:44.631 Test: fabric_property_get ...[2024-04-16 12:38:43.580427] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:44.631 [2024-04-16 12:38:43.581722] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:10:44.631 [2024-04-16 12:38:43.583450] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:44.631 passed 00:10:44.631 Test: admin_delete_io_sq_use_admin_qid ...[2024-04-16 12:38:43.668043] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:44.631 [2024-04-16 12:38:43.669332] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:10:44.631 [2024-04-16 12:38:43.671060] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 
00:10:44.889 passed 00:10:44.889 Test: admin_delete_io_sq_delete_sq_twice ...[2024-04-16 12:38:43.753270] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:44.889 [2024-04-16 12:38:43.836590] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:10:44.889 [2024-04-16 12:38:43.852581] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:10:44.889 [2024-04-16 12:38:43.857672] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:44.889 passed 00:10:44.889 Test: admin_delete_io_cq_use_admin_qid ...[2024-04-16 12:38:43.941317] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:44.889 [2024-04-16 12:38:43.942648] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:10:44.889 [2024-04-16 12:38:43.944335] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:45.147 passed 00:10:45.147 Test: admin_delete_io_cq_delete_cq_first ...[2024-04-16 12:38:44.030179] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:45.147 [2024-04-16 12:38:44.105570] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:10:45.147 [2024-04-16 12:38:44.129588] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:10:45.147 [2024-04-16 12:38:44.134720] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:45.147 passed 00:10:45.405 Test: admin_create_io_cq_verify_iv_pc ...[2024-04-16 12:38:44.219519] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:45.405 [2024-04-16 12:38:44.220841] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:10:45.405 [2024-04-16 12:38:44.220899] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:10:45.405 [2024-04-16 12:38:44.222578] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:45.405 passed 00:10:45.405 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-04-16 12:38:44.305719] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:45.405 [2024-04-16 12:38:44.397573] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:10:45.405 [2024-04-16 12:38:44.405572] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:10:45.405 [2024-04-16 12:38:44.413576] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:10:45.405 [2024-04-16 12:38:44.421576] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:10:45.405 [2024-04-16 12:38:44.450691] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:45.662 passed 00:10:45.662 Test: admin_create_io_sq_verify_pc ...[2024-04-16 12:38:44.537211] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:45.662 [2024-04-16 12:38:44.552602] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:10:45.662 [2024-04-16 12:38:44.570612] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:45.662 passed 00:10:45.662 Test: admin_create_io_qp_max_qps ...[2024-04-16 12:38:44.653192] 
vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:47.035 [2024-04-16 12:38:45.742596] nvme_ctrlr.c:5329:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:10:47.292 [2024-04-16 12:38:46.131107] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:47.292 passed 00:10:47.292 Test: admin_create_io_sq_shared_cq ...[2024-04-16 12:38:46.215148] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:47.292 [2024-04-16 12:38:46.347572] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:10:47.550 [2024-04-16 12:38:46.384649] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:47.550 passed 00:10:47.550 00:10:47.550 Run Summary: Type Total Ran Passed Failed Inactive 00:10:47.550 suites 1 1 n/a 0 0 00:10:47.550 tests 18 18 18 0 0 00:10:47.550 asserts 360 360 360 0 n/a 00:10:47.550 00:10:47.550 Elapsed time = 1.561 seconds 00:10:47.550 12:38:46 -- compliance/compliance.sh@42 -- # killprocess 1131324 00:10:47.550 12:38:46 -- common/autotest_common.sh@936 -- # '[' -z 1131324 ']' 00:10:47.550 12:38:46 -- common/autotest_common.sh@940 -- # kill -0 1131324 00:10:47.550 12:38:46 -- common/autotest_common.sh@941 -- # uname 00:10:47.550 12:38:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:47.550 12:38:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1131324 00:10:47.550 12:38:46 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:47.550 12:38:46 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:47.550 12:38:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1131324' 00:10:47.550 killing process with pid 1131324 00:10:47.550 12:38:46 -- common/autotest_common.sh@955 -- # kill 1131324 00:10:47.550 12:38:46 -- common/autotest_common.sh@960 -- # wait 1131324 00:10:47.809 12:38:46 -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:10:47.809 12:38:46 -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:10:47.809 00:10:47.809 real 0m5.787s 00:10:47.809 user 0m16.171s 00:10:47.809 sys 0m0.566s 00:10:47.809 12:38:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:47.809 12:38:46 -- common/autotest_common.sh@10 -- # set +x 00:10:47.809 ************************************ 00:10:47.809 END TEST nvmf_vfio_user_nvme_compliance 00:10:47.809 ************************************ 00:10:47.809 12:38:46 -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:10:47.809 12:38:46 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:47.809 12:38:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:47.809 12:38:46 -- common/autotest_common.sh@10 -- # set +x 00:10:47.809 ************************************ 00:10:47.809 START TEST nvmf_vfio_user_fuzz 00:10:47.809 ************************************ 00:10:47.809 12:38:46 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:10:48.069 * Looking for test storage... 
00:10:48.069 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:48.069 12:38:46 -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:48.069 12:38:46 -- nvmf/common.sh@7 -- # uname -s 00:10:48.069 12:38:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:48.069 12:38:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:48.069 12:38:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:48.069 12:38:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:48.070 12:38:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:48.070 12:38:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:48.070 12:38:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:48.070 12:38:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:48.070 12:38:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:48.070 12:38:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:48.070 12:38:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:10:48.070 12:38:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:10:48.070 12:38:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:48.070 12:38:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:48.070 12:38:46 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:48.070 12:38:46 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:48.070 12:38:46 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:48.070 12:38:46 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:48.070 12:38:46 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:48.070 12:38:46 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:48.070 12:38:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.070 12:38:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.070 12:38:46 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.070 12:38:46 -- paths/export.sh@5 -- # export PATH 00:10:48.070 12:38:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.070 12:38:46 -- nvmf/common.sh@47 -- # : 0 00:10:48.070 12:38:46 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:48.070 12:38:46 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:48.070 12:38:46 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:48.070 12:38:46 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:48.070 12:38:46 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:48.070 12:38:46 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:48.070 12:38:46 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:48.070 12:38:46 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:48.070 12:38:46 -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:10:48.070 12:38:46 -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:48.070 12:38:46 -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:10:48.070 12:38:46 -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:10:48.070 12:38:46 -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:10:48.070 12:38:46 -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:10:48.070 12:38:46 -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:10:48.070 12:38:46 -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=1132055 00:10:48.070 12:38:46 -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:10:48.070 12:38:46 -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 1132055' 00:10:48.070 Process pid: 1132055 00:10:48.070 12:38:46 -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:10:48.070 12:38:46 -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 1132055 00:10:48.070 12:38:46 -- common/autotest_common.sh@817 -- # '[' -z 1132055 ']' 00:10:48.070 12:38:46 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:48.070 12:38:46 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:48.070 12:38:46 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:48.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
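Everything the fuzz harness does from here on goes over SPDK's JSON-RPC socket; the rpc_cmd calls in the trace below are the test suite's wrapper around scripts/rpc.py. A by-hand sketch of the same target bring-up and fuzz run, assuming this workspace's paths and the default /var/tmp/spdk.sock (the harness itself waits for the socket via waitforlisten before issuing RPCs):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  rpc() { "$SPDK/scripts/rpc.py" "$@"; }              # same commands rpc_cmd issues below

  "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1 &  # target on core mask 0x1, full tracing
  # ... wait for /var/tmp/spdk.sock to appear ...
  rpc nvmf_create_transport -t VFIOUSER               # register the vfio-user transport
  mkdir -p /var/run/vfio-user
  rpc bdev_malloc_create 64 512 -b malloc0            # 64 MiB bdev, 512 B blocks
  rpc nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
  rpc nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
  rpc nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 \
      -t VFIOUSER -a /var/run/vfio-user -s 0
  "$SPDK/test/app/fuzz/nvme_fuzz/nvme_fuzz" -m 0x2 -t 30 -S 123456 \
      -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a

Note that for VFIOUSER the listener address is a directory rather than an IP: the target creates its vfio-user socket under /var/run/vfio-user, and the fuzzer connects through that path as traddr.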
00:10:48.070 12:38:46 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:48.070 12:38:46 -- common/autotest_common.sh@10 -- # set +x 00:10:49.034 12:38:47 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:49.034 12:38:47 -- common/autotest_common.sh@850 -- # return 0 00:10:49.034 12:38:47 -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:10:49.967 12:38:48 -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:10:49.967 12:38:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:49.967 12:38:48 -- common/autotest_common.sh@10 -- # set +x 00:10:49.967 12:38:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:49.967 12:38:48 -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:10:49.967 12:38:48 -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:10:49.967 12:38:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:49.967 12:38:48 -- common/autotest_common.sh@10 -- # set +x 00:10:49.967 malloc0 00:10:49.967 12:38:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:49.967 12:38:49 -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:10:49.967 12:38:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:49.967 12:38:49 -- common/autotest_common.sh@10 -- # set +x 00:10:49.967 12:38:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:49.967 12:38:49 -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:10:49.967 12:38:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:49.967 12:38:49 -- common/autotest_common.sh@10 -- # set +x 00:10:49.967 12:38:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:49.967 12:38:49 -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:10:49.967 12:38:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:49.967 12:38:49 -- common/autotest_common.sh@10 -- # set +x 00:10:49.967 12:38:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:49.967 12:38:49 -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:10:49.967 12:38:49 -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:11:22.054 Fuzzing completed. 
Shutting down the fuzz application 00:11:22.054 00:11:22.054 Dumping successful admin opcodes: 00:11:22.054 8, 9, 10, 24, 00:11:22.054 Dumping successful io opcodes: 00:11:22.054 0, 00:11:22.054 NS: 0x200003a1ef00 I/O qp, Total commands completed: 569535, total successful commands: 2191, random_seed: 1477790720 00:11:22.054 NS: 0x200003a1ef00 admin qp, Total commands completed: 83221, total successful commands: 665, random_seed: 3944301056 00:11:22.054 12:39:19 -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:11:22.054 12:39:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:22.054 12:39:19 -- common/autotest_common.sh@10 -- # set +x 00:11:22.054 12:39:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:22.054 12:39:19 -- target/vfio_user_fuzz.sh@46 -- # killprocess 1132055 00:11:22.054 12:39:19 -- common/autotest_common.sh@936 -- # '[' -z 1132055 ']' 00:11:22.054 12:39:19 -- common/autotest_common.sh@940 -- # kill -0 1132055 00:11:22.054 12:39:19 -- common/autotest_common.sh@941 -- # uname 00:11:22.054 12:39:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:22.054 12:39:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1132055 00:11:22.054 12:39:19 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:22.054 12:39:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:22.054 12:39:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1132055' 00:11:22.054 killing process with pid 1132055 00:11:22.054 12:39:19 -- common/autotest_common.sh@955 -- # kill 1132055 00:11:22.054 12:39:19 -- common/autotest_common.sh@960 -- # wait 1132055 00:11:22.054 12:39:19 -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:11:22.054 12:39:19 -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:11:22.054 00:11:22.054 real 0m33.046s 00:11:22.054 user 0m32.096s 00:11:22.054 sys 0m28.772s 00:11:22.054 12:39:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:22.054 12:39:19 -- common/autotest_common.sh@10 -- # set +x 00:11:22.054 ************************************ 00:11:22.054 END TEST nvmf_vfio_user_fuzz 00:11:22.054 ************************************ 00:11:22.054 12:39:19 -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:11:22.054 12:39:19 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:22.054 12:39:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:22.054 12:39:19 -- common/autotest_common.sh@10 -- # set +x 00:11:22.054 ************************************ 00:11:22.054 START TEST nvmf_host_management 00:11:22.054 ************************************ 00:11:22.054 12:39:20 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:11:22.054 * Looking for test storage... 
00:11:22.054 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:22.054 12:39:20 -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:22.054 12:39:20 -- nvmf/common.sh@7 -- # uname -s 00:11:22.054 12:39:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:22.054 12:39:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:22.054 12:39:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:22.054 12:39:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:22.054 12:39:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:22.054 12:39:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:22.054 12:39:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:22.054 12:39:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:22.054 12:39:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:22.054 12:39:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:22.054 12:39:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:11:22.054 12:39:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:11:22.054 12:39:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:22.054 12:39:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:22.054 12:39:20 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:22.054 12:39:20 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:22.054 12:39:20 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:22.054 12:39:20 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:22.054 12:39:20 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:22.054 12:39:20 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:22.054 12:39:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.055 12:39:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.055 12:39:20 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.055 12:39:20 -- paths/export.sh@5 -- # export PATH 00:11:22.055 12:39:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.055 12:39:20 -- nvmf/common.sh@47 -- # : 0 00:11:22.055 12:39:20 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:22.055 12:39:20 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:22.055 12:39:20 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:22.055 12:39:20 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:22.055 12:39:20 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:22.055 12:39:20 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:22.055 12:39:20 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:22.055 12:39:20 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:22.055 12:39:20 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:22.055 12:39:20 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:22.055 12:39:20 -- target/host_management.sh@105 -- # nvmftestinit 00:11:22.055 12:39:20 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:11:22.055 12:39:20 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:22.055 12:39:20 -- nvmf/common.sh@437 -- # prepare_net_devs 00:11:22.055 12:39:20 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:11:22.055 12:39:20 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:11:22.055 12:39:20 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:22.055 12:39:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:22.055 12:39:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:22.055 12:39:20 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:11:22.055 12:39:20 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:11:22.055 12:39:20 -- nvmf/common.sh@285 -- # xtrace_disable 00:11:22.055 12:39:20 -- common/autotest_common.sh@10 -- # set +x 00:11:23.956 12:39:22 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:11:23.956 12:39:22 -- nvmf/common.sh@291 -- # pci_devs=() 00:11:23.956 12:39:22 -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:23.956 12:39:22 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:23.956 12:39:22 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:23.956 12:39:22 -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:23.956 12:39:22 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:23.956 12:39:22 -- nvmf/common.sh@295 -- # net_devs=() 00:11:23.956 12:39:22 -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:23.956 
12:39:22 -- nvmf/common.sh@296 -- # e810=() 00:11:23.956 12:39:22 -- nvmf/common.sh@296 -- # local -ga e810 00:11:23.956 12:39:22 -- nvmf/common.sh@297 -- # x722=() 00:11:23.956 12:39:22 -- nvmf/common.sh@297 -- # local -ga x722 00:11:23.956 12:39:22 -- nvmf/common.sh@298 -- # mlx=() 00:11:23.956 12:39:22 -- nvmf/common.sh@298 -- # local -ga mlx 00:11:23.956 12:39:22 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:23.956 12:39:22 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:23.956 12:39:22 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:23.956 12:39:22 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:23.956 12:39:22 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:23.956 12:39:22 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:23.956 12:39:22 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:23.956 12:39:22 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:23.956 12:39:22 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:23.956 12:39:22 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:23.956 12:39:22 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:23.956 12:39:22 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:23.956 12:39:22 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:23.956 12:39:22 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:23.956 12:39:22 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:23.956 12:39:22 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:23.956 12:39:22 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:23.956 12:39:22 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:23.956 12:39:22 -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:11:23.956 Found 0000:82:00.0 (0x8086 - 0x159b) 00:11:23.956 12:39:22 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:23.956 12:39:22 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:23.956 12:39:22 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:23.956 12:39:22 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:23.956 12:39:22 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:23.956 12:39:22 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:23.956 12:39:22 -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:11:23.956 Found 0000:82:00.1 (0x8086 - 0x159b) 00:11:23.956 12:39:22 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:23.956 12:39:22 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:23.956 12:39:22 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:23.956 12:39:22 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:23.956 12:39:22 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:23.956 12:39:22 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:23.956 12:39:22 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:23.956 12:39:22 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:23.956 12:39:22 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:23.956 12:39:22 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:23.956 12:39:22 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:23.956 12:39:22 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:23.956 12:39:22 -- nvmf/common.sh@389 -- # echo 'Found net devices under 
0000:82:00.0: cvl_0_0' 00:11:23.956 Found net devices under 0000:82:00.0: cvl_0_0 00:11:23.956 12:39:22 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:23.956 12:39:22 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:23.956 12:39:22 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:23.956 12:39:22 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:23.956 12:39:22 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:23.956 12:39:22 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:11:23.956 Found net devices under 0000:82:00.1: cvl_0_1 00:11:23.956 12:39:22 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:23.956 12:39:22 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:11:23.956 12:39:22 -- nvmf/common.sh@403 -- # is_hw=yes 00:11:23.956 12:39:22 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:11:23.956 12:39:22 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:11:23.956 12:39:22 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:11:23.956 12:39:22 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:23.956 12:39:22 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:23.956 12:39:22 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:23.956 12:39:22 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:23.956 12:39:22 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:23.957 12:39:22 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:23.957 12:39:22 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:23.957 12:39:22 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:23.957 12:39:22 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:23.957 12:39:22 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:23.957 12:39:22 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:23.957 12:39:22 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:23.957 12:39:22 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:23.957 12:39:22 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:23.957 12:39:22 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:23.957 12:39:22 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:23.957 12:39:22 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:23.957 12:39:22 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:23.957 12:39:22 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:23.957 12:39:22 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:23.957 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:23.957 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.140 ms 00:11:23.957 00:11:23.957 --- 10.0.0.2 ping statistics --- 00:11:23.957 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:23.957 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:11:23.957 12:39:22 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:23.957 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:23.957 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.246 ms 00:11:23.957 00:11:23.957 --- 10.0.0.1 ping statistics --- 00:11:23.957 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:23.957 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:11:23.957 12:39:22 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:23.957 12:39:22 -- nvmf/common.sh@411 -- # return 0 00:11:23.957 12:39:22 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:11:23.957 12:39:22 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:23.957 12:39:22 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:11:23.957 12:39:22 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:11:23.957 12:39:22 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:23.957 12:39:22 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:11:23.957 12:39:22 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:11:23.957 12:39:22 -- target/host_management.sh@107 -- # run_test nvmf_host_management nvmf_host_management 00:11:23.957 12:39:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:23.957 12:39:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:23.957 12:39:22 -- common/autotest_common.sh@10 -- # set +x 00:11:23.957 ************************************ 00:11:23.957 START TEST nvmf_host_management 00:11:23.957 ************************************ 00:11:23.957 12:39:22 -- common/autotest_common.sh@1111 -- # nvmf_host_management 00:11:23.957 12:39:22 -- target/host_management.sh@69 -- # starttarget 00:11:23.957 12:39:22 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:11:23.957 12:39:22 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:11:23.957 12:39:22 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:23.957 12:39:22 -- common/autotest_common.sh@10 -- # set +x 00:11:23.957 12:39:22 -- nvmf/common.sh@470 -- # nvmfpid=1138584 00:11:23.957 12:39:22 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:11:23.957 12:39:22 -- nvmf/common.sh@471 -- # waitforlisten 1138584 00:11:23.957 12:39:22 -- common/autotest_common.sh@817 -- # '[' -z 1138584 ']' 00:11:23.957 12:39:22 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:23.957 12:39:22 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:23.957 12:39:22 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:23.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:23.957 12:39:22 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:23.957 12:39:22 -- common/autotest_common.sh@10 -- # set +x 00:11:23.957 [2024-04-16 12:39:22.836078] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 00:11:23.957 [2024-04-16 12:39:22.836159] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:23.957 EAL: No free 2048 kB hugepages reported on node 1 00:11:23.957 [2024-04-16 12:39:22.910335] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:23.957 [2024-04-16 12:39:23.019620] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:11:23.957 [2024-04-16 12:39:23.019679] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:23.957 [2024-04-16 12:39:23.019692] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:23.957 [2024-04-16 12:39:23.019703] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:23.957 [2024-04-16 12:39:23.019713] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:23.957 [2024-04-16 12:39:23.019799] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:23.957 [2024-04-16 12:39:23.019861] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:23.957 [2024-04-16 12:39:23.019905] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:11:23.957 [2024-04-16 12:39:23.019908] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:24.890 12:39:23 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:24.890 12:39:23 -- common/autotest_common.sh@850 -- # return 0 00:11:24.890 12:39:23 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:11:24.890 12:39:23 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:24.890 12:39:23 -- common/autotest_common.sh@10 -- # set +x 00:11:24.890 12:39:23 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:24.890 12:39:23 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:24.890 12:39:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:24.890 12:39:23 -- common/autotest_common.sh@10 -- # set +x 00:11:24.890 [2024-04-16 12:39:23.790196] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:24.890 12:39:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:24.890 12:39:23 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:11:24.890 12:39:23 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:24.890 12:39:23 -- common/autotest_common.sh@10 -- # set +x 00:11:24.890 12:39:23 -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:11:24.890 12:39:23 -- target/host_management.sh@23 -- # cat 00:11:24.890 12:39:23 -- target/host_management.sh@30 -- # rpc_cmd 00:11:24.890 12:39:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:24.890 12:39:23 -- common/autotest_common.sh@10 -- # set +x 00:11:24.890 Malloc0 00:11:24.890 [2024-04-16 12:39:23.849049] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:24.890 12:39:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:24.890 12:39:23 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:11:24.890 12:39:23 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:24.890 12:39:23 -- common/autotest_common.sh@10 -- # set +x 00:11:24.890 12:39:23 -- target/host_management.sh@73 -- # perfpid=1138757 00:11:24.890 12:39:23 -- target/host_management.sh@74 -- # waitforlisten 1138757 /var/tmp/bdevperf.sock 00:11:24.890 12:39:23 -- common/autotest_common.sh@817 -- # '[' -z 1138757 ']' 00:11:24.890 12:39:23 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:24.890 12:39:23 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:11:24.890 12:39:23 -- target/host_management.sh@72 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:11:24.890 12:39:23 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:24.890 12:39:23 -- nvmf/common.sh@521 -- # config=() 00:11:24.890 12:39:23 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:24.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:24.890 12:39:23 -- nvmf/common.sh@521 -- # local subsystem config 00:11:24.890 12:39:23 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:24.890 12:39:23 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:11:24.890 12:39:23 -- common/autotest_common.sh@10 -- # set +x 00:11:24.890 12:39:23 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:11:24.890 { 00:11:24.890 "params": { 00:11:24.890 "name": "Nvme$subsystem", 00:11:24.890 "trtype": "$TEST_TRANSPORT", 00:11:24.890 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:24.890 "adrfam": "ipv4", 00:11:24.890 "trsvcid": "$NVMF_PORT", 00:11:24.890 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:24.890 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:24.890 "hdgst": ${hdgst:-false}, 00:11:24.890 "ddgst": ${ddgst:-false} 00:11:24.890 }, 00:11:24.890 "method": "bdev_nvme_attach_controller" 00:11:24.890 } 00:11:24.890 EOF 00:11:24.890 )") 00:11:24.890 12:39:23 -- nvmf/common.sh@543 -- # cat 00:11:24.890 12:39:23 -- nvmf/common.sh@545 -- # jq . 00:11:24.890 12:39:23 -- nvmf/common.sh@546 -- # IFS=, 00:11:24.890 12:39:23 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:11:24.890 "params": { 00:11:24.890 "name": "Nvme0", 00:11:24.890 "trtype": "tcp", 00:11:24.890 "traddr": "10.0.0.2", 00:11:24.890 "adrfam": "ipv4", 00:11:24.890 "trsvcid": "4420", 00:11:24.890 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:24.890 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:11:24.890 "hdgst": false, 00:11:24.890 "ddgst": false 00:11:24.890 }, 00:11:24.890 "method": "bdev_nvme_attach_controller" 00:11:24.890 }' 00:11:24.890 [2024-04-16 12:39:23.919521] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 00:11:24.890 [2024-04-16 12:39:23.919625] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1138757 ] 00:11:24.890 EAL: No free 2048 kB hugepages reported on node 1 00:11:25.148 [2024-04-16 12:39:23.990525] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:25.148 [2024-04-16 12:39:24.098297] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:25.406 Running I/O for 10 seconds... 
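With bdevperf now driving verify I/O, the harness polls it over its private RPC socket (/var/tmp/bdevperf.sock) until read completions accumulate, and only then injects the fault. A sketch of that waitforio step as traced below; the retry count (10) and threshold (100 read ops) come from the trace, while the 1-second pacing is an assumption:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  wait_for_io() {                      # sketch of waitforio from host_management.sh
      local sock=$1 bdev=$2 i ops
      for ((i = 10; i != 0; i--)); do
          ops=$("$SPDK/scripts/rpc.py" -s "$sock" bdev_get_iostat -b "$bdev" \
                | jq -r '.bdevs[0].num_read_ops')
          [ "$ops" -ge 100 ] && return 0   # same check the trace shows: '[' 707 -ge 100 ']'
          sleep 1                          # pacing assumed; the helper may poll faster
      done
      return 1
  }
  wait_for_io /var/tmp/bdevperf.sock Nvme0n1

In the run below the very first poll already reads 707 completed ops, so the loop exits immediately and the host-removal step proceeds.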
00:11:25.981 12:39:24 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:25.981 12:39:24 -- common/autotest_common.sh@850 -- # return 0 00:11:25.981 12:39:24 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:11:25.981 12:39:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:25.981 12:39:24 -- common/autotest_common.sh@10 -- # set +x 00:11:25.981 12:39:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:25.981 12:39:24 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:25.981 12:39:24 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:11:25.981 12:39:24 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:11:25.981 12:39:24 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:11:25.981 12:39:24 -- target/host_management.sh@52 -- # local ret=1 00:11:25.981 12:39:24 -- target/host_management.sh@53 -- # local i 00:11:25.981 12:39:24 -- target/host_management.sh@54 -- # (( i = 10 )) 00:11:25.981 12:39:24 -- target/host_management.sh@54 -- # (( i != 0 )) 00:11:25.981 12:39:24 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:11:25.981 12:39:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:25.981 12:39:24 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:11:25.981 12:39:24 -- common/autotest_common.sh@10 -- # set +x 00:11:25.981 12:39:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:25.981 12:39:24 -- target/host_management.sh@55 -- # read_io_count=707 00:11:25.981 12:39:24 -- target/host_management.sh@58 -- # '[' 707 -ge 100 ']' 00:11:25.981 12:39:24 -- target/host_management.sh@59 -- # ret=0 00:11:25.981 12:39:24 -- target/host_management.sh@60 -- # break 00:11:25.981 12:39:24 -- target/host_management.sh@64 -- # return 0 00:11:25.981 12:39:24 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:11:25.981 12:39:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:25.981 12:39:24 -- common/autotest_common.sh@10 -- # set +x 00:11:25.981 [2024-04-16 12:39:24.928661] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2085cb0 is same with the state(5) to be set 00:11:25.981 [2024-04-16 12:39:24.928880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:99072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.981 [2024-04-16 12:39:24.928919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.981 [2024-04-16 12:39:24.928956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:99200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.981 [2024-04-16 12:39:24.928972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.981 [2024-04-16 12:39:24.928989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:99328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.981 [2024-04-16 12:39:24.929003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.981 [2024-04-16 12:39:24.929019] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:99456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.981 [2024-04-16 12:39:24.929032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.981 [2024-04-16 12:39:24.929047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:99584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.981 [2024-04-16 12:39:24.929061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.981 [2024-04-16 12:39:24.929076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:99712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.981 [2024-04-16 12:39:24.929090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.981 [2024-04-16 12:39:24.929120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:99840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.981 [2024-04-16 12:39:24.929135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.981 [2024-04-16 12:39:24.929151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:99968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.981 [2024-04-16 12:39:24.929165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.981 [2024-04-16 12:39:24.929180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:100096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.981 [2024-04-16 12:39:24.929194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.981 [2024-04-16 12:39:24.929210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:100224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.981 [2024-04-16 12:39:24.929233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.981 [2024-04-16 12:39:24.929249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:100352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.981 [2024-04-16 12:39:24.929263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.981 [2024-04-16 12:39:24.929278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:100480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.981 [2024-04-16 12:39:24.929292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.981 [2024-04-16 12:39:24.929307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:100608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.981 [2024-04-16 12:39:24.929320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.981 [2024-04-16 12:39:24.929334] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:100736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.981 [2024-04-16 12:39:24.929348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.981 [2024-04-16 12:39:24.929362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:100864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.982 [2024-04-16 12:39:24.929375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.982 [2024-04-16 12:39:24.929391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:100992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.982 [2024-04-16 12:39:24.929405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.982 [2024-04-16 12:39:24.929419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:101120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.982 [2024-04-16 12:39:24.929433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.982 [2024-04-16 12:39:24.929448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:101248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.982 [2024-04-16 12:39:24.929461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.982 [2024-04-16 12:39:24.929476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:101376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.982 [2024-04-16 12:39:24.929489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.982 [2024-04-16 12:39:24.929503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:101504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.982 [2024-04-16 12:39:24.929516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.982 [2024-04-16 12:39:24.929531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:101632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.982 [2024-04-16 12:39:24.929558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.982 [2024-04-16 12:39:24.929583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:101760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.982 [2024-04-16 12:39:24.929610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.982 [2024-04-16 12:39:24.929626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:101888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.982 [2024-04-16 12:39:24.929640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.982 [2024-04-16 12:39:24.929655] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:102016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.982 [2024-04-16 12:39:24.929668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.982 [2024-04-16 12:39:24.929684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:102144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.982 [2024-04-16 12:39:24.929698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.982 [2024-04-16 12:39:24.929713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:102272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.982 [2024-04-16 12:39:24.929726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.982 [2024-04-16 12:39:24.929741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:102400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.982 [2024-04-16 12:39:24.929754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.982 [2024-04-16 12:39:24.929769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:102528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.982 [2024-04-16 12:39:24.929783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.982 [2024-04-16 12:39:24.929798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:102656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.982 [2024-04-16 12:39:24.929812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.982 [2024-04-16 12:39:24.929827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:102784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.982 [2024-04-16 12:39:24.929855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.982 [2024-04-16 12:39:24.929870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:102912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.982 [2024-04-16 12:39:24.929884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.982 [2024-04-16 12:39:24.929899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:103040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.982 [2024-04-16 12:39:24.929920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.982 [2024-04-16 12:39:24.929935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:103168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.982 [2024-04-16 12:39:24.929948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.982 [2024-04-16 12:39:24.929963] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:103296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:11:25.982 [2024-04-16 12:39:24.929976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same command/completion pair repeats for WRITE cid:40-63 (lba:103424-106368) and READ cid:0-5 (lba:98304-98944), all len:128, every completion ABORTED - SQ DELETION (00/08) ...]
00:11:25.983 [2024-04-16 12:39:24.930973] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xc79330 was disconnected and freed. reset controller.
00:11:25.983 [2024-04-16 12:39:24.932105] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:11:25.983 12:39:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:11:25.983 12:39:24 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:11:25.983 12:39:24 -- common/autotest_common.sh@549 -- # xtrace_disable
00:11:25.983 12:39:24 -- common/autotest_common.sh@10 -- # set +x
00:11:25.983 task offset: 99072 on job bdev=Nvme0n1 fails
00:11:25.983
00:11:25.983 Latency(us)
00:11:25.983 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:11:25.983 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:11:25.983 Job: Nvme0n1 ended in about 0.65 seconds with error
00:11:25.983 Verification LBA range: start 0x0 length 0x400
00:11:25.983 Nvme0n1 : 0.65 1186.18 74.14 98.85 0.00 48797.87 2669.99 41360.50
00:11:25.983 ===================================================================================================================
00:11:25.983 Total : 1186.18 74.14 98.85 0.00 48797.87 2669.99 41360.50
00:11:25.983 [2024-04-16 12:39:24.934065] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:11:25.983 [2024-04-16 12:39:24.934105] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x868f00 (9): Bad file descriptor
00:11:25.983 12:39:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:11:25.983 12:39:24 -- target/host_management.sh@87 -- # sleep 1
00:11:25.983 [2024-04-16 12:39:24.994856] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
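For reference, the rpc_cmd traced at host_management.sh line 85 above is a thin wrapper over SPDK's JSON-RPC client; issued by hand against a target listening on the default /var/tmp/spdk.sock it would look roughly like this (a sketch, not part of the captured run; the repo-relative script path is assumed):

# Allow host0 to connect to subsystem cnode0 (the call the test script makes).
./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
# Optional sanity check: the host should now appear in the subsystem listing.
./scripts/rpc.py nvmf_get_subsystems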
00:11:26.916 12:39:25 -- target/host_management.sh@91 -- # kill -9 1138757 00:11:26.916 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1138757) - No such process 00:11:26.916 12:39:25 -- target/host_management.sh@91 -- # true 00:11:26.916 12:39:25 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:11:26.916 12:39:25 -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:11:26.916 12:39:25 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:11:26.916 12:39:25 -- nvmf/common.sh@521 -- # config=() 00:11:26.916 12:39:25 -- nvmf/common.sh@521 -- # local subsystem config 00:11:26.916 12:39:25 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:11:26.916 12:39:25 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:11:26.916 { 00:11:26.916 "params": { 00:11:26.916 "name": "Nvme$subsystem", 00:11:26.916 "trtype": "$TEST_TRANSPORT", 00:11:26.916 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:26.916 "adrfam": "ipv4", 00:11:26.916 "trsvcid": "$NVMF_PORT", 00:11:26.916 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:26.916 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:26.916 "hdgst": ${hdgst:-false}, 00:11:26.916 "ddgst": ${ddgst:-false} 00:11:26.916 }, 00:11:26.916 "method": "bdev_nvme_attach_controller" 00:11:26.916 } 00:11:26.916 EOF 00:11:26.916 )") 00:11:26.916 12:39:25 -- nvmf/common.sh@543 -- # cat 00:11:26.916 12:39:25 -- nvmf/common.sh@545 -- # jq . 00:11:26.916 12:39:25 -- nvmf/common.sh@546 -- # IFS=, 00:11:26.916 12:39:25 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:11:26.916 "params": { 00:11:26.916 "name": "Nvme0", 00:11:26.916 "trtype": "tcp", 00:11:26.916 "traddr": "10.0.0.2", 00:11:26.916 "adrfam": "ipv4", 00:11:26.916 "trsvcid": "4420", 00:11:26.916 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:26.916 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:11:26.916 "hdgst": false, 00:11:26.916 "ddgst": false 00:11:26.916 }, 00:11:26.916 "method": "bdev_nvme_attach_controller" 00:11:26.916 }' 00:11:27.173 [2024-04-16 12:39:25.989033] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 00:11:27.173 [2024-04-16 12:39:25.989104] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1139034 ] 00:11:27.173 EAL: No free 2048 kB hugepages reported on node 1 00:11:27.173 [2024-04-16 12:39:26.057810] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:27.173 [2024-04-16 12:39:26.167340] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:27.173 [2024-04-16 12:39:26.176153] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:11:27.454 Running I/O for 1 seconds... 
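The gen_nvmf_target_json heredoc above renders into the single bdev_nvme_attach_controller entry that bdevperf reads from /dev/fd/62; the results of the run follow below. An equivalent standalone invocation, with the same JSON in an ordinary file, is sketched here — the outer "subsystems"/"config" wrapper is the usual SPDK application config shape and is an assumption, since the capture only shows the inner entry:

cat > /tmp/nvme0.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# Same workload knobs as the traced run: queue depth 64, 64 KiB I/O, verify, 1 second.
./build/examples/bdevperf --json /tmp/nvme0.json -q 64 -o 65536 -w verify -t 1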
00:11:28.388 00:11:28.388 Latency(us) 00:11:28.388 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:28.388 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:11:28.388 Verification LBA range: start 0x0 length 0x400 00:11:28.388 Nvme0n1 : 1.05 1405.42 87.84 0.00 0.00 44862.86 11553.75 44079.03 00:11:28.388 =================================================================================================================== 00:11:28.388 Total : 1405.42 87.84 0.00 0.00 44862.86 11553.75 44079.03 00:11:28.646 12:39:27 -- target/host_management.sh@102 -- # stoptarget 00:11:28.646 12:39:27 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:11:28.646 12:39:27 -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:11:28.646 12:39:27 -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:11:28.646 12:39:27 -- target/host_management.sh@40 -- # nvmftestfini 00:11:28.646 12:39:27 -- nvmf/common.sh@477 -- # nvmfcleanup 00:11:28.646 12:39:27 -- nvmf/common.sh@117 -- # sync 00:11:28.646 12:39:27 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:28.646 12:39:27 -- nvmf/common.sh@120 -- # set +e 00:11:28.646 12:39:27 -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:28.646 12:39:27 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:28.646 rmmod nvme_tcp 00:11:28.646 rmmod nvme_fabrics 00:11:28.646 rmmod nvme_keyring 00:11:28.905 12:39:27 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:28.905 12:39:27 -- nvmf/common.sh@124 -- # set -e 00:11:28.905 12:39:27 -- nvmf/common.sh@125 -- # return 0 00:11:28.905 12:39:27 -- nvmf/common.sh@478 -- # '[' -n 1138584 ']' 00:11:28.905 12:39:27 -- nvmf/common.sh@479 -- # killprocess 1138584 00:11:28.905 12:39:27 -- common/autotest_common.sh@936 -- # '[' -z 1138584 ']' 00:11:28.905 12:39:27 -- common/autotest_common.sh@940 -- # kill -0 1138584 00:11:28.905 12:39:27 -- common/autotest_common.sh@941 -- # uname 00:11:28.905 12:39:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:28.905 12:39:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1138584 00:11:28.905 12:39:27 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:11:28.905 12:39:27 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:11:28.905 12:39:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1138584' 00:11:28.905 killing process with pid 1138584 00:11:28.905 12:39:27 -- common/autotest_common.sh@955 -- # kill 1138584 00:11:28.905 12:39:27 -- common/autotest_common.sh@960 -- # wait 1138584 00:11:29.163 [2024-04-16 12:39:28.026143] app.c: 630:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:11:29.163 12:39:28 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:11:29.163 12:39:28 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:11:29.163 12:39:28 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:11:29.163 12:39:28 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:29.163 12:39:28 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:29.163 12:39:28 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:29.163 12:39:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:29.163 12:39:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:31.068 12:39:30 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 
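The nvmftestfini teardown traced above reduces to a handful of commands; by hand they are roughly the following (a sketch — $nvmfpid stands for the target pid the harness recorded at startup):

sync                        # settle outstanding I/O before unloading initiator modules
modprobe -v -r nvme-tcp     # nvmfcleanup retries the unloads up to 20 times
modprobe -v -r nvme-fabrics
kill "$nvmfpid"             # killprocess: stop the nvmf_tgt reactor
ip -4 addr flush cvl_0_1    # drop the initiator-side 10.0.0.1/24 test address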
00:11:31.068 00:11:31.068 real 0m7.305s 00:11:31.068 user 0m22.588s 00:11:31.068 sys 0m1.448s 00:11:31.068 12:39:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:31.068 12:39:30 -- common/autotest_common.sh@10 -- # set +x 00:11:31.068 ************************************ 00:11:31.068 END TEST nvmf_host_management 00:11:31.068 ************************************ 00:11:31.068 12:39:30 -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:11:31.068 00:11:31.068 real 0m10.070s 00:11:31.068 user 0m23.572s 00:11:31.068 sys 0m3.263s 00:11:31.068 12:39:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:31.068 12:39:30 -- common/autotest_common.sh@10 -- # set +x 00:11:31.068 ************************************ 00:11:31.068 END TEST nvmf_host_management 00:11:31.068 ************************************ 00:11:31.326 12:39:30 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:11:31.326 12:39:30 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:31.326 12:39:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:31.326 12:39:30 -- common/autotest_common.sh@10 -- # set +x 00:11:31.326 ************************************ 00:11:31.326 START TEST nvmf_lvol 00:11:31.326 ************************************ 00:11:31.326 12:39:30 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:11:31.326 * Looking for test storage... 00:11:31.326 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:31.326 12:39:30 -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:31.326 12:39:30 -- nvmf/common.sh@7 -- # uname -s 00:11:31.326 12:39:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:31.326 12:39:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:31.326 12:39:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:31.326 12:39:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:31.326 12:39:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:31.326 12:39:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:31.326 12:39:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:31.326 12:39:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:31.326 12:39:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:31.326 12:39:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:31.326 12:39:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:11:31.326 12:39:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:11:31.326 12:39:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:31.326 12:39:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:31.326 12:39:30 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:31.326 12:39:30 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:31.326 12:39:30 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:31.326 12:39:30 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:31.326 12:39:30 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:31.326 12:39:30 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:31.326 12:39:30 -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... the same three toolchain dirs repeated ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.327 12:39:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[... same PATH as above ...] 00:11:31.327 12:39:30 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[... same PATH as above ...] 00:11:31.327 12:39:30 -- paths/export.sh@5 -- # export PATH 00:11:31.327 12:39:30 -- paths/export.sh@6 -- # echo [... the PATH assembled above ...] 00:11:31.327 12:39:30 -- nvmf/common.sh@47 -- # : 0 00:11:31.327 12:39:30 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:31.327 12:39:30 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:31.327 12:39:30 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:31.327 12:39:30 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:31.327 12:39:30 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:31.327 12:39:30 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:31.327 12:39:30 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:31.327 12:39:30 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:31.327 12:39:30 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:31.327 12:39:30 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:31.327 12:39:30 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:11:31.327 12:39:30 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:11:31.327 12:39:30 -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:31.327 12:39:30 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:11:31.327 12:39:30 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:11:31.327 12:39:30 -- nvmf/common.sh@435 -- # 
trap nvmftestfini SIGINT SIGTERM EXIT 00:11:31.327 12:39:30 -- nvmf/common.sh@437 -- # prepare_net_devs 00:11:31.327 12:39:30 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:11:31.327 12:39:30 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:11:31.327 12:39:30 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:31.327 12:39:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:31.327 12:39:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:31.327 12:39:30 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:11:31.327 12:39:30 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:11:31.327 12:39:30 -- nvmf/common.sh@285 -- # xtrace_disable 00:11:31.327 12:39:30 -- common/autotest_common.sh@10 -- # set +x 00:11:33.858 12:39:32 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:11:33.858 12:39:32 -- nvmf/common.sh@291 -- # pci_devs=() 00:11:33.858 12:39:32 -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:33.858 12:39:32 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:33.858 12:39:32 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:33.858 12:39:32 -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:33.858 12:39:32 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:33.858 12:39:32 -- nvmf/common.sh@295 -- # net_devs=() 00:11:33.858 12:39:32 -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:33.858 12:39:32 -- nvmf/common.sh@296 -- # e810=() 00:11:33.858 12:39:32 -- nvmf/common.sh@296 -- # local -ga e810 00:11:33.858 12:39:32 -- nvmf/common.sh@297 -- # x722=() 00:11:33.858 12:39:32 -- nvmf/common.sh@297 -- # local -ga x722 00:11:33.858 12:39:32 -- nvmf/common.sh@298 -- # mlx=() 00:11:33.858 12:39:32 -- nvmf/common.sh@298 -- # local -ga mlx 00:11:33.858 12:39:32 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:33.858 12:39:32 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:33.858 12:39:32 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:33.858 12:39:32 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:33.858 12:39:32 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:33.858 12:39:32 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:33.858 12:39:32 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:33.858 12:39:32 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:33.858 12:39:32 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:33.858 12:39:32 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:33.858 12:39:32 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:33.858 12:39:32 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:33.858 12:39:32 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:33.858 12:39:32 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:33.858 12:39:32 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:33.858 12:39:32 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:33.858 12:39:32 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:33.858 12:39:32 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:33.858 12:39:32 -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:11:33.858 Found 0000:82:00.0 (0x8086 - 0x159b) 00:11:33.858 12:39:32 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:33.858 12:39:32 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:33.858 
12:39:32 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:33.858 12:39:32 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:33.858 12:39:32 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:33.858 12:39:32 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:33.858 12:39:32 -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:11:33.858 Found 0000:82:00.1 (0x8086 - 0x159b) 00:11:33.858 12:39:32 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:33.858 12:39:32 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:33.858 12:39:32 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:33.858 12:39:32 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:33.858 12:39:32 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:33.858 12:39:32 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:33.858 12:39:32 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:33.858 12:39:32 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:33.858 12:39:32 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:33.858 12:39:32 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:33.858 12:39:32 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:33.858 12:39:32 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:33.858 12:39:32 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:11:33.858 Found net devices under 0000:82:00.0: cvl_0_0 00:11:33.858 12:39:32 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:33.858 12:39:32 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:33.858 12:39:32 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:33.858 12:39:32 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:33.858 12:39:32 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:33.858 12:39:32 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:11:33.858 Found net devices under 0000:82:00.1: cvl_0_1 00:11:33.858 12:39:32 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:33.858 12:39:32 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:11:33.858 12:39:32 -- nvmf/common.sh@403 -- # is_hw=yes 00:11:33.858 12:39:32 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:11:33.858 12:39:32 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:11:33.858 12:39:32 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:11:33.858 12:39:32 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:33.858 12:39:32 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:33.858 12:39:32 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:33.858 12:39:32 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:33.858 12:39:32 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:33.858 12:39:32 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:33.858 12:39:32 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:33.858 12:39:32 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:33.858 12:39:32 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:33.858 12:39:32 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:33.858 12:39:32 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:33.858 12:39:32 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:33.858 12:39:32 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:33.858 12:39:32 -- nvmf/common.sh@254 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:11:33.858 12:39:32 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:33.858 12:39:32 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:33.858 12:39:32 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:33.858 12:39:32 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:33.858 12:39:32 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:33.858 12:39:32 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:33.858 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:33.858 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.272 ms 00:11:33.858 00:11:33.858 --- 10.0.0.2 ping statistics --- 00:11:33.858 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:33.858 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:11:33.858 12:39:32 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:33.858 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:33.858 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.196 ms 00:11:33.858 00:11:33.858 --- 10.0.0.1 ping statistics --- 00:11:33.858 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:33.859 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:11:33.859 12:39:32 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:33.859 12:39:32 -- nvmf/common.sh@411 -- # return 0 00:11:33.859 12:39:32 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:11:33.859 12:39:32 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:33.859 12:39:32 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:11:33.859 12:39:32 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:11:33.859 12:39:32 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:33.859 12:39:32 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:11:33.859 12:39:32 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:11:34.117 12:39:32 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:11:34.117 12:39:32 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:11:34.117 12:39:32 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:34.117 12:39:32 -- common/autotest_common.sh@10 -- # set +x 00:11:34.117 12:39:32 -- nvmf/common.sh@470 -- # nvmfpid=1141549 00:11:34.117 12:39:32 -- nvmf/common.sh@471 -- # waitforlisten 1141549 00:11:34.117 12:39:32 -- common/autotest_common.sh@817 -- # '[' -z 1141549 ']' 00:11:34.117 12:39:32 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:11:34.117 12:39:32 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:34.117 12:39:32 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:34.117 12:39:32 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:34.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:34.117 12:39:32 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:34.117 12:39:32 -- common/autotest_common.sh@10 -- # set +x 00:11:34.117 [2024-04-16 12:39:32.976607] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 
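The nvmf_tcp_init trace interleaved above amounts to a small two-port topology: the first E810 port (cvl_0_0) is moved into a private namespace for the target, while the second (cvl_0_1) stays in the root namespace as the initiator. Consolidated, the setup is roughly the following (a sketch using the same interface names and addresses as the run):

ip -4 addr flush cvl_0_0                                           # start from a clean slate
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk                                       # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator address, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # open the NVMe/TCP port
ping -c 1 10.0.0.2                                                 # initiator-to-target reachability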
00:11:34.117 [2024-04-16 12:39:32.976694] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:34.117 EAL: No free 2048 kB hugepages reported on node 1 00:11:34.117 [2024-04-16 12:39:33.056887] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:34.117 [2024-04-16 12:39:33.171165] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:34.117 [2024-04-16 12:39:33.171231] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:34.117 [2024-04-16 12:39:33.171249] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:34.117 [2024-04-16 12:39:33.171264] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:34.117 [2024-04-16 12:39:33.171275] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:34.117 [2024-04-16 12:39:33.171338] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:34.117 [2024-04-16 12:39:33.171396] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:34.117 [2024-04-16 12:39:33.171393] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:35.051 12:39:33 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:35.051 12:39:33 -- common/autotest_common.sh@850 -- # return 0 00:11:35.051 12:39:33 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:11:35.051 12:39:33 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:35.051 12:39:33 -- common/autotest_common.sh@10 -- # set +x 00:11:35.051 12:39:33 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:35.051 12:39:33 -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:35.309 [2024-04-16 12:39:34.138495] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:35.309 12:39:34 -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:35.567 12:39:34 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:11:35.567 12:39:34 -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:35.825 12:39:34 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:11:35.825 12:39:34 -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:11:36.084 12:39:34 -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:11:36.342 12:39:35 -- target/nvmf_lvol.sh@29 -- # lvs=65a56cb4-2e7d-4098-a33c-73e757fcb59a 00:11:36.342 12:39:35 -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 65a56cb4-2e7d-4098-a33c-73e757fcb59a lvol 20 00:11:36.599 12:39:35 -- target/nvmf_lvol.sh@32 -- # lvol=f04b0ea2-a4dd-4f91-b733-ef51810102fe 00:11:36.599 12:39:35 -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:36.857 12:39:35 -- target/nvmf_lvol.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f04b0ea2-a4dd-4f91-b733-ef51810102fe 00:11:37.115 12:39:35 -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:11:37.115 [2024-04-16 12:39:36.146628] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:37.115 12:39:36 -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:37.371 12:39:36 -- target/nvmf_lvol.sh@42 -- # perf_pid=1141989 00:11:37.371 12:39:36 -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:11:37.371 12:39:36 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:11:37.628 EAL: No free 2048 kB hugepages reported on node 1 00:11:38.562 12:39:37 -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot f04b0ea2-a4dd-4f91-b733-ef51810102fe MY_SNAPSHOT 00:11:38.820 12:39:37 -- target/nvmf_lvol.sh@47 -- # snapshot=37e22373-b4cc-4c3e-bcdf-74f1396fd0f1 00:11:38.820 12:39:37 -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize f04b0ea2-a4dd-4f91-b733-ef51810102fe 30 00:11:39.078 12:39:38 -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 37e22373-b4cc-4c3e-bcdf-74f1396fd0f1 MY_CLONE 00:11:39.644 12:39:38 -- target/nvmf_lvol.sh@49 -- # clone=0963e47f-e67f-4481-be13-6a526df88bb8 00:11:39.644 12:39:38 -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 0963e47f-e67f-4481-be13-6a526df88bb8 00:11:40.209 12:39:38 -- target/nvmf_lvol.sh@53 -- # wait 1141989 00:11:48.314 Initializing NVMe Controllers 00:11:48.314 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:11:48.314 Controller IO queue size 128, less than required. 00:11:48.314 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:48.314 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:11:48.314 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:11:48.314 Initialization complete. Launching workers. 
00:11:48.314 ======================================================== 00:11:48.314 Latency(us) 00:11:48.314 Device Information : IOPS MiB/s Average min max 00:11:48.314 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10410.30 40.67 12303.88 2042.65 72445.19 00:11:48.314 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10369.50 40.51 12351.12 2169.76 65876.16 00:11:48.314 ======================================================== 00:11:48.314 Total : 20779.80 81.17 12327.45 2042.65 72445.19 00:11:48.314 00:11:48.314 12:39:46 -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:48.314 12:39:47 -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete f04b0ea2-a4dd-4f91-b733-ef51810102fe 00:11:48.571 12:39:47 -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 65a56cb4-2e7d-4098-a33c-73e757fcb59a 00:11:48.571 12:39:47 -- target/nvmf_lvol.sh@60 -- # rm -f 00:11:48.571 12:39:47 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:11:48.571 12:39:47 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:11:48.571 12:39:47 -- nvmf/common.sh@477 -- # nvmfcleanup 00:11:48.571 12:39:47 -- nvmf/common.sh@117 -- # sync 00:11:48.571 12:39:47 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:48.571 12:39:47 -- nvmf/common.sh@120 -- # set +e 00:11:48.571 12:39:47 -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:48.571 12:39:47 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:48.571 rmmod nvme_tcp 00:11:48.828 rmmod nvme_fabrics 00:11:48.828 rmmod nvme_keyring 00:11:48.828 12:39:47 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:48.828 12:39:47 -- nvmf/common.sh@124 -- # set -e 00:11:48.828 12:39:47 -- nvmf/common.sh@125 -- # return 0 00:11:48.828 12:39:47 -- nvmf/common.sh@478 -- # '[' -n 1141549 ']' 00:11:48.828 12:39:47 -- nvmf/common.sh@479 -- # killprocess 1141549 00:11:48.828 12:39:47 -- common/autotest_common.sh@936 -- # '[' -z 1141549 ']' 00:11:48.828 12:39:47 -- common/autotest_common.sh@940 -- # kill -0 1141549 00:11:48.828 12:39:47 -- common/autotest_common.sh@941 -- # uname 00:11:48.828 12:39:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:48.828 12:39:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1141549 00:11:48.828 12:39:47 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:48.828 12:39:47 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:48.828 12:39:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1141549' 00:11:48.828 killing process with pid 1141549 00:11:48.828 12:39:47 -- common/autotest_common.sh@955 -- # kill 1141549 00:11:48.828 12:39:47 -- common/autotest_common.sh@960 -- # wait 1141549 00:11:49.086 12:39:48 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:11:49.086 12:39:48 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:11:49.086 12:39:48 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:11:49.086 12:39:48 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:49.086 12:39:48 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:49.086 12:39:48 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:49.086 12:39:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:49.086 12:39:48 -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:11:51.620 12:39:50 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:51.620 00:11:51.620 real 0m19.841s 00:11:51.620 user 1m6.276s 00:11:51.620 sys 0m6.003s 00:11:51.620 12:39:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:51.620 12:39:50 -- common/autotest_common.sh@10 -- # set +x 00:11:51.620 ************************************ 00:11:51.620 END TEST nvmf_lvol 00:11:51.620 ************************************ 00:11:51.620 12:39:50 -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:11:51.620 12:39:50 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:51.620 12:39:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:51.620 12:39:50 -- common/autotest_common.sh@10 -- # set +x 00:11:51.620 ************************************ 00:11:51.620 START TEST nvmf_lvs_grow 00:11:51.620 ************************************ 00:11:51.620 12:39:50 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:11:51.620 * Looking for test storage... 00:11:51.620 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:51.620 12:39:50 -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:51.620 12:39:50 -- nvmf/common.sh@7 -- # uname -s 00:11:51.620 12:39:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:51.620 12:39:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:51.620 12:39:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:51.620 12:39:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:51.620 12:39:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:51.620 12:39:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:51.620 12:39:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:51.620 12:39:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:51.620 12:39:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:51.620 12:39:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:51.620 12:39:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:11:51.620 12:39:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:11:51.620 12:39:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:51.620 12:39:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:51.620 12:39:50 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:51.620 12:39:50 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:51.620 12:39:50 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:51.620 12:39:50 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:51.620 12:39:50 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:51.620 12:39:50 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:51.620 12:39:50 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... the same three toolchain dirs repeated ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:51.620 12:39:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[... same PATH as above ...] 00:11:51.620 12:39:50 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[... same PATH as above ...] 00:11:51.620 12:39:50 -- paths/export.sh@5 -- # export PATH 00:11:51.620 12:39:50 -- paths/export.sh@6 -- # echo [... the PATH assembled above ...] 00:11:51.620 12:39:50 -- nvmf/common.sh@47 -- # : 0 00:11:51.620 12:39:50 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:51.620 12:39:50 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:51.620 12:39:50 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:51.620 12:39:50 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:51.620 12:39:50 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:51.620 12:39:50 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:51.620 12:39:50 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:51.620 12:39:50 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:51.620 12:39:50 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:51.620 12:39:50 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:51.620 12:39:50 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:11:51.620 12:39:50 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:11:51.620 12:39:50 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:51.620 12:39:50 -- nvmf/common.sh@437 -- # prepare_net_devs 00:11:51.620 12:39:50 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:11:51.620 12:39:50 -- nvmf/common.sh@401 -- # 
remove_spdk_ns 00:11:51.620 12:39:50 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:51.620 12:39:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:51.620 12:39:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:51.620 12:39:50 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:11:51.620 12:39:50 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:11:51.620 12:39:50 -- nvmf/common.sh@285 -- # xtrace_disable 00:11:51.620 12:39:50 -- common/autotest_common.sh@10 -- # set +x 00:11:54.154 12:39:52 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:11:54.154 12:39:52 -- nvmf/common.sh@291 -- # pci_devs=() 00:11:54.154 12:39:52 -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:54.154 12:39:52 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:54.154 12:39:52 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:54.154 12:39:52 -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:54.154 12:39:52 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:54.154 12:39:52 -- nvmf/common.sh@295 -- # net_devs=() 00:11:54.154 12:39:52 -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:54.154 12:39:52 -- nvmf/common.sh@296 -- # e810=() 00:11:54.154 12:39:52 -- nvmf/common.sh@296 -- # local -ga e810 00:11:54.154 12:39:52 -- nvmf/common.sh@297 -- # x722=() 00:11:54.154 12:39:52 -- nvmf/common.sh@297 -- # local -ga x722 00:11:54.154 12:39:52 -- nvmf/common.sh@298 -- # mlx=() 00:11:54.154 12:39:52 -- nvmf/common.sh@298 -- # local -ga mlx 00:11:54.154 12:39:52 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:54.154 12:39:52 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:54.154 12:39:52 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:54.154 12:39:52 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:54.154 12:39:52 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:54.154 12:39:52 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:54.154 12:39:52 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:54.154 12:39:52 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:54.154 12:39:52 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:54.154 12:39:52 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:54.154 12:39:52 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:54.154 12:39:52 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:54.154 12:39:52 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:54.154 12:39:52 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:54.154 12:39:52 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:54.154 12:39:52 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:54.155 12:39:52 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:54.155 12:39:52 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:54.155 12:39:52 -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:11:54.155 Found 0000:82:00.0 (0x8086 - 0x159b) 00:11:54.155 12:39:52 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:54.155 12:39:52 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:54.155 12:39:52 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:54.155 12:39:52 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:54.155 12:39:52 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:54.155 
12:39:52 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:54.155 12:39:52 -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:11:54.155 Found 0000:82:00.1 (0x8086 - 0x159b) 00:11:54.155 12:39:52 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:54.155 12:39:52 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:54.155 12:39:52 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:54.155 12:39:52 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:54.155 12:39:52 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:54.155 12:39:52 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:54.155 12:39:52 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:54.155 12:39:52 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:54.155 12:39:52 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:54.155 12:39:52 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:54.155 12:39:52 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:54.155 12:39:52 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:54.155 12:39:52 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:11:54.155 Found net devices under 0000:82:00.0: cvl_0_0 00:11:54.155 12:39:52 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:54.155 12:39:52 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:54.155 12:39:52 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:54.155 12:39:52 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:54.155 12:39:52 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:54.155 12:39:52 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:11:54.155 Found net devices under 0000:82:00.1: cvl_0_1 00:11:54.155 12:39:52 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:54.155 12:39:52 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:11:54.155 12:39:52 -- nvmf/common.sh@403 -- # is_hw=yes 00:11:54.155 12:39:52 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:11:54.155 12:39:52 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:11:54.155 12:39:52 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:11:54.155 12:39:52 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:54.155 12:39:52 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:54.155 12:39:52 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:54.155 12:39:52 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:54.155 12:39:52 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:54.155 12:39:52 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:54.155 12:39:52 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:54.155 12:39:52 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:54.155 12:39:52 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:54.155 12:39:52 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:54.155 12:39:52 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:54.155 12:39:52 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:54.155 12:39:52 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:54.155 12:39:52 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:54.155 12:39:52 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:54.155 12:39:52 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:54.155 
12:39:52 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:54.155 12:39:52 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:54.155 12:39:52 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:54.155 12:39:52 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:54.155 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:54.155 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.219 ms 00:11:54.155 00:11:54.155 --- 10.0.0.2 ping statistics --- 00:11:54.155 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:54.155 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:11:54.155 12:39:52 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:54.155 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:54.155 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms 00:11:54.155 00:11:54.155 --- 10.0.0.1 ping statistics --- 00:11:54.155 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:54.155 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:11:54.155 12:39:52 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:54.155 12:39:52 -- nvmf/common.sh@411 -- # return 0 00:11:54.155 12:39:52 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:11:54.155 12:39:52 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:54.155 12:39:52 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:11:54.155 12:39:52 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:11:54.155 12:39:52 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:54.155 12:39:52 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:11:54.155 12:39:52 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:11:54.155 12:39:52 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:11:54.155 12:39:52 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:11:54.155 12:39:52 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:54.155 12:39:52 -- common/autotest_common.sh@10 -- # set +x 00:11:54.155 12:39:52 -- nvmf/common.sh@470 -- # nvmfpid=1145661 00:11:54.155 12:39:52 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:11:54.155 12:39:52 -- nvmf/common.sh@471 -- # waitforlisten 1145661 00:11:54.155 12:39:52 -- common/autotest_common.sh@817 -- # '[' -z 1145661 ']' 00:11:54.155 12:39:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:54.155 12:39:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:54.155 12:39:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:54.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:54.155 12:39:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:54.155 12:39:52 -- common/autotest_common.sh@10 -- # set +x 00:11:54.155 [2024-04-16 12:39:52.936913] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 
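As in the lvol run above, nvmfappstart launches a dedicated single-core target inside the namespace and then drives it over the default RPC socket; by hand that is roughly the following (a sketch — repo-relative paths assumed, and the $! pid capture is added here for illustration):

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
nvmfpid=$!                                         # waitforlisten polls this pid's RPC socket
# Once /var/tmp/spdk.sock answers, create the TCP transport (nvmf_lvs_grow.sh line 99):
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192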
00:11:54.155 [2024-04-16 12:39:52.936995] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:54.155 EAL: No free 2048 kB hugepages reported on node 1 00:11:54.155 [2024-04-16 12:39:53.015870] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:54.155 [2024-04-16 12:39:53.131278] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:54.155 [2024-04-16 12:39:53.131343] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:54.155 [2024-04-16 12:39:53.131357] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:54.155 [2024-04-16 12:39:53.131368] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:54.155 [2024-04-16 12:39:53.131378] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:54.155 [2024-04-16 12:39:53.131425] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:54.413 12:39:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:54.413 12:39:53 -- common/autotest_common.sh@850 -- # return 0 00:11:54.413 12:39:53 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:11:54.413 12:39:53 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:54.413 12:39:53 -- common/autotest_common.sh@10 -- # set +x 00:11:54.413 12:39:53 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:54.413 12:39:53 -- target/nvmf_lvs_grow.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:54.670 [2024-04-16 12:39:53.538712] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:54.670 12:39:53 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:11:54.670 12:39:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:54.670 12:39:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:54.671 12:39:53 -- common/autotest_common.sh@10 -- # set +x 00:11:54.671 ************************************ 00:11:54.671 START TEST lvs_grow_clean 00:11:54.671 ************************************ 00:11:54.671 12:39:53 -- common/autotest_common.sh@1111 -- # lvs_grow 00:11:54.671 12:39:53 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:11:54.671 12:39:53 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:11:54.671 12:39:53 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:11:54.671 12:39:53 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:11:54.671 12:39:53 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:11:54.671 12:39:53 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:11:54.671 12:39:53 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:54.671 12:39:53 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:54.671 12:39:53 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:54.928 12:39:53 -- 
target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:11:54.928 12:39:53 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:11:55.186 12:39:54 -- target/nvmf_lvs_grow.sh@28 -- # lvs=f5163629-df97-47cd-b424-109129b23666 00:11:55.186 12:39:54 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f5163629-df97-47cd-b424-109129b23666 00:11:55.186 12:39:54 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:11:55.444 12:39:54 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:11:55.444 12:39:54 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:11:55.444 12:39:54 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u f5163629-df97-47cd-b424-109129b23666 lvol 150 00:11:55.702 12:39:54 -- target/nvmf_lvs_grow.sh@33 -- # lvol=75217a42-ccab-4f5e-b151-841343867969 00:11:55.702 12:39:54 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:55.702 12:39:54 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:11:55.960 [2024-04-16 12:39:54.945764] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:11:55.960 [2024-04-16 12:39:54.945873] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:11:55.960 true 00:11:55.960 12:39:54 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f5163629-df97-47cd-b424-109129b23666 00:11:55.960 12:39:54 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:11:56.218 12:39:55 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:11:56.218 12:39:55 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:56.476 12:39:55 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 75217a42-ccab-4f5e-b151-841343867969 00:11:56.733 12:39:55 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:11:56.991 [2024-04-16 12:39:55.916819] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:56.991 12:39:55 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:57.259 12:39:56 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1146102 00:11:57.260 12:39:56 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:11:57.260 12:39:56 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:57.260 12:39:56 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1146102 
/var/tmp/bdevperf.sock 00:11:57.260 12:39:56 -- common/autotest_common.sh@817 -- # '[' -z 1146102 ']' 00:11:57.260 12:39:56 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:57.260 12:39:56 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:57.260 12:39:56 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:57.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:57.260 12:39:56 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:57.260 12:39:56 -- common/autotest_common.sh@10 -- # set +x 00:11:57.260 [2024-04-16 12:39:56.213627] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 00:11:57.260 [2024-04-16 12:39:56.213701] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1146102 ] 00:11:57.260 EAL: No free 2048 kB hugepages reported on node 1 00:11:57.260 [2024-04-16 12:39:56.284119] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:57.517 [2024-04-16 12:39:56.399647] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:57.517 12:39:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:57.517 12:39:56 -- common/autotest_common.sh@850 -- # return 0 00:11:57.517 12:39:56 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:11:57.775 Nvme0n1 00:11:57.775 12:39:56 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:11:58.033 [ 00:11:58.033 { 00:11:58.033 "name": "Nvme0n1", 00:11:58.033 "aliases": [ 00:11:58.033 "75217a42-ccab-4f5e-b151-841343867969" 00:11:58.033 ], 00:11:58.033 "product_name": "NVMe disk", 00:11:58.033 "block_size": 4096, 00:11:58.033 "num_blocks": 38912, 00:11:58.033 "uuid": "75217a42-ccab-4f5e-b151-841343867969", 00:11:58.033 "assigned_rate_limits": { 00:11:58.033 "rw_ios_per_sec": 0, 00:11:58.033 "rw_mbytes_per_sec": 0, 00:11:58.033 "r_mbytes_per_sec": 0, 00:11:58.033 "w_mbytes_per_sec": 0 00:11:58.033 }, 00:11:58.033 "claimed": false, 00:11:58.033 "zoned": false, 00:11:58.033 "supported_io_types": { 00:11:58.033 "read": true, 00:11:58.033 "write": true, 00:11:58.033 "unmap": true, 00:11:58.033 "write_zeroes": true, 00:11:58.033 "flush": true, 00:11:58.033 "reset": true, 00:11:58.033 "compare": true, 00:11:58.033 "compare_and_write": true, 00:11:58.033 "abort": true, 00:11:58.033 "nvme_admin": true, 00:11:58.033 "nvme_io": true 00:11:58.033 }, 00:11:58.033 "memory_domains": [ 00:11:58.033 { 00:11:58.033 "dma_device_id": "system", 00:11:58.033 "dma_device_type": 1 00:11:58.033 } 00:11:58.033 ], 00:11:58.033 "driver_specific": { 00:11:58.033 "nvme": [ 00:11:58.033 { 00:11:58.033 "trid": { 00:11:58.033 "trtype": "TCP", 00:11:58.033 "adrfam": "IPv4", 00:11:58.033 "traddr": "10.0.0.2", 00:11:58.033 "trsvcid": "4420", 00:11:58.033 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:11:58.033 }, 00:11:58.033 "ctrlr_data": { 00:11:58.033 "cntlid": 1, 00:11:58.033 "vendor_id": "0x8086", 00:11:58.033 "model_number": "SPDK bdev Controller", 00:11:58.033 "serial_number": "SPDK0", 
00:11:58.033 "firmware_revision": "24.05", 00:11:58.033 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:58.033 "oacs": { 00:11:58.033 "security": 0, 00:11:58.033 "format": 0, 00:11:58.033 "firmware": 0, 00:11:58.033 "ns_manage": 0 00:11:58.033 }, 00:11:58.033 "multi_ctrlr": true, 00:11:58.033 "ana_reporting": false 00:11:58.033 }, 00:11:58.033 "vs": { 00:11:58.033 "nvme_version": "1.3" 00:11:58.033 }, 00:11:58.033 "ns_data": { 00:11:58.033 "id": 1, 00:11:58.033 "can_share": true 00:11:58.033 } 00:11:58.033 } 00:11:58.033 ], 00:11:58.033 "mp_policy": "active_passive" 00:11:58.033 } 00:11:58.033 } 00:11:58.033 ] 00:11:58.033 12:39:57 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1146138 00:11:58.033 12:39:57 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:11:58.033 12:39:57 -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:11:58.290 Running I/O for 10 seconds... 00:11:59.225 Latency(us) 00:11:59.225 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:59.225 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:59.225 Nvme0n1 : 1.00 14701.00 57.43 0.00 0.00 0.00 0.00 0.00 00:11:59.225 =================================================================================================================== 00:11:59.225 Total : 14701.00 57.43 0.00 0.00 0.00 0.00 0.00 00:11:59.225 00:12:00.159 12:39:59 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u f5163629-df97-47cd-b424-109129b23666 00:12:00.159 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:00.159 Nvme0n1 : 2.00 14565.50 56.90 0.00 0.00 0.00 0.00 0.00 00:12:00.159 =================================================================================================================== 00:12:00.159 Total : 14565.50 56.90 0.00 0.00 0.00 0.00 0.00 00:12:00.159 00:12:00.417 true 00:12:00.417 12:39:59 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f5163629-df97-47cd-b424-109129b23666 00:12:00.417 12:39:59 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:12:00.675 12:39:59 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:12:00.675 12:39:59 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:12:00.675 12:39:59 -- target/nvmf_lvs_grow.sh@65 -- # wait 1146138 00:12:01.240 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:01.240 Nvme0n1 : 3.00 14628.33 57.14 0.00 0.00 0.00 0.00 0.00 00:12:01.240 =================================================================================================================== 00:12:01.240 Total : 14628.33 57.14 0.00 0.00 0.00 0.00 0.00 00:12:01.240 00:12:02.196 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:02.196 Nvme0n1 : 4.00 14749.75 57.62 0.00 0.00 0.00 0.00 0.00 00:12:02.196 =================================================================================================================== 00:12:02.196 Total : 14749.75 57.62 0.00 0.00 0.00 0.00 0.00 00:12:02.196 00:12:03.139 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:03.139 Nvme0n1 : 5.00 14751.80 57.62 0.00 0.00 0.00 0.00 0.00 00:12:03.139 =================================================================================================================== 00:12:03.139 Total : 
14751.80 57.62 0.00 0.00 0.00 0.00 0.00 00:12:03.139 00:12:04.512 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:04.512 Nvme0n1 : 6.00 14858.17 58.04 0.00 0.00 0.00 0.00 0.00 00:12:04.512 =================================================================================================================== 00:12:04.512 Total : 14858.17 58.04 0.00 0.00 0.00 0.00 0.00 00:12:04.512 00:12:05.445 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:05.445 Nvme0n1 : 7.00 14855.57 58.03 0.00 0.00 0.00 0.00 0.00 00:12:05.445 =================================================================================================================== 00:12:05.445 Total : 14855.57 58.03 0.00 0.00 0.00 0.00 0.00 00:12:05.445 00:12:06.377 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:06.377 Nvme0n1 : 8.00 14842.50 57.98 0.00 0.00 0.00 0.00 0.00 00:12:06.377 =================================================================================================================== 00:12:06.377 Total : 14842.50 57.98 0.00 0.00 0.00 0.00 0.00 00:12:06.377 00:12:07.313 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:07.313 Nvme0n1 : 9.00 14837.78 57.96 0.00 0.00 0.00 0.00 0.00 00:12:07.313 =================================================================================================================== 00:12:07.313 Total : 14837.78 57.96 0.00 0.00 0.00 0.00 0.00 00:12:07.313 00:12:08.246 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:08.246 Nvme0n1 : 10.00 14907.10 58.23 0.00 0.00 0.00 0.00 0.00 00:12:08.246 =================================================================================================================== 00:12:08.247 Total : 14907.10 58.23 0.00 0.00 0.00 0.00 0.00 00:12:08.247 00:12:08.247 00:12:08.247 Latency(us) 00:12:08.247 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:08.247 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:08.247 Nvme0n1 : 10.00 14906.03 58.23 0.00 0.00 8581.48 4417.61 16893.72 00:12:08.247 =================================================================================================================== 00:12:08.247 Total : 14906.03 58.23 0.00 0.00 8581.48 4417.61 16893.72 00:12:08.247 0 00:12:08.247 12:40:07 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1146102 00:12:08.247 12:40:07 -- common/autotest_common.sh@936 -- # '[' -z 1146102 ']' 00:12:08.247 12:40:07 -- common/autotest_common.sh@940 -- # kill -0 1146102 00:12:08.247 12:40:07 -- common/autotest_common.sh@941 -- # uname 00:12:08.247 12:40:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:08.247 12:40:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1146102 00:12:08.247 12:40:07 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:12:08.247 12:40:07 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:12:08.247 12:40:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1146102' 00:12:08.247 killing process with pid 1146102 00:12:08.247 12:40:07 -- common/autotest_common.sh@955 -- # kill 1146102 00:12:08.247 Received shutdown signal, test time was about 10.000000 seconds 00:12:08.247 00:12:08.247 Latency(us) 00:12:08.247 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:08.247 =================================================================================================================== 
00:12:08.247 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:08.247 12:40:07 -- common/autotest_common.sh@960 -- # wait 1146102 00:12:08.504 12:40:07 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:08.762 12:40:07 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f5163629-df97-47cd-b424-109129b23666 00:12:08.762 12:40:07 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:12:09.021 12:40:08 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:12:09.021 12:40:08 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:12:09.021 12:40:08 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:09.280 [2024-04-16 12:40:08.261308] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:12:09.280 12:40:08 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f5163629-df97-47cd-b424-109129b23666 00:12:09.280 12:40:08 -- common/autotest_common.sh@638 -- # local es=0 00:12:09.280 12:40:08 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f5163629-df97-47cd-b424-109129b23666 00:12:09.280 12:40:08 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:09.280 12:40:08 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:09.280 12:40:08 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:09.280 12:40:08 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:09.280 12:40:08 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:09.280 12:40:08 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:09.280 12:40:08 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:09.280 12:40:08 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:12:09.281 12:40:08 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f5163629-df97-47cd-b424-109129b23666 00:12:09.538 request: 00:12:09.538 { 00:12:09.538 "uuid": "f5163629-df97-47cd-b424-109129b23666", 00:12:09.538 "method": "bdev_lvol_get_lvstores", 00:12:09.538 "req_id": 1 00:12:09.538 } 00:12:09.538 Got JSON-RPC error response 00:12:09.538 response: 00:12:09.538 { 00:12:09.538 "code": -19, 00:12:09.538 "message": "No such device" 00:12:09.538 } 00:12:09.538 12:40:08 -- common/autotest_common.sh@641 -- # es=1 00:12:09.538 12:40:08 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:12:09.539 12:40:08 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:12:09.539 12:40:08 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:12:09.539 12:40:08 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:09.796 aio_bdev 00:12:09.796 12:40:08 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 
75217a42-ccab-4f5e-b151-841343867969 00:12:09.796 12:40:08 -- common/autotest_common.sh@885 -- # local bdev_name=75217a42-ccab-4f5e-b151-841343867969 00:12:09.796 12:40:08 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:12:09.796 12:40:08 -- common/autotest_common.sh@887 -- # local i 00:12:09.797 12:40:08 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:12:09.797 12:40:08 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:12:09.797 12:40:08 -- common/autotest_common.sh@890 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:10.054 12:40:09 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 75217a42-ccab-4f5e-b151-841343867969 -t 2000 00:12:10.312 [ 00:12:10.312 { 00:12:10.312 "name": "75217a42-ccab-4f5e-b151-841343867969", 00:12:10.312 "aliases": [ 00:12:10.312 "lvs/lvol" 00:12:10.312 ], 00:12:10.312 "product_name": "Logical Volume", 00:12:10.312 "block_size": 4096, 00:12:10.312 "num_blocks": 38912, 00:12:10.312 "uuid": "75217a42-ccab-4f5e-b151-841343867969", 00:12:10.312 "assigned_rate_limits": { 00:12:10.312 "rw_ios_per_sec": 0, 00:12:10.312 "rw_mbytes_per_sec": 0, 00:12:10.312 "r_mbytes_per_sec": 0, 00:12:10.312 "w_mbytes_per_sec": 0 00:12:10.312 }, 00:12:10.312 "claimed": false, 00:12:10.312 "zoned": false, 00:12:10.312 "supported_io_types": { 00:12:10.312 "read": true, 00:12:10.312 "write": true, 00:12:10.312 "unmap": true, 00:12:10.312 "write_zeroes": true, 00:12:10.312 "flush": false, 00:12:10.312 "reset": true, 00:12:10.312 "compare": false, 00:12:10.312 "compare_and_write": false, 00:12:10.312 "abort": false, 00:12:10.313 "nvme_admin": false, 00:12:10.313 "nvme_io": false 00:12:10.313 }, 00:12:10.313 "driver_specific": { 00:12:10.313 "lvol": { 00:12:10.313 "lvol_store_uuid": "f5163629-df97-47cd-b424-109129b23666", 00:12:10.313 "base_bdev": "aio_bdev", 00:12:10.313 "thin_provision": false, 00:12:10.313 "snapshot": false, 00:12:10.313 "clone": false, 00:12:10.313 "esnap_clone": false 00:12:10.313 } 00:12:10.313 } 00:12:10.313 } 00:12:10.313 ] 00:12:10.313 12:40:09 -- common/autotest_common.sh@893 -- # return 0 00:12:10.313 12:40:09 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f5163629-df97-47cd-b424-109129b23666 00:12:10.313 12:40:09 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:12:10.571 12:40:09 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:12:10.571 12:40:09 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f5163629-df97-47cd-b424-109129b23666 00:12:10.571 12:40:09 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:12:10.828 12:40:09 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:12:10.828 12:40:09 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 75217a42-ccab-4f5e-b151-841343867969 00:12:11.086 12:40:09 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f5163629-df97-47cd-b424-109129b23666 00:12:11.343 12:40:10 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:11.601 12:40:10 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 
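(Editor's sketch: the lvs_grow_clean sequence that just finished, gathered into one place. rpc.py is the repo-relative scripts/rpc.py, AIO_FILE stands for test/nvmf/target/aio_bdev, and $lvs for the store UUID f5163629-df97-47cd-b424-109129b23666 printed above; the 49 -> 99 cluster counts are the ones observed in this run.)

    truncate -s 200M "$AIO_FILE"
    scripts/rpc.py bdev_aio_create "$AIO_FILE" aio_bdev 4096
    lvs=$(scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
          --md-pages-per-cluster-ratio 300 aio_bdev lvs)
    scripts/rpc.py bdev_lvol_create -u "$lvs" lvol 150
    truncate -s 400M "$AIO_FILE"              # grow the backing file...
    scripts/rpc.py bdev_aio_rescan aio_bdev   # ...and rescan (51200 -> 102400 blocks)
    scripts/rpc.py bdev_lvol_grow_lvstore -u "$lvs"
    scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'  # 49 -> 99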
00:12:11.601 00:12:11.601 real 0m16.915s 00:12:11.601 user 0m16.327s 00:12:11.601 sys 0m1.889s 00:12:11.601 12:40:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:11.601 12:40:10 -- common/autotest_common.sh@10 -- # set +x 00:12:11.601 ************************************ 00:12:11.601 END TEST lvs_grow_clean 00:12:11.601 ************************************ 00:12:11.601 12:40:10 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:12:11.601 12:40:10 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:11.601 12:40:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:11.601 12:40:10 -- common/autotest_common.sh@10 -- # set +x 00:12:11.859 ************************************ 00:12:11.859 START TEST lvs_grow_dirty 00:12:11.859 ************************************ 00:12:11.859 12:40:10 -- common/autotest_common.sh@1111 -- # lvs_grow dirty 00:12:11.859 12:40:10 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:12:11.859 12:40:10 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:12:11.859 12:40:10 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:12:11.859 12:40:10 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:12:11.859 12:40:10 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:12:11.859 12:40:10 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:12:11.859 12:40:10 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:11.859 12:40:10 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:11.859 12:40:10 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:12.116 12:40:10 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:12:12.116 12:40:10 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:12:12.386 12:40:11 -- target/nvmf_lvs_grow.sh@28 -- # lvs=899b5b64-8d2c-459e-bd51-eaacac412d97 00:12:12.386 12:40:11 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 899b5b64-8d2c-459e-bd51-eaacac412d97 00:12:12.386 12:40:11 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:12:12.386 12:40:11 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:12:12.386 12:40:11 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:12:12.386 12:40:11 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 899b5b64-8d2c-459e-bd51-eaacac412d97 lvol 150 00:12:12.951 12:40:11 -- target/nvmf_lvs_grow.sh@33 -- # lvol=5afa2028-8b80-4015-aa13-c41f8ca46d11 00:12:12.951 12:40:11 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:12.951 12:40:11 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:12:12.951 [2024-04-16 12:40:11.973838] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 
51200, new block count 102400 00:12:12.951 [2024-04-16 12:40:11.973942] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:12:12.951 true 00:12:12.951 12:40:11 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 899b5b64-8d2c-459e-bd51-eaacac412d97 00:12:12.951 12:40:11 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:12:13.209 12:40:12 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:12:13.209 12:40:12 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:13.467 12:40:12 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 5afa2028-8b80-4015-aa13-c41f8ca46d11 00:12:13.725 12:40:12 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:12:13.982 12:40:13 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:14.240 12:40:13 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1148161 00:12:14.240 12:40:13 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:12:14.240 12:40:13 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:14.240 12:40:13 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1148161 /var/tmp/bdevperf.sock 00:12:14.240 12:40:13 -- common/autotest_common.sh@817 -- # '[' -z 1148161 ']' 00:12:14.240 12:40:13 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:14.240 12:40:13 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:14.241 12:40:13 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:14.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:14.241 12:40:13 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:14.241 12:40:13 -- common/autotest_common.sh@10 -- # set +x 00:12:14.241 [2024-04-16 12:40:13.306525] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 
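(Editor's sketch: how the freshly created lvol is exported over NVMe/TCP and attached by bdevperf, per the commands in this run. $lvol stands for the lvol UUID 5afa2028-8b80-4015-aa13-c41f8ca46d11; the tcp transport was created earlier with nvmf_create_transport -t tcp -o -u 8192.)

    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests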
00:12:14.241 [2024-04-16 12:40:13.306644] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1148161 ] 00:12:14.499 EAL: No free 2048 kB hugepages reported on node 1 00:12:14.499 [2024-04-16 12:40:13.382352] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:14.499 [2024-04-16 12:40:13.491664] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:14.757 12:40:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:14.757 12:40:13 -- common/autotest_common.sh@850 -- # return 0 00:12:14.757 12:40:13 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:12:15.014 Nvme0n1 00:12:15.014 12:40:13 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:12:15.272 [ 00:12:15.272 { 00:12:15.272 "name": "Nvme0n1", 00:12:15.272 "aliases": [ 00:12:15.272 "5afa2028-8b80-4015-aa13-c41f8ca46d11" 00:12:15.272 ], 00:12:15.272 "product_name": "NVMe disk", 00:12:15.272 "block_size": 4096, 00:12:15.272 "num_blocks": 38912, 00:12:15.272 "uuid": "5afa2028-8b80-4015-aa13-c41f8ca46d11", 00:12:15.272 "assigned_rate_limits": { 00:12:15.272 "rw_ios_per_sec": 0, 00:12:15.272 "rw_mbytes_per_sec": 0, 00:12:15.272 "r_mbytes_per_sec": 0, 00:12:15.272 "w_mbytes_per_sec": 0 00:12:15.272 }, 00:12:15.272 "claimed": false, 00:12:15.272 "zoned": false, 00:12:15.272 "supported_io_types": { 00:12:15.272 "read": true, 00:12:15.272 "write": true, 00:12:15.272 "unmap": true, 00:12:15.272 "write_zeroes": true, 00:12:15.272 "flush": true, 00:12:15.272 "reset": true, 00:12:15.272 "compare": true, 00:12:15.272 "compare_and_write": true, 00:12:15.272 "abort": true, 00:12:15.272 "nvme_admin": true, 00:12:15.272 "nvme_io": true 00:12:15.272 }, 00:12:15.272 "memory_domains": [ 00:12:15.272 { 00:12:15.272 "dma_device_id": "system", 00:12:15.272 "dma_device_type": 1 00:12:15.272 } 00:12:15.272 ], 00:12:15.272 "driver_specific": { 00:12:15.272 "nvme": [ 00:12:15.272 { 00:12:15.272 "trid": { 00:12:15.272 "trtype": "TCP", 00:12:15.272 "adrfam": "IPv4", 00:12:15.272 "traddr": "10.0.0.2", 00:12:15.272 "trsvcid": "4420", 00:12:15.272 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:12:15.272 }, 00:12:15.272 "ctrlr_data": { 00:12:15.272 "cntlid": 1, 00:12:15.272 "vendor_id": "0x8086", 00:12:15.273 "model_number": "SPDK bdev Controller", 00:12:15.273 "serial_number": "SPDK0", 00:12:15.273 "firmware_revision": "24.05", 00:12:15.273 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:15.273 "oacs": { 00:12:15.273 "security": 0, 00:12:15.273 "format": 0, 00:12:15.273 "firmware": 0, 00:12:15.273 "ns_manage": 0 00:12:15.273 }, 00:12:15.273 "multi_ctrlr": true, 00:12:15.273 "ana_reporting": false 00:12:15.273 }, 00:12:15.273 "vs": { 00:12:15.273 "nvme_version": "1.3" 00:12:15.273 }, 00:12:15.273 "ns_data": { 00:12:15.273 "id": 1, 00:12:15.273 "can_share": true 00:12:15.273 } 00:12:15.273 } 00:12:15.273 ], 00:12:15.273 "mp_policy": "active_passive" 00:12:15.273 } 00:12:15.273 } 00:12:15.273 ] 00:12:15.273 12:40:14 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1148297 00:12:15.273 12:40:14 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:12:15.273 12:40:14 -- target/nvmf_lvs_grow.sh@55 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:15.530 Running I/O for 10 seconds... 00:12:16.464 Latency(us) 00:12:16.464 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:16.464 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:16.464 Nvme0n1 : 1.00 14831.00 57.93 0.00 0.00 0.00 0.00 0.00 00:12:16.464 =================================================================================================================== 00:12:16.464 Total : 14831.00 57.93 0.00 0.00 0.00 0.00 0.00 00:12:16.464 00:12:17.395 12:40:16 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 899b5b64-8d2c-459e-bd51-eaacac412d97 00:12:17.395 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:17.395 Nvme0n1 : 2.00 14572.00 56.92 0.00 0.00 0.00 0.00 0.00 00:12:17.395 =================================================================================================================== 00:12:17.395 Total : 14572.00 56.92 0.00 0.00 0.00 0.00 0.00 00:12:17.395 00:12:17.653 true 00:12:17.653 12:40:16 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 899b5b64-8d2c-459e-bd51-eaacac412d97 00:12:17.653 12:40:16 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:12:17.911 12:40:16 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:12:17.911 12:40:16 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:12:17.911 12:40:16 -- target/nvmf_lvs_grow.sh@65 -- # wait 1148297 00:12:18.477 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:18.477 Nvme0n1 : 3.00 14580.00 56.95 0.00 0.00 0.00 0.00 0.00 00:12:18.477 =================================================================================================================== 00:12:18.477 Total : 14580.00 56.95 0.00 0.00 0.00 0.00 0.00 00:12:18.477 00:12:19.410 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:19.410 Nvme0n1 : 4.00 14561.00 56.88 0.00 0.00 0.00 0.00 0.00 00:12:19.410 =================================================================================================================== 00:12:19.410 Total : 14561.00 56.88 0.00 0.00 0.00 0.00 0.00 00:12:19.410 00:12:20.785 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:20.785 Nvme0n1 : 5.00 14598.20 57.02 0.00 0.00 0.00 0.00 0.00 00:12:20.785 =================================================================================================================== 00:12:20.785 Total : 14598.20 57.02 0.00 0.00 0.00 0.00 0.00 00:12:20.785 00:12:21.351 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:21.351 Nvme0n1 : 6.00 14592.00 57.00 0.00 0.00 0.00 0.00 0.00 00:12:21.351 =================================================================================================================== 00:12:21.351 Total : 14592.00 57.00 0.00 0.00 0.00 0.00 0.00 00:12:21.351 00:12:22.768 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:22.768 Nvme0n1 : 7.00 14618.14 57.10 0.00 0.00 0.00 0.00 0.00 00:12:22.768 =================================================================================================================== 00:12:22.768 Total : 14618.14 57.10 0.00 0.00 0.00 0.00 0.00 00:12:22.768 00:12:23.701 Job: Nvme0n1 (Core Mask 0x2, 
workload: randwrite, depth: 128, IO size: 4096) 00:12:23.701 Nvme0n1 : 8.00 14650.38 57.23 0.00 0.00 0.00 0.00 0.00 00:12:23.701 =================================================================================================================== 00:12:23.701 Total : 14650.38 57.23 0.00 0.00 0.00 0.00 0.00 00:12:23.701 00:12:24.636 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:24.636 Nvme0n1 : 9.00 14653.67 57.24 0.00 0.00 0.00 0.00 0.00 00:12:24.636 =================================================================================================================== 00:12:24.636 Total : 14653.67 57.24 0.00 0.00 0.00 0.00 0.00 00:12:24.636 00:12:25.569 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:25.569 Nvme0n1 : 10.00 14655.90 57.25 0.00 0.00 0.00 0.00 0.00 00:12:25.569 =================================================================================================================== 00:12:25.569 Total : 14655.90 57.25 0.00 0.00 0.00 0.00 0.00 00:12:25.569 00:12:25.569 00:12:25.569 Latency(us) 00:12:25.569 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:25.569 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:25.569 Nvme0n1 : 10.00 14663.47 57.28 0.00 0.00 8724.65 3070.48 16699.54 00:12:25.569 =================================================================================================================== 00:12:25.569 Total : 14663.47 57.28 0.00 0.00 8724.65 3070.48 16699.54 00:12:25.569 0 00:12:25.569 12:40:24 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1148161 00:12:25.569 12:40:24 -- common/autotest_common.sh@936 -- # '[' -z 1148161 ']' 00:12:25.569 12:40:24 -- common/autotest_common.sh@940 -- # kill -0 1148161 00:12:25.569 12:40:24 -- common/autotest_common.sh@941 -- # uname 00:12:25.569 12:40:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:25.569 12:40:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1148161 00:12:25.569 12:40:24 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:12:25.569 12:40:24 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:12:25.569 12:40:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1148161' 00:12:25.569 killing process with pid 1148161 00:12:25.569 12:40:24 -- common/autotest_common.sh@955 -- # kill 1148161 00:12:25.569 Received shutdown signal, test time was about 10.000000 seconds 00:12:25.569 00:12:25.569 Latency(us) 00:12:25.569 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:25.569 =================================================================================================================== 00:12:25.569 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:25.570 12:40:24 -- common/autotest_common.sh@960 -- # wait 1148161 00:12:25.828 12:40:24 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:26.085 12:40:25 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 899b5b64-8d2c-459e-bd51-eaacac412d97 00:12:26.085 12:40:25 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:12:26.343 12:40:25 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:12:26.343 12:40:25 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:12:26.343 12:40:25 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 1145661 00:12:26.343 
12:40:25 -- target/nvmf_lvs_grow.sh@74 -- # wait 1145661 00:12:26.343 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 1145661 Killed "${NVMF_APP[@]}" "$@" 00:12:26.343 12:40:25 -- target/nvmf_lvs_grow.sh@74 -- # true 00:12:26.343 12:40:25 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:12:26.343 12:40:25 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:12:26.343 12:40:25 -- common/autotest_common.sh@710 -- # xtrace_disable 00:12:26.343 12:40:25 -- common/autotest_common.sh@10 -- # set +x 00:12:26.343 12:40:25 -- nvmf/common.sh@470 -- # nvmfpid=1149503 00:12:26.343 12:40:25 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:12:26.343 12:40:25 -- nvmf/common.sh@471 -- # waitforlisten 1149503 00:12:26.343 12:40:25 -- common/autotest_common.sh@817 -- # '[' -z 1149503 ']' 00:12:26.343 12:40:25 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:26.343 12:40:25 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:26.343 12:40:25 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:26.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:26.343 12:40:25 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:26.343 12:40:25 -- common/autotest_common.sh@10 -- # set +x 00:12:26.343 [2024-04-16 12:40:25.379718] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 00:12:26.343 [2024-04-16 12:40:25.379799] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:26.601 EAL: No free 2048 kB hugepages reported on node 1 00:12:26.601 [2024-04-16 12:40:25.457129] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:26.601 [2024-04-16 12:40:25.564469] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:26.601 [2024-04-16 12:40:25.564524] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:26.601 [2024-04-16 12:40:25.564538] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:26.601 [2024-04-16 12:40:25.564574] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:26.601 [2024-04-16 12:40:25.564586] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
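(Editor's sketch: what the dirty branch is exercising here. The first target was killed with SIGKILL while the lvstore was still loaded, so the blobstore was never cleanly unloaded; when the restarted target re-creates the AIO bdev, the load path runs recovery — the bs_recover notice below — and the cluster counters must match the pre-kill state. $nvmfpid, $AIO_FILE and $lvs are stand-ins for the PID, path and UUID shown in this log.)

    kill -9 "$nvmfpid"                                         # leave lvstore 'lvs' dirty
    # ...restart nvmf_tgt, then:
    scripts/rpc.py bdev_aio_create "$AIO_FILE" aio_bdev 4096   # triggers "Performing recovery on blobstore"
    scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters'        # expect 61
    scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'  # expect 99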
00:12:26.601 [2024-04-16 12:40:25.564613] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:26.858 12:40:25 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:26.858 12:40:25 -- common/autotest_common.sh@850 -- # return 0 00:12:26.858 12:40:25 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:12:26.858 12:40:25 -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:26.858 12:40:25 -- common/autotest_common.sh@10 -- # set +x 00:12:26.858 12:40:25 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:26.858 12:40:25 -- target/nvmf_lvs_grow.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:26.858 [2024-04-16 12:40:25.926004] blobstore.c:4779:bs_recover: *NOTICE*: Performing recovery on blobstore 00:12:26.858 [2024-04-16 12:40:25.926169] blobstore.c:4726:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:12:26.858 [2024-04-16 12:40:25.926220] blobstore.c:4726:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:12:27.115 12:40:25 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:12:27.115 12:40:25 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev 5afa2028-8b80-4015-aa13-c41f8ca46d11 00:12:27.115 12:40:25 -- common/autotest_common.sh@885 -- # local bdev_name=5afa2028-8b80-4015-aa13-c41f8ca46d11 00:12:27.115 12:40:25 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:12:27.115 12:40:25 -- common/autotest_common.sh@887 -- # local i 00:12:27.115 12:40:25 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:12:27.115 12:40:25 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:12:27.115 12:40:25 -- common/autotest_common.sh@890 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:27.373 12:40:26 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 5afa2028-8b80-4015-aa13-c41f8ca46d11 -t 2000 00:12:27.373 [ 00:12:27.373 { 00:12:27.373 "name": "5afa2028-8b80-4015-aa13-c41f8ca46d11", 00:12:27.373 "aliases": [ 00:12:27.373 "lvs/lvol" 00:12:27.373 ], 00:12:27.373 "product_name": "Logical Volume", 00:12:27.373 "block_size": 4096, 00:12:27.373 "num_blocks": 38912, 00:12:27.373 "uuid": "5afa2028-8b80-4015-aa13-c41f8ca46d11", 00:12:27.373 "assigned_rate_limits": { 00:12:27.373 "rw_ios_per_sec": 0, 00:12:27.373 "rw_mbytes_per_sec": 0, 00:12:27.373 "r_mbytes_per_sec": 0, 00:12:27.373 "w_mbytes_per_sec": 0 00:12:27.373 }, 00:12:27.373 "claimed": false, 00:12:27.373 "zoned": false, 00:12:27.373 "supported_io_types": { 00:12:27.373 "read": true, 00:12:27.373 "write": true, 00:12:27.373 "unmap": true, 00:12:27.373 "write_zeroes": true, 00:12:27.373 "flush": false, 00:12:27.373 "reset": true, 00:12:27.373 "compare": false, 00:12:27.373 "compare_and_write": false, 00:12:27.373 "abort": false, 00:12:27.373 "nvme_admin": false, 00:12:27.373 "nvme_io": false 00:12:27.373 }, 00:12:27.373 "driver_specific": { 00:12:27.373 "lvol": { 00:12:27.373 "lvol_store_uuid": "899b5b64-8d2c-459e-bd51-eaacac412d97", 00:12:27.373 "base_bdev": "aio_bdev", 00:12:27.373 "thin_provision": false, 00:12:27.373 "snapshot": false, 00:12:27.373 "clone": false, 00:12:27.373 "esnap_clone": false 00:12:27.373 } 00:12:27.373 } 00:12:27.373 } 00:12:27.373 ] 00:12:27.373 12:40:26 -- common/autotest_common.sh@893 -- # return 0 00:12:27.373 12:40:26 -- target/nvmf_lvs_grow.sh@78 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 899b5b64-8d2c-459e-bd51-eaacac412d97 00:12:27.373 12:40:26 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:12:27.630 12:40:26 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:12:27.630 12:40:26 -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 899b5b64-8d2c-459e-bd51-eaacac412d97 00:12:27.630 12:40:26 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:12:27.888 12:40:26 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:12:27.888 12:40:26 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:28.146 [2024-04-16 12:40:27.146906] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:12:28.146 12:40:27 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 899b5b64-8d2c-459e-bd51-eaacac412d97 00:12:28.146 12:40:27 -- common/autotest_common.sh@638 -- # local es=0 00:12:28.146 12:40:27 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 899b5b64-8d2c-459e-bd51-eaacac412d97 00:12:28.146 12:40:27 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:28.146 12:40:27 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:28.146 12:40:27 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:28.146 12:40:27 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:28.146 12:40:27 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:28.146 12:40:27 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:28.146 12:40:27 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:28.146 12:40:27 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:12:28.146 12:40:27 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 899b5b64-8d2c-459e-bd51-eaacac412d97 00:12:28.404 request: 00:12:28.404 { 00:12:28.404 "uuid": "899b5b64-8d2c-459e-bd51-eaacac412d97", 00:12:28.404 "method": "bdev_lvol_get_lvstores", 00:12:28.404 "req_id": 1 00:12:28.404 } 00:12:28.404 Got JSON-RPC error response 00:12:28.404 response: 00:12:28.404 { 00:12:28.404 "code": -19, 00:12:28.404 "message": "No such device" 00:12:28.404 } 00:12:28.404 12:40:27 -- common/autotest_common.sh@641 -- # es=1 00:12:28.404 12:40:27 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:12:28.662 12:40:27 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:12:28.662 12:40:27 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:12:28.662 12:40:27 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:28.920 aio_bdev 00:12:28.920 12:40:27 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 5afa2028-8b80-4015-aa13-c41f8ca46d11 00:12:28.920 12:40:27 -- 
common/autotest_common.sh@885 -- # local bdev_name=5afa2028-8b80-4015-aa13-c41f8ca46d11 00:12:28.920 12:40:27 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:12:28.920 12:40:27 -- common/autotest_common.sh@887 -- # local i 00:12:28.920 12:40:27 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:12:28.920 12:40:27 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:12:28.920 12:40:27 -- common/autotest_common.sh@890 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:29.178 12:40:28 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 5afa2028-8b80-4015-aa13-c41f8ca46d11 -t 2000 00:12:29.178 [ 00:12:29.178 { 00:12:29.178 "name": "5afa2028-8b80-4015-aa13-c41f8ca46d11", 00:12:29.178 "aliases": [ 00:12:29.178 "lvs/lvol" 00:12:29.178 ], 00:12:29.178 "product_name": "Logical Volume", 00:12:29.178 "block_size": 4096, 00:12:29.178 "num_blocks": 38912, 00:12:29.178 "uuid": "5afa2028-8b80-4015-aa13-c41f8ca46d11", 00:12:29.178 "assigned_rate_limits": { 00:12:29.178 "rw_ios_per_sec": 0, 00:12:29.178 "rw_mbytes_per_sec": 0, 00:12:29.179 "r_mbytes_per_sec": 0, 00:12:29.179 "w_mbytes_per_sec": 0 00:12:29.179 }, 00:12:29.179 "claimed": false, 00:12:29.179 "zoned": false, 00:12:29.179 "supported_io_types": { 00:12:29.179 "read": true, 00:12:29.179 "write": true, 00:12:29.179 "unmap": true, 00:12:29.179 "write_zeroes": true, 00:12:29.179 "flush": false, 00:12:29.179 "reset": true, 00:12:29.179 "compare": false, 00:12:29.179 "compare_and_write": false, 00:12:29.179 "abort": false, 00:12:29.179 "nvme_admin": false, 00:12:29.179 "nvme_io": false 00:12:29.179 }, 00:12:29.179 "driver_specific": { 00:12:29.179 "lvol": { 00:12:29.179 "lvol_store_uuid": "899b5b64-8d2c-459e-bd51-eaacac412d97", 00:12:29.179 "base_bdev": "aio_bdev", 00:12:29.179 "thin_provision": false, 00:12:29.179 "snapshot": false, 00:12:29.179 "clone": false, 00:12:29.179 "esnap_clone": false 00:12:29.179 } 00:12:29.179 } 00:12:29.179 } 00:12:29.179 ] 00:12:29.436 12:40:28 -- common/autotest_common.sh@893 -- # return 0 00:12:29.436 12:40:28 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 899b5b64-8d2c-459e-bd51-eaacac412d97 00:12:29.436 12:40:28 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:12:29.436 12:40:28 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:12:29.436 12:40:28 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 899b5b64-8d2c-459e-bd51-eaacac412d97 00:12:29.436 12:40:28 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:12:29.694 12:40:28 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:12:29.694 12:40:28 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 5afa2028-8b80-4015-aa13-c41f8ca46d11 00:12:29.952 12:40:29 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 899b5b64-8d2c-459e-bd51-eaacac412d97 00:12:30.210 12:40:29 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:30.778 12:40:29 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:30.778 00:12:30.778 real 0m18.875s 00:12:30.778 user 
0m47.249s 00:12:30.778 sys 0m5.068s 00:12:30.778 12:40:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:30.778 12:40:29 -- common/autotest_common.sh@10 -- # set +x 00:12:30.778 ************************************ 00:12:30.778 END TEST lvs_grow_dirty 00:12:30.778 ************************************ 00:12:30.778 12:40:29 -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:12:30.778 12:40:29 -- common/autotest_common.sh@794 -- # type=--id 00:12:30.778 12:40:29 -- common/autotest_common.sh@795 -- # id=0 00:12:30.778 12:40:29 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:12:30.778 12:40:29 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:12:30.778 12:40:29 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:12:30.778 12:40:29 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:12:30.778 12:40:29 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:12:30.778 12:40:29 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:12:30.778 nvmf_trace.0 00:12:30.778 12:40:29 -- common/autotest_common.sh@809 -- # return 0 00:12:30.778 12:40:29 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:12:30.778 12:40:29 -- nvmf/common.sh@477 -- # nvmfcleanup 00:12:30.778 12:40:29 -- nvmf/common.sh@117 -- # sync 00:12:30.778 12:40:29 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:30.778 12:40:29 -- nvmf/common.sh@120 -- # set +e 00:12:30.778 12:40:29 -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:30.778 12:40:29 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:30.778 rmmod nvme_tcp 00:12:30.778 rmmod nvme_fabrics 00:12:30.778 rmmod nvme_keyring 00:12:30.778 12:40:29 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:30.778 12:40:29 -- nvmf/common.sh@124 -- # set -e 00:12:30.778 12:40:29 -- nvmf/common.sh@125 -- # return 0 00:12:30.778 12:40:29 -- nvmf/common.sh@478 -- # '[' -n 1149503 ']' 00:12:30.778 12:40:29 -- nvmf/common.sh@479 -- # killprocess 1149503 00:12:30.778 12:40:29 -- common/autotest_common.sh@936 -- # '[' -z 1149503 ']' 00:12:30.778 12:40:29 -- common/autotest_common.sh@940 -- # kill -0 1149503 00:12:30.778 12:40:29 -- common/autotest_common.sh@941 -- # uname 00:12:30.778 12:40:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:30.778 12:40:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1149503 00:12:30.778 12:40:29 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:30.778 12:40:29 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:30.778 12:40:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1149503' 00:12:30.778 killing process with pid 1149503 00:12:30.778 12:40:29 -- common/autotest_common.sh@955 -- # kill 1149503 00:12:30.778 12:40:29 -- common/autotest_common.sh@960 -- # wait 1149503 00:12:31.036 12:40:29 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:12:31.036 12:40:29 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:12:31.036 12:40:29 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:12:31.036 12:40:29 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:31.036 12:40:29 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:31.036 12:40:29 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:31.036 12:40:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:31.036 12:40:29 -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:12:33.602 12:40:32 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:33.602 00:12:33.602 real 0m41.830s 00:12:33.602 user 1m9.549s 00:12:33.602 sys 0m9.252s 00:12:33.602 12:40:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:33.602 12:40:32 -- common/autotest_common.sh@10 -- # set +x 00:12:33.602 ************************************ 00:12:33.602 END TEST nvmf_lvs_grow 00:12:33.602 ************************************ 00:12:33.602 12:40:32 -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:12:33.602 12:40:32 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:33.602 12:40:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:33.602 12:40:32 -- common/autotest_common.sh@10 -- # set +x 00:12:33.602 ************************************ 00:12:33.602 START TEST nvmf_bdev_io_wait 00:12:33.602 ************************************ 00:12:33.602 12:40:32 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:12:33.602 * Looking for test storage... 00:12:33.602 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:33.602 12:40:32 -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:33.602 12:40:32 -- nvmf/common.sh@7 -- # uname -s 00:12:33.602 12:40:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:33.602 12:40:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:33.602 12:40:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:33.602 12:40:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:33.602 12:40:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:33.602 12:40:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:33.602 12:40:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:33.602 12:40:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:33.602 12:40:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:33.602 12:40:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:33.602 12:40:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:12:33.602 12:40:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:12:33.602 12:40:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:33.602 12:40:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:33.602 12:40:32 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:33.602 12:40:32 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:33.602 12:40:32 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:33.602 12:40:32 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:33.602 12:40:32 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:33.602 12:40:32 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:33.602 12:40:32 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.602 12:40:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.602 12:40:32 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.602 12:40:32 -- paths/export.sh@5 -- # export PATH 00:12:33.602 12:40:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.602 12:40:32 -- nvmf/common.sh@47 -- # : 0 00:12:33.602 12:40:32 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:33.602 12:40:32 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:33.602 12:40:32 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:33.602 12:40:32 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:33.602 12:40:32 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:33.602 12:40:32 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:33.602 12:40:32 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:33.602 12:40:32 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:33.602 12:40:32 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:33.602 12:40:32 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:33.602 12:40:32 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:12:33.602 12:40:32 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:12:33.602 12:40:32 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:33.602 12:40:32 -- nvmf/common.sh@437 -- # prepare_net_devs 00:12:33.602 12:40:32 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:12:33.602 12:40:32 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:12:33.602 12:40:32 -- nvmf/common.sh@617 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:12:33.602 12:40:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:33.602 12:40:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:33.602 12:40:32 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:12:33.602 12:40:32 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:12:33.602 12:40:32 -- nvmf/common.sh@285 -- # xtrace_disable 00:12:33.602 12:40:32 -- common/autotest_common.sh@10 -- # set +x 00:12:36.132 12:40:34 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:36.132 12:40:34 -- nvmf/common.sh@291 -- # pci_devs=() 00:12:36.132 12:40:34 -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:36.132 12:40:34 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:36.132 12:40:34 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:36.132 12:40:34 -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:36.132 12:40:34 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:36.132 12:40:34 -- nvmf/common.sh@295 -- # net_devs=() 00:12:36.132 12:40:34 -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:36.132 12:40:34 -- nvmf/common.sh@296 -- # e810=() 00:12:36.132 12:40:34 -- nvmf/common.sh@296 -- # local -ga e810 00:12:36.132 12:40:34 -- nvmf/common.sh@297 -- # x722=() 00:12:36.132 12:40:34 -- nvmf/common.sh@297 -- # local -ga x722 00:12:36.132 12:40:34 -- nvmf/common.sh@298 -- # mlx=() 00:12:36.132 12:40:34 -- nvmf/common.sh@298 -- # local -ga mlx 00:12:36.132 12:40:34 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:36.132 12:40:34 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:36.132 12:40:34 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:36.132 12:40:34 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:36.132 12:40:34 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:36.132 12:40:34 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:36.132 12:40:34 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:36.132 12:40:34 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:36.132 12:40:34 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:36.132 12:40:34 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:36.132 12:40:34 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:36.132 12:40:34 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:36.132 12:40:34 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:36.132 12:40:34 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:36.132 12:40:34 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:36.132 12:40:34 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:36.132 12:40:34 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:36.132 12:40:34 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:36.132 12:40:34 -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:12:36.132 Found 0000:82:00.0 (0x8086 - 0x159b) 00:12:36.132 12:40:34 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:36.132 12:40:34 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:36.132 12:40:34 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:36.132 12:40:34 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:36.132 12:40:34 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:36.132 12:40:34 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 
00:12:36.132 12:40:34 -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:12:36.132 Found 0000:82:00.1 (0x8086 - 0x159b) 00:12:36.132 12:40:34 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:36.132 12:40:34 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:36.132 12:40:34 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:36.132 12:40:34 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:36.132 12:40:34 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:36.132 12:40:34 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:36.132 12:40:34 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:36.132 12:40:34 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:36.132 12:40:34 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:36.132 12:40:34 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:36.132 12:40:34 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:36.132 12:40:34 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:36.132 12:40:34 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:12:36.132 Found net devices under 0000:82:00.0: cvl_0_0 00:12:36.132 12:40:34 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:36.132 12:40:34 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:36.132 12:40:34 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:36.132 12:40:34 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:36.132 12:40:34 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:36.132 12:40:34 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:12:36.132 Found net devices under 0000:82:00.1: cvl_0_1 00:12:36.132 12:40:34 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:36.132 12:40:34 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:12:36.132 12:40:34 -- nvmf/common.sh@403 -- # is_hw=yes 00:12:36.132 12:40:34 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:12:36.132 12:40:34 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:12:36.132 12:40:34 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:12:36.132 12:40:34 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:36.132 12:40:34 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:36.132 12:40:34 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:36.132 12:40:34 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:36.132 12:40:34 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:36.132 12:40:34 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:36.132 12:40:34 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:36.132 12:40:34 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:36.132 12:40:34 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:36.132 12:40:34 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:36.132 12:40:34 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:36.132 12:40:34 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:36.132 12:40:34 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:36.132 12:40:34 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:36.132 12:40:34 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:36.132 12:40:34 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:36.132 12:40:34 -- nvmf/common.sh@260 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:36.132 12:40:34 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:36.132 12:40:34 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:36.132 12:40:34 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:36.132 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:36.132 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.176 ms 00:12:36.132 00:12:36.132 --- 10.0.0.2 ping statistics --- 00:12:36.132 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:36.132 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:12:36.132 12:40:34 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:36.132 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:36.132 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.108 ms 00:12:36.132 00:12:36.132 --- 10.0.0.1 ping statistics --- 00:12:36.132 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:36.132 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:12:36.132 12:40:34 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:36.132 12:40:34 -- nvmf/common.sh@411 -- # return 0 00:12:36.132 12:40:34 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:12:36.132 12:40:34 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:36.132 12:40:34 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:12:36.133 12:40:34 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:12:36.133 12:40:34 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:36.133 12:40:34 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:12:36.133 12:40:34 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:12:36.133 12:40:34 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:12:36.133 12:40:34 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:12:36.133 12:40:34 -- common/autotest_common.sh@710 -- # xtrace_disable 00:12:36.133 12:40:34 -- common/autotest_common.sh@10 -- # set +x 00:12:36.133 12:40:34 -- nvmf/common.sh@470 -- # nvmfpid=1152453 00:12:36.133 12:40:34 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:12:36.133 12:40:34 -- nvmf/common.sh@471 -- # waitforlisten 1152453 00:12:36.133 12:40:34 -- common/autotest_common.sh@817 -- # '[' -z 1152453 ']' 00:12:36.133 12:40:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:36.133 12:40:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:36.133 12:40:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:36.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:36.133 12:40:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:36.133 12:40:34 -- common/autotest_common.sh@10 -- # set +x 00:12:36.133 [2024-04-16 12:40:35.024977] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 
00:12:36.133 [2024-04-16 12:40:35.025061] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:36.133 EAL: No free 2048 kB hugepages reported on node 1 00:12:36.133 [2024-04-16 12:40:35.100576] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:36.390 [2024-04-16 12:40:35.213622] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:36.390 [2024-04-16 12:40:35.213674] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:36.390 [2024-04-16 12:40:35.213689] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:36.390 [2024-04-16 12:40:35.213701] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:36.390 [2024-04-16 12:40:35.213711] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:36.390 [2024-04-16 12:40:35.213772] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:36.390 [2024-04-16 12:40:35.213849] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:36.391 [2024-04-16 12:40:35.213792] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:36.391 [2024-04-16 12:40:35.213852] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:36.955 12:40:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:36.955 12:40:35 -- common/autotest_common.sh@850 -- # return 0 00:12:36.955 12:40:35 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:12:36.955 12:40:35 -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:36.955 12:40:35 -- common/autotest_common.sh@10 -- # set +x 00:12:36.955 12:40:36 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:36.955 12:40:36 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:12:36.955 12:40:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:36.955 12:40:36 -- common/autotest_common.sh@10 -- # set +x 00:12:36.955 12:40:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:36.955 12:40:36 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:12:36.955 12:40:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:36.955 12:40:36 -- common/autotest_common.sh@10 -- # set +x 00:12:37.213 12:40:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:37.213 12:40:36 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:37.213 12:40:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:37.213 12:40:36 -- common/autotest_common.sh@10 -- # set +x 00:12:37.213 [2024-04-16 12:40:36.081121] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:37.213 12:40:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:37.213 12:40:36 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:37.213 12:40:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:37.213 12:40:36 -- common/autotest_common.sh@10 -- # set +x 00:12:37.213 Malloc0 00:12:37.213 12:40:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:37.213 12:40:36 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:37.213 12:40:36 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:12:37.213 12:40:36 -- common/autotest_common.sh@10 -- # set +x 00:12:37.213 12:40:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:37.213 12:40:36 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:37.213 12:40:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:37.213 12:40:36 -- common/autotest_common.sh@10 -- # set +x 00:12:37.213 12:40:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:37.213 12:40:36 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:37.213 12:40:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:37.213 12:40:36 -- common/autotest_common.sh@10 -- # set +x 00:12:37.213 [2024-04-16 12:40:36.150248] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:37.213 12:40:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:37.213 12:40:36 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1152610 00:12:37.213 12:40:36 -- target/bdev_io_wait.sh@30 -- # READ_PID=1152611 00:12:37.213 12:40:36 -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:12:37.213 12:40:36 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:12:37.213 12:40:36 -- nvmf/common.sh@521 -- # config=() 00:12:37.213 12:40:36 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1152614 00:12:37.213 12:40:36 -- nvmf/common.sh@521 -- # local subsystem config 00:12:37.213 12:40:36 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:12:37.213 12:40:36 -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:12:37.213 12:40:36 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:12:37.213 12:40:36 -- nvmf/common.sh@521 -- # config=() 00:12:37.213 12:40:36 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:12:37.213 { 00:12:37.213 "params": { 00:12:37.213 "name": "Nvme$subsystem", 00:12:37.213 "trtype": "$TEST_TRANSPORT", 00:12:37.213 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:37.213 "adrfam": "ipv4", 00:12:37.213 "trsvcid": "$NVMF_PORT", 00:12:37.213 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:37.213 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:37.213 "hdgst": ${hdgst:-false}, 00:12:37.213 "ddgst": ${ddgst:-false} 00:12:37.213 }, 00:12:37.213 "method": "bdev_nvme_attach_controller" 00:12:37.213 } 00:12:37.213 EOF 00:12:37.213 )") 00:12:37.213 12:40:36 -- nvmf/common.sh@521 -- # local subsystem config 00:12:37.213 12:40:36 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:12:37.213 12:40:36 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:12:37.213 { 00:12:37.213 "params": { 00:12:37.213 "name": "Nvme$subsystem", 00:12:37.213 "trtype": "$TEST_TRANSPORT", 00:12:37.213 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:37.213 "adrfam": "ipv4", 00:12:37.213 "trsvcid": "$NVMF_PORT", 00:12:37.213 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:37.213 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:37.213 "hdgst": ${hdgst:-false}, 00:12:37.213 "ddgst": ${ddgst:-false} 00:12:37.213 }, 00:12:37.213 "method": "bdev_nvme_attach_controller" 00:12:37.213 } 00:12:37.213 EOF 00:12:37.213 )") 00:12:37.213 12:40:36 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1152616 00:12:37.213 
12:40:36 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:12:37.213 12:40:36 -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:12:37.214 12:40:36 -- target/bdev_io_wait.sh@35 -- # sync 00:12:37.214 12:40:36 -- nvmf/common.sh@521 -- # config=() 00:12:37.214 12:40:36 -- nvmf/common.sh@521 -- # local subsystem config 00:12:37.214 12:40:36 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:12:37.214 12:40:36 -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:12:37.214 12:40:36 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:12:37.214 12:40:36 -- nvmf/common.sh@543 -- # cat 00:12:37.214 12:40:36 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:12:37.214 { 00:12:37.214 "params": { 00:12:37.214 "name": "Nvme$subsystem", 00:12:37.214 "trtype": "$TEST_TRANSPORT", 00:12:37.214 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:37.214 "adrfam": "ipv4", 00:12:37.214 "trsvcid": "$NVMF_PORT", 00:12:37.214 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:37.214 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:37.214 "hdgst": ${hdgst:-false}, 00:12:37.214 "ddgst": ${ddgst:-false} 00:12:37.214 }, 00:12:37.214 "method": "bdev_nvme_attach_controller" 00:12:37.214 } 00:12:37.214 EOF 00:12:37.214 )") 00:12:37.214 12:40:36 -- nvmf/common.sh@521 -- # config=() 00:12:37.214 12:40:36 -- nvmf/common.sh@521 -- # local subsystem config 00:12:37.214 12:40:36 -- nvmf/common.sh@543 -- # cat 00:12:37.214 12:40:36 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:12:37.214 12:40:36 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:12:37.214 { 00:12:37.214 "params": { 00:12:37.214 "name": "Nvme$subsystem", 00:12:37.214 "trtype": "$TEST_TRANSPORT", 00:12:37.214 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:37.214 "adrfam": "ipv4", 00:12:37.214 "trsvcid": "$NVMF_PORT", 00:12:37.214 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:37.214 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:37.214 "hdgst": ${hdgst:-false}, 00:12:37.214 "ddgst": ${ddgst:-false} 00:12:37.214 }, 00:12:37.214 "method": "bdev_nvme_attach_controller" 00:12:37.214 } 00:12:37.214 EOF 00:12:37.214 )") 00:12:37.214 12:40:36 -- nvmf/common.sh@543 -- # cat 00:12:37.214 12:40:36 -- target/bdev_io_wait.sh@37 -- # wait 1152610 00:12:37.214 12:40:36 -- nvmf/common.sh@543 -- # cat 00:12:37.214 12:40:36 -- nvmf/common.sh@545 -- # jq . 00:12:37.214 12:40:36 -- nvmf/common.sh@545 -- # jq . 00:12:37.214 12:40:36 -- nvmf/common.sh@545 -- # jq . 00:12:37.214 12:40:36 -- nvmf/common.sh@545 -- # jq . 
00:12:37.214 12:40:36 -- nvmf/common.sh@546 -- # IFS=, 00:12:37.214 12:40:36 -- nvmf/common.sh@546 -- # IFS=, 00:12:37.214 12:40:36 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:12:37.214 "params": { 00:12:37.214 "name": "Nvme1", 00:12:37.214 "trtype": "tcp", 00:12:37.214 "traddr": "10.0.0.2", 00:12:37.214 "adrfam": "ipv4", 00:12:37.214 "trsvcid": "4420", 00:12:37.214 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:37.214 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:37.214 "hdgst": false, 00:12:37.214 "ddgst": false 00:12:37.214 }, 00:12:37.214 "method": "bdev_nvme_attach_controller" 00:12:37.214 }' 00:12:37.214 12:40:36 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:12:37.214 "params": { 00:12:37.214 "name": "Nvme1", 00:12:37.214 "trtype": "tcp", 00:12:37.214 "traddr": "10.0.0.2", 00:12:37.214 "adrfam": "ipv4", 00:12:37.214 "trsvcid": "4420", 00:12:37.214 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:37.214 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:37.214 "hdgst": false, 00:12:37.214 "ddgst": false 00:12:37.214 }, 00:12:37.214 "method": "bdev_nvme_attach_controller" 00:12:37.214 }' 00:12:37.214 12:40:36 -- nvmf/common.sh@546 -- # IFS=, 00:12:37.214 12:40:36 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:12:37.214 "params": { 00:12:37.214 "name": "Nvme1", 00:12:37.214 "trtype": "tcp", 00:12:37.214 "traddr": "10.0.0.2", 00:12:37.214 "adrfam": "ipv4", 00:12:37.214 "trsvcid": "4420", 00:12:37.214 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:37.214 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:37.214 "hdgst": false, 00:12:37.214 "ddgst": false 00:12:37.214 }, 00:12:37.214 "method": "bdev_nvme_attach_controller" 00:12:37.214 }' 00:12:37.214 12:40:36 -- nvmf/common.sh@546 -- # IFS=, 00:12:37.214 12:40:36 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:12:37.214 "params": { 00:12:37.214 "name": "Nvme1", 00:12:37.214 "trtype": "tcp", 00:12:37.214 "traddr": "10.0.0.2", 00:12:37.214 "adrfam": "ipv4", 00:12:37.214 "trsvcid": "4420", 00:12:37.214 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:37.214 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:37.214 "hdgst": false, 00:12:37.214 "ddgst": false 00:12:37.214 }, 00:12:37.214 "method": "bdev_nvme_attach_controller" 00:12:37.214 }' 00:12:37.214 [2024-04-16 12:40:36.197080] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 00:12:37.214 [2024-04-16 12:40:36.197080] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 00:12:37.214 [2024-04-16 12:40:36.197079] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 00:12:37.214 [2024-04-16 12:40:36.197169] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:12:37.214 [2024-04-16 12:40:36.197170] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:12:37.214 [2024-04-16 12:40:36.197172] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:12:37.214 [2024-04-16 12:40:36.197682] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 
00:12:37.214 [2024-04-16 12:40:36.197751] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:12:37.214 EAL: No free 2048 kB hugepages reported on node 1 00:12:37.471 EAL: No free 2048 kB hugepages reported on node 1 00:12:37.471 [2024-04-16 12:40:36.381931] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:37.471 EAL: No free 2048 kB hugepages reported on node 1 00:12:37.471 [2024-04-16 12:40:36.476212] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:12:37.471 [2024-04-16 12:40:36.480517] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:37.471 [2024-04-16 12:40:36.484938] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:12:37.728 EAL: No free 2048 kB hugepages reported on node 1 00:12:37.728 [2024-04-16 12:40:36.576737] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:12:37.728 [2024-04-16 12:40:36.577925] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:37.728 [2024-04-16 12:40:36.585599] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:12:37.728 [2024-04-16 12:40:36.653954] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:37.728 [2024-04-16 12:40:36.677983] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:12:37.728 [2024-04-16 12:40:36.686846] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:12:37.728 [2024-04-16 12:40:36.746772] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:12:37.728 [2024-04-16 12:40:36.755462] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:12:37.728 Running I/O for 1 seconds... 00:12:37.985 Running I/O for 1 seconds... 00:12:37.985 Running I/O for 1 seconds... 00:12:37.985 Running I/O for 1 seconds... 
00:12:38.919 00:12:38.919 Latency(us) 00:12:38.919 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:38.919 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:12:38.919 Nvme1n1 : 1.01 10342.05 40.40 0.00 0.00 12322.29 8301.23 19709.35 00:12:38.919 =================================================================================================================== 00:12:38.919 Total : 10342.05 40.40 0.00 0.00 12322.29 8301.23 19709.35 00:12:38.919 00:12:38.919 Latency(us) 00:12:38.919 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:38.919 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:12:38.919 Nvme1n1 : 1.01 8559.92 33.44 0.00 0.00 14887.43 7718.68 27962.03 00:12:38.919 =================================================================================================================== 00:12:38.919 Total : 8559.92 33.44 0.00 0.00 14887.43 7718.68 27962.03 00:12:38.919 00:12:38.920 Latency(us) 00:12:38.920 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:38.920 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:12:38.920 Nvme1n1 : 1.01 9323.84 36.42 0.00 0.00 13677.18 6359.42 25049.32 00:12:38.920 =================================================================================================================== 00:12:38.920 Total : 9323.84 36.42 0.00 0.00 13677.18 6359.42 25049.32 00:12:38.920 00:12:38.920 Latency(us) 00:12:38.920 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:38.920 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:12:38.920 Nvme1n1 : 1.00 204338.70 798.20 0.00 0.00 623.93 253.35 801.00 00:12:38.920 =================================================================================================================== 00:12:38.920 Total : 204338.70 798.20 0.00 0.00 623.93 253.35 801.00 00:12:39.486 12:40:38 -- target/bdev_io_wait.sh@38 -- # wait 1152611 00:12:39.486 12:40:38 -- target/bdev_io_wait.sh@39 -- # wait 1152614 00:12:39.486 12:40:38 -- target/bdev_io_wait.sh@40 -- # wait 1152616 00:12:39.486 12:40:38 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:39.486 12:40:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:39.486 12:40:38 -- common/autotest_common.sh@10 -- # set +x 00:12:39.486 12:40:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:39.486 12:40:38 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:12:39.486 12:40:38 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:12:39.486 12:40:38 -- nvmf/common.sh@477 -- # nvmfcleanup 00:12:39.486 12:40:38 -- nvmf/common.sh@117 -- # sync 00:12:39.486 12:40:38 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:39.486 12:40:38 -- nvmf/common.sh@120 -- # set +e 00:12:39.486 12:40:38 -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:39.486 12:40:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:39.486 rmmod nvme_tcp 00:12:39.486 rmmod nvme_fabrics 00:12:39.486 rmmod nvme_keyring 00:12:39.486 12:40:38 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:39.486 12:40:38 -- nvmf/common.sh@124 -- # set -e 00:12:39.486 12:40:38 -- nvmf/common.sh@125 -- # return 0 00:12:39.486 12:40:38 -- nvmf/common.sh@478 -- # '[' -n 1152453 ']' 00:12:39.486 12:40:38 -- nvmf/common.sh@479 -- # killprocess 1152453 00:12:39.486 12:40:38 -- common/autotest_common.sh@936 -- # '[' -z 1152453 ']' 00:12:39.486 12:40:38 -- 
common/autotest_common.sh@940 -- # kill -0 1152453 00:12:39.486 12:40:38 -- common/autotest_common.sh@941 -- # uname 00:12:39.486 12:40:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:39.486 12:40:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1152453 00:12:39.486 12:40:38 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:39.486 12:40:38 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:39.486 12:40:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1152453' 00:12:39.486 killing process with pid 1152453 00:12:39.486 12:40:38 -- common/autotest_common.sh@955 -- # kill 1152453 00:12:39.486 12:40:38 -- common/autotest_common.sh@960 -- # wait 1152453 00:12:39.745 12:40:38 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:12:39.745 12:40:38 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:12:39.745 12:40:38 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:12:39.745 12:40:38 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:39.745 12:40:38 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:39.745 12:40:38 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:39.745 12:40:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:39.745 12:40:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:42.292 12:40:40 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:42.292 00:12:42.292 real 0m8.603s 00:12:42.292 user 0m20.355s 00:12:42.292 sys 0m4.005s 00:12:42.292 12:40:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:42.292 12:40:40 -- common/autotest_common.sh@10 -- # set +x 00:12:42.292 ************************************ 00:12:42.292 END TEST nvmf_bdev_io_wait 00:12:42.292 ************************************ 00:12:42.292 12:40:40 -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:12:42.292 12:40:40 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:42.292 12:40:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:42.292 12:40:40 -- common/autotest_common.sh@10 -- # set +x 00:12:42.292 ************************************ 00:12:42.292 START TEST nvmf_queue_depth 00:12:42.292 ************************************ 00:12:42.292 12:40:40 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:12:42.292 * Looking for test storage... 
00:12:42.292 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:42.292 12:40:40 -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:42.292 12:40:40 -- nvmf/common.sh@7 -- # uname -s 00:12:42.292 12:40:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:42.292 12:40:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:42.292 12:40:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:42.292 12:40:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:42.292 12:40:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:42.292 12:40:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:42.292 12:40:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:42.292 12:40:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:42.292 12:40:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:42.292 12:40:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:42.292 12:40:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:12:42.292 12:40:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:12:42.292 12:40:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:42.292 12:40:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:42.292 12:40:40 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:42.292 12:40:40 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:42.292 12:40:40 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:42.292 12:40:40 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:42.292 12:40:40 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:42.292 12:40:40 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:42.293 12:40:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:42.293 12:40:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:42.293 12:40:40 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:42.293 12:40:40 -- paths/export.sh@5 -- # export PATH 00:12:42.293 12:40:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:42.293 12:40:40 -- nvmf/common.sh@47 -- # : 0 00:12:42.293 12:40:40 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:42.293 12:40:40 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:42.293 12:40:40 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:42.293 12:40:40 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:42.293 12:40:40 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:42.293 12:40:40 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:42.293 12:40:40 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:42.293 12:40:40 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:42.293 12:40:40 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:12:42.293 12:40:40 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:12:42.293 12:40:40 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:42.293 12:40:40 -- target/queue_depth.sh@19 -- # nvmftestinit 00:12:42.293 12:40:40 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:12:42.293 12:40:40 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:42.293 12:40:40 -- nvmf/common.sh@437 -- # prepare_net_devs 00:12:42.293 12:40:40 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:12:42.293 12:40:40 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:12:42.293 12:40:40 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:42.293 12:40:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:42.293 12:40:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:42.293 12:40:40 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:12:42.293 12:40:40 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:12:42.293 12:40:40 -- nvmf/common.sh@285 -- # xtrace_disable 00:12:42.293 12:40:40 -- common/autotest_common.sh@10 -- # set +x 00:12:44.828 12:40:43 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:44.828 12:40:43 -- nvmf/common.sh@291 -- # pci_devs=() 00:12:44.828 12:40:43 -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:44.828 12:40:43 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:44.828 12:40:43 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:44.828 12:40:43 -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:44.828 12:40:43 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:44.828 12:40:43 -- nvmf/common.sh@295 -- # net_devs=() 
00:12:44.828 12:40:43 -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:44.828 12:40:43 -- nvmf/common.sh@296 -- # e810=() 00:12:44.828 12:40:43 -- nvmf/common.sh@296 -- # local -ga e810 00:12:44.828 12:40:43 -- nvmf/common.sh@297 -- # x722=() 00:12:44.828 12:40:43 -- nvmf/common.sh@297 -- # local -ga x722 00:12:44.828 12:40:43 -- nvmf/common.sh@298 -- # mlx=() 00:12:44.828 12:40:43 -- nvmf/common.sh@298 -- # local -ga mlx 00:12:44.828 12:40:43 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:44.828 12:40:43 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:44.828 12:40:43 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:44.828 12:40:43 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:44.828 12:40:43 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:44.828 12:40:43 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:44.828 12:40:43 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:44.828 12:40:43 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:44.828 12:40:43 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:44.828 12:40:43 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:44.828 12:40:43 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:44.828 12:40:43 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:44.828 12:40:43 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:44.828 12:40:43 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:44.828 12:40:43 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:44.828 12:40:43 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:44.828 12:40:43 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:44.828 12:40:43 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:44.828 12:40:43 -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:12:44.828 Found 0000:82:00.0 (0x8086 - 0x159b) 00:12:44.828 12:40:43 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:44.828 12:40:43 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:44.828 12:40:43 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:44.828 12:40:43 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:44.828 12:40:43 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:44.828 12:40:43 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:44.828 12:40:43 -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:12:44.828 Found 0000:82:00.1 (0x8086 - 0x159b) 00:12:44.828 12:40:43 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:44.828 12:40:43 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:44.828 12:40:43 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:44.828 12:40:43 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:44.828 12:40:43 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:44.828 12:40:43 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:44.828 12:40:43 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:44.828 12:40:43 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:44.828 12:40:43 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:44.828 12:40:43 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:44.828 12:40:43 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:44.828 12:40:43 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:12:44.828 12:40:43 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:12:44.828 Found net devices under 0000:82:00.0: cvl_0_0 00:12:44.828 12:40:43 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:44.828 12:40:43 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:44.828 12:40:43 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:44.828 12:40:43 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:44.828 12:40:43 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:44.828 12:40:43 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:12:44.828 Found net devices under 0000:82:00.1: cvl_0_1 00:12:44.828 12:40:43 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:44.828 12:40:43 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:12:44.828 12:40:43 -- nvmf/common.sh@403 -- # is_hw=yes 00:12:44.828 12:40:43 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:12:44.828 12:40:43 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:12:44.828 12:40:43 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:12:44.829 12:40:43 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:44.829 12:40:43 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:44.829 12:40:43 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:44.829 12:40:43 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:44.829 12:40:43 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:44.829 12:40:43 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:44.829 12:40:43 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:44.829 12:40:43 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:44.829 12:40:43 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:44.829 12:40:43 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:44.829 12:40:43 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:44.829 12:40:43 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:44.829 12:40:43 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:44.829 12:40:43 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:44.829 12:40:43 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:44.829 12:40:43 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:44.829 12:40:43 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:44.829 12:40:43 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:44.829 12:40:43 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:44.829 12:40:43 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:44.829 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:44.829 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.128 ms 00:12:44.829 00:12:44.829 --- 10.0.0.2 ping statistics --- 00:12:44.829 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:44.829 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:12:44.829 12:40:43 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:44.829 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:44.829 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.108 ms 00:12:44.829 00:12:44.829 --- 10.0.0.1 ping statistics --- 00:12:44.829 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:44.829 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:12:44.829 12:40:43 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:44.829 12:40:43 -- nvmf/common.sh@411 -- # return 0 00:12:44.829 12:40:43 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:12:44.829 12:40:43 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:44.829 12:40:43 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:12:44.829 12:40:43 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:12:44.829 12:40:43 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:44.829 12:40:43 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:12:44.829 12:40:43 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:12:44.829 12:40:43 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:12:44.829 12:40:43 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:12:44.829 12:40:43 -- common/autotest_common.sh@710 -- # xtrace_disable 00:12:44.829 12:40:43 -- common/autotest_common.sh@10 -- # set +x 00:12:44.829 12:40:43 -- nvmf/common.sh@470 -- # nvmfpid=1155137 00:12:44.829 12:40:43 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:44.829 12:40:43 -- nvmf/common.sh@471 -- # waitforlisten 1155137 00:12:44.829 12:40:43 -- common/autotest_common.sh@817 -- # '[' -z 1155137 ']' 00:12:44.829 12:40:43 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:44.829 12:40:43 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:44.829 12:40:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:44.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:44.829 12:40:43 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:44.829 12:40:43 -- common/autotest_common.sh@10 -- # set +x 00:12:44.829 [2024-04-16 12:40:43.577422] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 00:12:44.829 [2024-04-16 12:40:43.577506] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:44.829 EAL: No free 2048 kB hugepages reported on node 1 00:12:44.829 [2024-04-16 12:40:43.652346] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:44.829 [2024-04-16 12:40:43.758862] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:44.829 [2024-04-16 12:40:43.758929] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:44.829 [2024-04-16 12:40:43.758944] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:44.829 [2024-04-16 12:40:43.758955] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:44.829 [2024-04-16 12:40:43.758964] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:44.829 [2024-04-16 12:40:43.758989] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:44.829 12:40:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:44.829 12:40:43 -- common/autotest_common.sh@850 -- # return 0 00:12:44.829 12:40:43 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:12:44.829 12:40:43 -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:44.829 12:40:43 -- common/autotest_common.sh@10 -- # set +x 00:12:45.087 12:40:43 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:45.087 12:40:43 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:45.087 12:40:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:45.087 12:40:43 -- common/autotest_common.sh@10 -- # set +x 00:12:45.087 [2024-04-16 12:40:43.906539] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:45.087 12:40:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:45.087 12:40:43 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:45.087 12:40:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:45.087 12:40:43 -- common/autotest_common.sh@10 -- # set +x 00:12:45.087 Malloc0 00:12:45.088 12:40:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:45.088 12:40:43 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:45.088 12:40:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:45.088 12:40:43 -- common/autotest_common.sh@10 -- # set +x 00:12:45.088 12:40:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:45.088 12:40:43 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:45.088 12:40:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:45.088 12:40:43 -- common/autotest_common.sh@10 -- # set +x 00:12:45.088 12:40:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:45.088 12:40:43 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:45.088 12:40:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:45.088 12:40:43 -- common/autotest_common.sh@10 -- # set +x 00:12:45.088 [2024-04-16 12:40:43.962443] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:45.088 12:40:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:45.088 12:40:43 -- target/queue_depth.sh@30 -- # bdevperf_pid=1155276 00:12:45.088 12:40:43 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:45.088 12:40:43 -- target/queue_depth.sh@33 -- # waitforlisten 1155276 /var/tmp/bdevperf.sock 00:12:45.088 12:40:43 -- common/autotest_common.sh@817 -- # '[' -z 1155276 ']' 00:12:45.088 12:40:43 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:45.088 12:40:43 -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:12:45.088 12:40:43 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:45.088 12:40:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:12:45.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:45.088 12:40:43 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:45.088 12:40:43 -- common/autotest_common.sh@10 -- # set +x 00:12:45.088 [2024-04-16 12:40:44.011700] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 00:12:45.088 [2024-04-16 12:40:44.011777] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1155276 ] 00:12:45.088 EAL: No free 2048 kB hugepages reported on node 1 00:12:45.088 [2024-04-16 12:40:44.081968] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:45.346 [2024-04-16 12:40:44.193694] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:45.346 12:40:44 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:45.346 12:40:44 -- common/autotest_common.sh@850 -- # return 0 00:12:45.346 12:40:44 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:12:45.346 12:40:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:45.346 12:40:44 -- common/autotest_common.sh@10 -- # set +x 00:12:45.346 NVMe0n1 00:12:45.346 12:40:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:45.346 12:40:44 -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:45.604 Running I/O for 10 seconds... 00:12:55.574 00:12:55.574 Latency(us) 00:12:55.574 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:55.574 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:12:55.574 Verification LBA range: start 0x0 length 0x4000 00:12:55.574 NVMe0n1 : 10.09 8414.55 32.87 0.00 0.00 121212.88 21748.24 78060.66 00:12:55.574 =================================================================================================================== 00:12:55.574 Total : 8414.55 32.87 0.00 0.00 121212.88 21748.24 78060.66 00:12:55.574 0 00:12:55.574 12:40:54 -- target/queue_depth.sh@39 -- # killprocess 1155276 00:12:55.574 12:40:54 -- common/autotest_common.sh@936 -- # '[' -z 1155276 ']' 00:12:55.574 12:40:54 -- common/autotest_common.sh@940 -- # kill -0 1155276 00:12:55.574 12:40:54 -- common/autotest_common.sh@941 -- # uname 00:12:55.575 12:40:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:55.575 12:40:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1155276 00:12:55.833 12:40:54 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:55.833 12:40:54 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:55.833 12:40:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1155276' 00:12:55.833 killing process with pid 1155276 00:12:55.833 12:40:54 -- common/autotest_common.sh@955 -- # kill 1155276 00:12:55.833 Received shutdown signal, test time was about 10.000000 seconds 00:12:55.833 00:12:55.833 Latency(us) 00:12:55.833 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:55.833 =================================================================================================================== 00:12:55.833 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:55.833 12:40:54 -- 
common/autotest_common.sh@960 -- # wait 1155276 00:12:56.090 12:40:54 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:12:56.090 12:40:54 -- target/queue_depth.sh@43 -- # nvmftestfini 00:12:56.090 12:40:54 -- nvmf/common.sh@477 -- # nvmfcleanup 00:12:56.091 12:40:54 -- nvmf/common.sh@117 -- # sync 00:12:56.091 12:40:54 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:56.091 12:40:54 -- nvmf/common.sh@120 -- # set +e 00:12:56.091 12:40:54 -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:56.091 12:40:54 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:56.091 rmmod nvme_tcp 00:12:56.091 rmmod nvme_fabrics 00:12:56.091 rmmod nvme_keyring 00:12:56.091 12:40:55 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:56.091 12:40:55 -- nvmf/common.sh@124 -- # set -e 00:12:56.091 12:40:55 -- nvmf/common.sh@125 -- # return 0 00:12:56.091 12:40:55 -- nvmf/common.sh@478 -- # '[' -n 1155137 ']' 00:12:56.091 12:40:55 -- nvmf/common.sh@479 -- # killprocess 1155137 00:12:56.091 12:40:55 -- common/autotest_common.sh@936 -- # '[' -z 1155137 ']' 00:12:56.091 12:40:55 -- common/autotest_common.sh@940 -- # kill -0 1155137 00:12:56.091 12:40:55 -- common/autotest_common.sh@941 -- # uname 00:12:56.091 12:40:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:56.091 12:40:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1155137 00:12:56.091 12:40:55 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:12:56.091 12:40:55 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:12:56.091 12:40:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1155137' 00:12:56.091 killing process with pid 1155137 00:12:56.091 12:40:55 -- common/autotest_common.sh@955 -- # kill 1155137 00:12:56.091 12:40:55 -- common/autotest_common.sh@960 -- # wait 1155137 00:12:56.349 12:40:55 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:12:56.349 12:40:55 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:12:56.349 12:40:55 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:12:56.349 12:40:55 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:56.349 12:40:55 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:56.349 12:40:55 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:56.349 12:40:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:56.349 12:40:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:58.908 12:40:57 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:58.908 00:12:58.908 real 0m16.501s 00:12:58.908 user 0m22.453s 00:12:58.908 sys 0m3.626s 00:12:58.908 12:40:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:58.908 12:40:57 -- common/autotest_common.sh@10 -- # set +x 00:12:58.908 ************************************ 00:12:58.908 END TEST nvmf_queue_depth 00:12:58.908 ************************************ 00:12:58.908 12:40:57 -- nvmf/nvmf.sh@52 -- # run_test nvmf_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:12:58.908 12:40:57 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:58.908 12:40:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:58.908 12:40:57 -- common/autotest_common.sh@10 -- # set +x 00:12:58.908 ************************************ 00:12:58.908 START TEST nvmf_multipath 00:12:58.908 ************************************ 00:12:58.908 12:40:57 -- common/autotest_common.sh@1111 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:12:58.908 * Looking for test storage... 00:12:58.908 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:58.908 12:40:57 -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:58.908 12:40:57 -- nvmf/common.sh@7 -- # uname -s 00:12:58.908 12:40:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:58.908 12:40:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:58.908 12:40:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:58.908 12:40:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:58.908 12:40:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:58.908 12:40:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:58.908 12:40:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:58.908 12:40:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:58.908 12:40:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:58.908 12:40:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:58.908 12:40:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:12:58.908 12:40:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:12:58.908 12:40:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:58.908 12:40:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:58.908 12:40:57 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:58.908 12:40:57 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:58.908 12:40:57 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:58.908 12:40:57 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:58.908 12:40:57 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:58.908 12:40:57 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:58.908 12:40:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.908 12:40:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.908 12:40:57 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.908 12:40:57 -- paths/export.sh@5 -- # export PATH 00:12:58.908 12:40:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.908 12:40:57 -- nvmf/common.sh@47 -- # : 0 00:12:58.908 12:40:57 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:58.908 12:40:57 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:58.908 12:40:57 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:58.908 12:40:57 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:58.908 12:40:57 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:58.908 12:40:57 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:58.908 12:40:57 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:58.908 12:40:57 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:58.908 12:40:57 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:58.908 12:40:57 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:58.908 12:40:57 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:12:58.908 12:40:57 -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:58.908 12:40:57 -- target/multipath.sh@43 -- # nvmftestinit 00:12:58.908 12:40:57 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:12:58.908 12:40:57 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:58.908 12:40:57 -- nvmf/common.sh@437 -- # prepare_net_devs 00:12:58.908 12:40:57 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:12:58.908 12:40:57 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:12:58.908 12:40:57 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:58.908 12:40:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:58.908 12:40:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:58.908 12:40:57 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:12:58.908 12:40:57 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:12:58.908 12:40:57 -- nvmf/common.sh@285 -- # xtrace_disable 00:12:58.908 12:40:57 -- common/autotest_common.sh@10 -- # set +x 00:13:01.444 12:40:59 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:01.444 12:40:59 -- nvmf/common.sh@291 -- # pci_devs=() 00:13:01.444 12:40:59 -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:01.444 12:40:59 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:01.444 12:40:59 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:01.444 12:40:59 -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:01.444 12:40:59 -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:13:01.444 12:40:59 -- nvmf/common.sh@295 -- # net_devs=() 00:13:01.444 12:40:59 -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:01.444 12:40:59 -- nvmf/common.sh@296 -- # e810=() 00:13:01.444 12:40:59 -- nvmf/common.sh@296 -- # local -ga e810 00:13:01.444 12:40:59 -- nvmf/common.sh@297 -- # x722=() 00:13:01.444 12:40:59 -- nvmf/common.sh@297 -- # local -ga x722 00:13:01.444 12:40:59 -- nvmf/common.sh@298 -- # mlx=() 00:13:01.444 12:40:59 -- nvmf/common.sh@298 -- # local -ga mlx 00:13:01.444 12:40:59 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:01.444 12:40:59 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:01.444 12:40:59 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:01.444 12:40:59 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:01.444 12:40:59 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:01.444 12:40:59 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:01.444 12:40:59 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:01.444 12:40:59 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:01.444 12:40:59 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:01.444 12:40:59 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:01.444 12:40:59 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:01.444 12:40:59 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:01.444 12:40:59 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:01.444 12:40:59 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:01.444 12:40:59 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:01.444 12:40:59 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:01.444 12:40:59 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:01.445 12:40:59 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:01.445 12:40:59 -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:13:01.445 Found 0000:82:00.0 (0x8086 - 0x159b) 00:13:01.445 12:40:59 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:01.445 12:40:59 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:01.445 12:40:59 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:01.445 12:40:59 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:01.445 12:40:59 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:01.445 12:40:59 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:01.445 12:40:59 -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:13:01.445 Found 0000:82:00.1 (0x8086 - 0x159b) 00:13:01.445 12:40:59 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:01.445 12:40:59 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:01.445 12:40:59 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:01.445 12:40:59 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:01.445 12:40:59 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:01.445 12:40:59 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:01.445 12:40:59 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:01.445 12:40:59 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:01.445 12:40:59 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:01.445 12:40:59 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:01.445 12:40:59 -- nvmf/common.sh@384 -- # (( 1 
== 0 )) 00:13:01.445 12:40:59 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:01.445 12:40:59 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:13:01.445 Found net devices under 0000:82:00.0: cvl_0_0 00:13:01.445 12:40:59 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:01.445 12:40:59 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:01.445 12:40:59 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:01.445 12:40:59 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:01.445 12:40:59 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:01.445 12:40:59 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:13:01.445 Found net devices under 0000:82:00.1: cvl_0_1 00:13:01.445 12:40:59 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:01.445 12:40:59 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:13:01.445 12:40:59 -- nvmf/common.sh@403 -- # is_hw=yes 00:13:01.445 12:40:59 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:13:01.445 12:40:59 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:13:01.445 12:40:59 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:13:01.445 12:40:59 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:01.445 12:40:59 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:01.445 12:40:59 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:01.445 12:40:59 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:01.445 12:40:59 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:01.445 12:40:59 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:01.445 12:40:59 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:01.445 12:40:59 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:01.445 12:40:59 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:01.445 12:40:59 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:01.445 12:40:59 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:01.445 12:40:59 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:01.445 12:40:59 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:01.445 12:41:00 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:01.445 12:41:00 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:01.445 12:41:00 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:01.445 12:41:00 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:01.445 12:41:00 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:01.445 12:41:00 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:01.445 12:41:00 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:01.445 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:01.445 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.121 ms 00:13:01.445 00:13:01.445 --- 10.0.0.2 ping statistics --- 00:13:01.445 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:01.445 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:13:01.445 12:41:00 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:01.445 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:01.445 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:13:01.445 00:13:01.445 --- 10.0.0.1 ping statistics --- 00:13:01.445 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:01.445 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:13:01.445 12:41:00 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:01.445 12:41:00 -- nvmf/common.sh@411 -- # return 0 00:13:01.445 12:41:00 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:13:01.445 12:41:00 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:01.445 12:41:00 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:13:01.445 12:41:00 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:13:01.445 12:41:00 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:01.445 12:41:00 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:13:01.445 12:41:00 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:13:01.445 12:41:00 -- target/multipath.sh@45 -- # '[' -z ']' 00:13:01.445 12:41:00 -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:13:01.445 only one NIC for nvmf test 00:13:01.445 12:41:00 -- target/multipath.sh@47 -- # nvmftestfini 00:13:01.445 12:41:00 -- nvmf/common.sh@477 -- # nvmfcleanup 00:13:01.445 12:41:00 -- nvmf/common.sh@117 -- # sync 00:13:01.445 12:41:00 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:01.445 12:41:00 -- nvmf/common.sh@120 -- # set +e 00:13:01.445 12:41:00 -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:01.445 12:41:00 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:01.445 rmmod nvme_tcp 00:13:01.445 rmmod nvme_fabrics 00:13:01.445 rmmod nvme_keyring 00:13:01.445 12:41:00 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:01.445 12:41:00 -- nvmf/common.sh@124 -- # set -e 00:13:01.445 12:41:00 -- nvmf/common.sh@125 -- # return 0 00:13:01.445 12:41:00 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:13:01.445 12:41:00 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:13:01.445 12:41:00 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:13:01.445 12:41:00 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:13:01.445 12:41:00 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:01.445 12:41:00 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:01.445 12:41:00 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:01.445 12:41:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:01.445 12:41:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:03.348 12:41:02 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:03.348 12:41:02 -- target/multipath.sh@48 -- # exit 0 00:13:03.348 12:41:02 -- target/multipath.sh@1 -- # nvmftestfini 00:13:03.348 12:41:02 -- nvmf/common.sh@477 -- # nvmfcleanup 00:13:03.348 12:41:02 -- nvmf/common.sh@117 -- # sync 00:13:03.348 12:41:02 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:03.348 12:41:02 -- nvmf/common.sh@120 -- # set +e 00:13:03.348 12:41:02 -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:03.348 12:41:02 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:03.348 12:41:02 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:03.348 12:41:02 -- nvmf/common.sh@124 -- # set -e 00:13:03.348 12:41:02 -- nvmf/common.sh@125 -- # return 0 00:13:03.348 12:41:02 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:13:03.348 12:41:02 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:13:03.348 12:41:02 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:13:03.348 12:41:02 -- nvmf/common.sh@485 -- # 
nvmf_tcp_fini 00:13:03.348 12:41:02 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:03.348 12:41:02 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:03.348 12:41:02 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:03.348 12:41:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:03.348 12:41:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:03.348 12:41:02 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:03.348 00:13:03.348 real 0m4.715s 00:13:03.348 user 0m0.900s 00:13:03.348 sys 0m1.810s 00:13:03.348 12:41:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:03.348 12:41:02 -- common/autotest_common.sh@10 -- # set +x 00:13:03.348 ************************************ 00:13:03.348 END TEST nvmf_multipath 00:13:03.348 ************************************ 00:13:03.348 12:41:02 -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:13:03.348 12:41:02 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:03.348 12:41:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:03.348 12:41:02 -- common/autotest_common.sh@10 -- # set +x 00:13:03.348 ************************************ 00:13:03.348 START TEST nvmf_zcopy 00:13:03.348 ************************************ 00:13:03.348 12:41:02 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:13:03.348 * Looking for test storage... 00:13:03.348 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:03.348 12:41:02 -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:03.348 12:41:02 -- nvmf/common.sh@7 -- # uname -s 00:13:03.348 12:41:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:03.348 12:41:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:03.348 12:41:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:03.348 12:41:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:03.348 12:41:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:03.348 12:41:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:03.348 12:41:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:03.348 12:41:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:03.348 12:41:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:03.348 12:41:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:03.607 12:41:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:13:03.607 12:41:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:13:03.607 12:41:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:03.607 12:41:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:03.607 12:41:02 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:03.607 12:41:02 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:03.607 12:41:02 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:03.607 12:41:02 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:03.607 12:41:02 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:03.607 12:41:02 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:03.607 
12:41:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:03.607 12:41:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:03.607 12:41:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:03.607 12:41:02 -- paths/export.sh@5 -- # export PATH 00:13:03.607 12:41:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:03.607 12:41:02 -- nvmf/common.sh@47 -- # : 0 00:13:03.607 12:41:02 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:03.607 12:41:02 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:03.607 12:41:02 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:03.607 12:41:02 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:03.607 12:41:02 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:03.607 12:41:02 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:03.607 12:41:02 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:03.607 12:41:02 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:03.607 12:41:02 -- target/zcopy.sh@12 -- # nvmftestinit 00:13:03.607 12:41:02 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:13:03.607 12:41:02 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:03.607 12:41:02 -- nvmf/common.sh@437 -- # prepare_net_devs 00:13:03.607 12:41:02 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:13:03.607 12:41:02 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:13:03.607 12:41:02 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:03.607 12:41:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
00:13:03.607 12:41:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:03.607 12:41:02 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:13:03.607 12:41:02 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:13:03.607 12:41:02 -- nvmf/common.sh@285 -- # xtrace_disable 00:13:03.607 12:41:02 -- common/autotest_common.sh@10 -- # set +x 00:13:06.140 12:41:04 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:06.140 12:41:04 -- nvmf/common.sh@291 -- # pci_devs=() 00:13:06.140 12:41:04 -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:06.140 12:41:04 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:06.140 12:41:04 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:06.140 12:41:04 -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:06.140 12:41:04 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:06.140 12:41:04 -- nvmf/common.sh@295 -- # net_devs=() 00:13:06.140 12:41:04 -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:06.140 12:41:04 -- nvmf/common.sh@296 -- # e810=() 00:13:06.140 12:41:04 -- nvmf/common.sh@296 -- # local -ga e810 00:13:06.140 12:41:04 -- nvmf/common.sh@297 -- # x722=() 00:13:06.140 12:41:04 -- nvmf/common.sh@297 -- # local -ga x722 00:13:06.140 12:41:04 -- nvmf/common.sh@298 -- # mlx=() 00:13:06.140 12:41:04 -- nvmf/common.sh@298 -- # local -ga mlx 00:13:06.140 12:41:04 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:06.140 12:41:04 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:06.140 12:41:04 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:06.140 12:41:04 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:06.140 12:41:04 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:06.140 12:41:04 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:06.140 12:41:04 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:06.140 12:41:04 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:06.140 12:41:04 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:06.140 12:41:04 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:06.140 12:41:04 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:06.140 12:41:04 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:06.140 12:41:04 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:06.140 12:41:04 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:06.140 12:41:04 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:06.140 12:41:04 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:06.140 12:41:04 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:06.140 12:41:04 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:06.140 12:41:04 -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:13:06.140 Found 0000:82:00.0 (0x8086 - 0x159b) 00:13:06.140 12:41:04 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:06.140 12:41:04 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:06.140 12:41:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:06.140 12:41:04 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:06.140 12:41:04 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:06.140 12:41:04 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:06.140 12:41:04 -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:13:06.140 Found 0000:82:00.1 (0x8086 - 
0x159b) 00:13:06.140 12:41:04 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:06.140 12:41:04 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:06.140 12:41:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:06.140 12:41:04 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:06.140 12:41:04 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:06.140 12:41:04 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:06.140 12:41:04 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:06.140 12:41:04 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:06.141 12:41:04 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:06.141 12:41:04 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:06.141 12:41:04 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:06.141 12:41:04 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:06.141 12:41:04 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:13:06.141 Found net devices under 0000:82:00.0: cvl_0_0 00:13:06.141 12:41:04 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:06.141 12:41:04 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:06.141 12:41:04 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:06.141 12:41:04 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:06.141 12:41:04 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:06.141 12:41:04 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:13:06.141 Found net devices under 0000:82:00.1: cvl_0_1 00:13:06.141 12:41:04 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:06.141 12:41:04 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:13:06.141 12:41:04 -- nvmf/common.sh@403 -- # is_hw=yes 00:13:06.141 12:41:04 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:13:06.141 12:41:04 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:13:06.141 12:41:04 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:13:06.141 12:41:04 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:06.141 12:41:04 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:06.141 12:41:04 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:06.141 12:41:04 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:06.141 12:41:04 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:06.141 12:41:04 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:06.141 12:41:04 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:06.141 12:41:04 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:06.141 12:41:04 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:06.141 12:41:04 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:06.141 12:41:04 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:06.141 12:41:04 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:06.141 12:41:04 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:06.141 12:41:04 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:06.141 12:41:04 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:06.141 12:41:04 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:06.141 12:41:04 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:06.141 12:41:04 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:06.141 
12:41:04 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:06.141 12:41:05 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:06.141 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:06.141 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.127 ms 00:13:06.141 00:13:06.141 --- 10.0.0.2 ping statistics --- 00:13:06.141 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:06.141 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:13:06.141 12:41:05 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:06.141 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:06.141 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.083 ms 00:13:06.141 00:13:06.141 --- 10.0.0.1 ping statistics --- 00:13:06.141 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:06.141 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:13:06.141 12:41:05 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:06.141 12:41:05 -- nvmf/common.sh@411 -- # return 0 00:13:06.141 12:41:05 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:13:06.141 12:41:05 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:06.141 12:41:05 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:13:06.141 12:41:05 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:13:06.141 12:41:05 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:06.141 12:41:05 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:13:06.141 12:41:05 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:13:06.141 12:41:05 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:13:06.141 12:41:05 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:13:06.141 12:41:05 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:06.141 12:41:05 -- common/autotest_common.sh@10 -- # set +x 00:13:06.141 12:41:05 -- nvmf/common.sh@470 -- # nvmfpid=1161052 00:13:06.141 12:41:05 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:06.141 12:41:05 -- nvmf/common.sh@471 -- # waitforlisten 1161052 00:13:06.141 12:41:05 -- common/autotest_common.sh@817 -- # '[' -z 1161052 ']' 00:13:06.141 12:41:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:06.141 12:41:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:06.141 12:41:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:06.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:06.141 12:41:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:06.141 12:41:05 -- common/autotest_common.sh@10 -- # set +x 00:13:06.141 [2024-04-16 12:41:05.101796] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 00:13:06.141 [2024-04-16 12:41:05.101893] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:06.141 EAL: No free 2048 kB hugepages reported on node 1 00:13:06.141 [2024-04-16 12:41:05.180867] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:06.400 [2024-04-16 12:41:05.298185] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
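
The nvmf_tcp_init sequence traced above turns the two E810 ports into a self-contained target/initiator pair by moving the target port into a network namespace, so the loopback-style test traffic still crosses real NIC hardware. Condensed from the commands in the trace (the cvl_0_0/cvl_0_1 interface names are specific to this rig):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # open NVMe/TCP on the initiator-side port
    ping -c 1 10.0.0.2                                                   # sanity-check the path before the test runs
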
00:13:06.400 [2024-04-16 12:41:05.298255] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:06.400 [2024-04-16 12:41:05.298282] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:06.400 [2024-04-16 12:41:05.298295] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:06.400 [2024-04-16 12:41:05.298306] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:06.400 [2024-04-16 12:41:05.298337] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:06.400 12:41:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:06.400 12:41:05 -- common/autotest_common.sh@850 -- # return 0 00:13:06.400 12:41:05 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:13:06.400 12:41:05 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:06.400 12:41:05 -- common/autotest_common.sh@10 -- # set +x 00:13:06.400 12:41:05 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:06.400 12:41:05 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:13:06.400 12:41:05 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:13:06.400 12:41:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:06.400 12:41:05 -- common/autotest_common.sh@10 -- # set +x 00:13:06.400 [2024-04-16 12:41:05.444441] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:06.400 12:41:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:06.400 12:41:05 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:06.400 12:41:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:06.400 12:41:05 -- common/autotest_common.sh@10 -- # set +x 00:13:06.400 12:41:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:06.400 12:41:05 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:06.400 12:41:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:06.400 12:41:05 -- common/autotest_common.sh@10 -- # set +x 00:13:06.400 [2024-04-16 12:41:05.460704] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:06.400 12:41:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:06.400 12:41:05 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:06.400 12:41:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:06.400 12:41:05 -- common/autotest_common.sh@10 -- # set +x 00:13:06.658 12:41:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:06.658 12:41:05 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:13:06.658 12:41:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:06.658 12:41:05 -- common/autotest_common.sh@10 -- # set +x 00:13:06.658 malloc0 00:13:06.658 12:41:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:06.658 12:41:05 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:06.658 12:41:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:06.658 12:41:05 -- common/autotest_common.sh@10 -- # set +x 00:13:06.658 12:41:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:06.658 12:41:05 -- target/zcopy.sh@33 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:13:06.658 12:41:05 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:13:06.658 12:41:05 -- nvmf/common.sh@521 -- # config=() 00:13:06.658 12:41:05 -- nvmf/common.sh@521 -- # local subsystem config 00:13:06.658 12:41:05 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:13:06.658 12:41:05 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:13:06.658 { 00:13:06.658 "params": { 00:13:06.658 "name": "Nvme$subsystem", 00:13:06.658 "trtype": "$TEST_TRANSPORT", 00:13:06.658 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:06.658 "adrfam": "ipv4", 00:13:06.658 "trsvcid": "$NVMF_PORT", 00:13:06.658 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:06.658 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:06.658 "hdgst": ${hdgst:-false}, 00:13:06.658 "ddgst": ${ddgst:-false} 00:13:06.658 }, 00:13:06.658 "method": "bdev_nvme_attach_controller" 00:13:06.658 } 00:13:06.658 EOF 00:13:06.658 )") 00:13:06.658 12:41:05 -- nvmf/common.sh@543 -- # cat 00:13:06.658 12:41:05 -- nvmf/common.sh@545 -- # jq . 00:13:06.658 12:41:05 -- nvmf/common.sh@546 -- # IFS=, 00:13:06.658 12:41:05 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:13:06.658 "params": { 00:13:06.658 "name": "Nvme1", 00:13:06.658 "trtype": "tcp", 00:13:06.658 "traddr": "10.0.0.2", 00:13:06.658 "adrfam": "ipv4", 00:13:06.658 "trsvcid": "4420", 00:13:06.658 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:06.658 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:06.658 "hdgst": false, 00:13:06.658 "ddgst": false 00:13:06.658 }, 00:13:06.658 "method": "bdev_nvme_attach_controller" 00:13:06.658 }' 00:13:06.658 [2024-04-16 12:41:05.543692] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 00:13:06.658 [2024-04-16 12:41:05.543770] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1161194 ] 00:13:06.658 EAL: No free 2048 kB hugepages reported on node 1 00:13:06.658 [2024-04-16 12:41:05.622927] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:06.916 [2024-04-16 12:41:05.738670] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:06.916 [2024-04-16 12:41:05.747408] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:13:06.916 Running I/O for 10 seconds... 
00:13:19.116
00:13:19.116 Latency(us)
00:13:19.116 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:13:19.116 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:13:19.116 Verification LBA range: start 0x0 length 0x1000
00:13:19.116 Nvme1n1 : 10.05 5550.08 43.36 0.00 0.00 22910.68 3203.98 41166.32
00:13:19.116 ===================================================================================================================
00:13:19.116 Total : 5550.08 43.36 0.00 0.00 22910.68 3203.98 41166.32
00:13:19.116 12:41:16 -- target/zcopy.sh@39 -- # perfpid=1162387
00:13:19.116 12:41:16 -- target/zcopy.sh@41 -- # xtrace_disable
00:13:19.116 12:41:16 -- common/autotest_common.sh@10 -- # set +x
00:13:19.116 12:41:16 -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:13:19.116 12:41:16 -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:13:19.116 12:41:16 -- nvmf/common.sh@521 -- # config=()
00:13:19.116 12:41:16 -- nvmf/common.sh@521 -- # local subsystem config
00:13:19.116 12:41:16 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}"
00:13:19.116 12:41:16 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF
00:13:19.116 {
00:13:19.116 "params": {
00:13:19.116 "name": "Nvme$subsystem",
00:13:19.116 "trtype": "$TEST_TRANSPORT",
00:13:19.116 "traddr": "$NVMF_FIRST_TARGET_IP",
00:13:19.116 "adrfam": "ipv4",
00:13:19.116 "trsvcid": "$NVMF_PORT",
00:13:19.116 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:13:19.116 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:13:19.116 "hdgst": ${hdgst:-false},
00:13:19.116 "ddgst": ${ddgst:-false}
00:13:19.116 },
00:13:19.116 "method": "bdev_nvme_attach_controller"
00:13:19.116 }
00:13:19.116 EOF
00:13:19.116 )")
00:13:19.116 12:41:16 -- nvmf/common.sh@543 -- # cat
00:13:19.116 12:41:16 -- nvmf/common.sh@545 -- # jq .
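
Both bdevperf runs in this test take their target description over an anonymous file descriptor rather than a config file: gen_nvmf_target_json (expanded above, with its resolved JSON printed just below) is fed to bdevperf via bash process substitution, which is presumably where the /dev/fd/62 and /dev/fd/63 paths in the trace come from. A minimal equivalent, assuming the zcopy target from earlier in this test is still listening:

    build/examples/bdevperf --json <(gen_nvmf_target_json) -t 5 -q 128 -w randrw -M 50 -o 8192
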
00:13:19.116 [2024-04-16 12:41:16.324513] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.116 [2024-04-16 12:41:16.324559] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.116 12:41:16 -- nvmf/common.sh@546 -- # IFS=, 00:13:19.116 12:41:16 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:13:19.116 "params": { 00:13:19.116 "name": "Nvme1", 00:13:19.116 "trtype": "tcp", 00:13:19.116 "traddr": "10.0.0.2", 00:13:19.116 "adrfam": "ipv4", 00:13:19.116 "trsvcid": "4420", 00:13:19.116 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:19.116 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:19.116 "hdgst": false, 00:13:19.116 "ddgst": false 00:13:19.116 }, 00:13:19.116 "method": "bdev_nvme_attach_controller" 00:13:19.116 }' 00:13:19.116 [2024-04-16 12:41:16.332477] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.116 [2024-04-16 12:41:16.332513] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.116 [2024-04-16 12:41:16.340495] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.116 [2024-04-16 12:41:16.340524] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.116 [2024-04-16 12:41:16.348518] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.116 [2024-04-16 12:41:16.348543] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.116 [2024-04-16 12:41:16.356542] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.116 [2024-04-16 12:41:16.356580] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.116 [2024-04-16 12:41:16.362944] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 
00:13:19.116 [2024-04-16 12:41:16.363012] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1162387 ] 00:13:19.116 [2024-04-16 12:41:16.364561] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.116 [2024-04-16 12:41:16.364608] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.116 [2024-04-16 12:41:16.372590] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.116 [2024-04-16 12:41:16.372629] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.116 [2024-04-16 12:41:16.380625] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.116 [2024-04-16 12:41:16.380648] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.116 [2024-04-16 12:41:16.388651] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.116 [2024-04-16 12:41:16.388673] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.116 [2024-04-16 12:41:16.396662] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.116 [2024-04-16 12:41:16.396684] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.116 EAL: No free 2048 kB hugepages reported on node 1 00:13:19.116 [2024-04-16 12:41:16.404681] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.116 [2024-04-16 12:41:16.404702] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.117 [2024-04-16 12:41:16.412698] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.117 [2024-04-16 12:41:16.412719] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.117 [2024-04-16 12:41:16.420720] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.117 [2024-04-16 12:41:16.420741] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.117 [2024-04-16 12:41:16.428739] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.117 [2024-04-16 12:41:16.428759] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.117 [2024-04-16 12:41:16.436762] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.117 [2024-04-16 12:41:16.436782] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.117 [2024-04-16 12:41:16.440377] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:19.117 [2024-04-16 12:41:16.444793] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.117 [2024-04-16 12:41:16.444817] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.117 [2024-04-16 12:41:16.452835] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.117 [2024-04-16 12:41:16.452887] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.117 [2024-04-16 12:41:16.460828] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.117 [2024-04-16 12:41:16.460866] 
nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:19.117 [2024-04-16 12:41:16.468868] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:19.117 [2024-04-16 12:41:16.468889] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line ERROR pair repeats at ~8 ms intervals, 12:41:16.476 through 12:41:16.549; 10 repetitions elided ...]
00:13:19.117 [2024-04-16 12:41:16.554310] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:13:19.117 [2024-04-16 12:41:16.557119] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:19.117 [2024-04-16 12:41:16.557143] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:19.117 [2024-04-16 12:41:16.563075] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started
[... ERROR pair continues at ~8 ms intervals, 12:41:16.565 through 12:41:16.733; 22 repetitions elided ...]
00:13:19.117 Running I/O for 5 seconds...
00:13:19.117 [2024-04-16 12:41:16.741724] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:19.117 [2024-04-16 12:41:16.741749] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
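The pair that dominates this stretch of the log is the namespace-attach failure path: spdk_nvmf_subsystem_add_ns_ext() refuses the request because NSID 1 is already allocated in the subsystem, and the RPC layer (nvmf_rpc_ns_paused) then logs that the namespace could not be added. A minimal bash sketch of how the same two lines can be provoked by hand against a running SPDK target, using the stock scripts/rpc.py client; the NQN, serial number, bdev name, and malloc sizes below are illustrative, not taken from this run:

# create a small malloc bdev and a subsystem to attach it to
scripts/rpc.py bdev_malloc_create -b Malloc0 32 512
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
# the first attach succeeds and claims NSID 1
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 -n 1
# a second attach with the same explicit NSID fails; the target prints the
# subsystem.c "Requested NSID 1 already in use" / nvmf_rpc.c "Unable to add
# namespace" pair seen throughout this run
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 -n 1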
[... the same ERROR pair continues at ~12 ms intervals for the rest of the window, the console timestamp advancing from 00:13:19.117 to 00:13:20.938; roughly 260 repetitions through 12:41:19.896 elided ...]
00:13:20.938 [2024-04-16 12:41:19.908713] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:20.938 [2024-04-16 12:41:19.908738]
nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:20.938 [2024-04-16 12:41:19.920477] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:20.938 [2024-04-16 12:41:19.920507] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:20.938 [2024-04-16 12:41:19.934086] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:20.938 [2024-04-16 12:41:19.934116] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:20.938 [2024-04-16 12:41:19.944715] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:20.938 [2024-04-16 12:41:19.944740] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:20.938 [2024-04-16 12:41:19.956386] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:20.938 [2024-04-16 12:41:19.956415] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:20.938 [2024-04-16 12:41:19.968338] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:20.938 [2024-04-16 12:41:19.968368] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:20.938 [2024-04-16 12:41:19.980247] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:20.938 [2024-04-16 12:41:19.980277] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:20.938 [2024-04-16 12:41:19.992205] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:20.938 [2024-04-16 12:41:19.992235] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:20.938 [2024-04-16 12:41:20.004234] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:20.938 [2024-04-16 12:41:20.004265] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.197 [2024-04-16 12:41:20.016354] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.197 [2024-04-16 12:41:20.016386] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.197 [2024-04-16 12:41:20.028607] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.197 [2024-04-16 12:41:20.028636] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.197 [2024-04-16 12:41:20.040701] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.197 [2024-04-16 12:41:20.040732] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.197 [2024-04-16 12:41:20.052846] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.197 [2024-04-16 12:41:20.052886] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.197 [2024-04-16 12:41:20.065261] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.197 [2024-04-16 12:41:20.065295] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.197 [2024-04-16 12:41:20.077237] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.197 [2024-04-16 12:41:20.077268] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.197 [2024-04-16 12:41:20.089890] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.197 [2024-04-16 12:41:20.089930] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.197 [2024-04-16 12:41:20.102247] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.197 [2024-04-16 12:41:20.102287] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.197 [2024-04-16 12:41:20.114325] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.197 [2024-04-16 12:41:20.114356] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.197 [2024-04-16 12:41:20.126271] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.197 [2024-04-16 12:41:20.126301] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.197 [2024-04-16 12:41:20.138723] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.197 [2024-04-16 12:41:20.138749] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.197 [2024-04-16 12:41:20.150966] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.197 [2024-04-16 12:41:20.150997] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.197 [2024-04-16 12:41:20.162863] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.197 [2024-04-16 12:41:20.162894] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.197 [2024-04-16 12:41:20.174529] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.197 [2024-04-16 12:41:20.174560] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.197 [2024-04-16 12:41:20.186703] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.197 [2024-04-16 12:41:20.186730] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.197 [2024-04-16 12:41:20.199192] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.197 [2024-04-16 12:41:20.199223] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.197 [2024-04-16 12:41:20.212114] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.197 [2024-04-16 12:41:20.212158] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.197 [2024-04-16 12:41:20.224506] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.197 [2024-04-16 12:41:20.224537] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.197 [2024-04-16 12:41:20.236532] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.197 [2024-04-16 12:41:20.236570] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.197 [2024-04-16 12:41:20.248439] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.197 [2024-04-16 12:41:20.248470] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.197 [2024-04-16 12:41:20.260462] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.197 [2024-04-16 12:41:20.260492] 
nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.456 [2024-04-16 12:41:20.273166] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.456 [2024-04-16 12:41:20.273197] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.456 [2024-04-16 12:41:20.285192] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.456 [2024-04-16 12:41:20.285222] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.456 [2024-04-16 12:41:20.297266] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.456 [2024-04-16 12:41:20.297297] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.456 [2024-04-16 12:41:20.309289] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.456 [2024-04-16 12:41:20.309319] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.456 [2024-04-16 12:41:20.321623] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.456 [2024-04-16 12:41:20.321651] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.456 [2024-04-16 12:41:20.333944] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.456 [2024-04-16 12:41:20.333984] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.456 [2024-04-16 12:41:20.346253] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.456 [2024-04-16 12:41:20.346283] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.456 [2024-04-16 12:41:20.358262] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.456 [2024-04-16 12:41:20.358292] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.456 [2024-04-16 12:41:20.369975] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.456 [2024-04-16 12:41:20.370006] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.456 [2024-04-16 12:41:20.382354] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.456 [2024-04-16 12:41:20.382385] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.456 [2024-04-16 12:41:20.394170] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.456 [2024-04-16 12:41:20.394200] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.456 [2024-04-16 12:41:20.405685] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.456 [2024-04-16 12:41:20.405711] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.456 [2024-04-16 12:41:20.417313] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.456 [2024-04-16 12:41:20.417344] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.456 [2024-04-16 12:41:20.429246] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.456 [2024-04-16 12:41:20.429277] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.456 [2024-04-16 12:41:20.441222] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.456 [2024-04-16 12:41:20.441252] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.456 [2024-04-16 12:41:20.453045] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.456 [2024-04-16 12:41:20.453076] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.456 [2024-04-16 12:41:20.464904] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.456 [2024-04-16 12:41:20.464949] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.456 [2024-04-16 12:41:20.477108] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.456 [2024-04-16 12:41:20.477138] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.456 [2024-04-16 12:41:20.488206] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.456 [2024-04-16 12:41:20.488237] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.456 [2024-04-16 12:41:20.500660] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.456 [2024-04-16 12:41:20.500685] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.456 [2024-04-16 12:41:20.512886] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.456 [2024-04-16 12:41:20.512917] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.714 [2024-04-16 12:41:20.525262] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.714 [2024-04-16 12:41:20.525294] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.714 [2024-04-16 12:41:20.537520] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.714 [2024-04-16 12:41:20.537551] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.714 [2024-04-16 12:41:20.549529] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.714 [2024-04-16 12:41:20.549559] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.714 [2024-04-16 12:41:20.561580] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.714 [2024-04-16 12:41:20.561633] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.714 [2024-04-16 12:41:20.573629] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.714 [2024-04-16 12:41:20.573665] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.714 [2024-04-16 12:41:20.585630] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.714 [2024-04-16 12:41:20.585660] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.714 [2024-04-16 12:41:20.597839] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.714 [2024-04-16 12:41:20.597879] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.714 [2024-04-16 12:41:20.609528] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.714 [2024-04-16 12:41:20.609558] 
nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.714 [2024-04-16 12:41:20.621473] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.714 [2024-04-16 12:41:20.621504] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.714 [2024-04-16 12:41:20.633294] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.714 [2024-04-16 12:41:20.633325] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.714 [2024-04-16 12:41:20.645517] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.714 [2024-04-16 12:41:20.645548] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.714 [2024-04-16 12:41:20.657959] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.714 [2024-04-16 12:41:20.657990] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.714 [2024-04-16 12:41:20.669832] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.714 [2024-04-16 12:41:20.669874] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.714 [2024-04-16 12:41:20.681330] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.714 [2024-04-16 12:41:20.681360] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.714 [2024-04-16 12:41:20.693516] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.714 [2024-04-16 12:41:20.693547] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.714 [2024-04-16 12:41:20.705470] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.714 [2024-04-16 12:41:20.705501] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.714 [2024-04-16 12:41:20.717206] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.714 [2024-04-16 12:41:20.717236] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.714 [2024-04-16 12:41:20.729018] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.714 [2024-04-16 12:41:20.729049] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.714 [2024-04-16 12:41:20.740587] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.714 [2024-04-16 12:41:20.740628] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.714 [2024-04-16 12:41:20.752366] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.714 [2024-04-16 12:41:20.752396] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.715 [2024-04-16 12:41:20.763952] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.715 [2024-04-16 12:41:20.763982] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.715 [2024-04-16 12:41:20.775577] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.715 [2024-04-16 12:41:20.775628] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.974 [2024-04-16 12:41:20.787508] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.974 [2024-04-16 12:41:20.787549] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.974 [2024-04-16 12:41:20.799498] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.974 [2024-04-16 12:41:20.799530] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.974 [2024-04-16 12:41:20.813008] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.974 [2024-04-16 12:41:20.813034] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.974 [2024-04-16 12:41:20.823683] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.974 [2024-04-16 12:41:20.823711] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.974 [2024-04-16 12:41:20.835355] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.974 [2024-04-16 12:41:20.835381] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.974 [2024-04-16 12:41:20.846530] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.974 [2024-04-16 12:41:20.846579] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.974 [2024-04-16 12:41:20.857541] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.974 [2024-04-16 12:41:20.857590] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.974 [2024-04-16 12:41:20.868433] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.974 [2024-04-16 12:41:20.868458] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.974 [2024-04-16 12:41:20.879467] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.974 [2024-04-16 12:41:20.879493] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.974 [2024-04-16 12:41:20.890243] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.974 [2024-04-16 12:41:20.890268] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.974 [2024-04-16 12:41:20.902827] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.974 [2024-04-16 12:41:20.902870] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.974 [2024-04-16 12:41:20.912764] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.974 [2024-04-16 12:41:20.912791] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.974 [2024-04-16 12:41:20.925276] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.974 [2024-04-16 12:41:20.925301] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.974 [2024-04-16 12:41:20.935939] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.974 [2024-04-16 12:41:20.935963] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.974 [2024-04-16 12:41:20.947281] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.974 [2024-04-16 12:41:20.947306] 
nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.974 [2024-04-16 12:41:20.958470] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.974 [2024-04-16 12:41:20.958495] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.974 [2024-04-16 12:41:20.969169] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.974 [2024-04-16 12:41:20.969194] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.974 [2024-04-16 12:41:20.979739] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.974 [2024-04-16 12:41:20.979766] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.974 [2024-04-16 12:41:20.990395] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.974 [2024-04-16 12:41:20.990420] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.974 [2024-04-16 12:41:21.001136] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.974 [2024-04-16 12:41:21.001169] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.974 [2024-04-16 12:41:21.013775] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.974 [2024-04-16 12:41:21.013802] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.974 [2024-04-16 12:41:21.023346] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.974 [2024-04-16 12:41:21.023371] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.974 [2024-04-16 12:41:21.034444] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:21.974 [2024-04-16 12:41:21.034470] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.233 [2024-04-16 12:41:21.045414] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.233 [2024-04-16 12:41:21.045455] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.233 [2024-04-16 12:41:21.056007] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.233 [2024-04-16 12:41:21.056033] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.233 [2024-04-16 12:41:21.066621] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.233 [2024-04-16 12:41:21.066648] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.233 [2024-04-16 12:41:21.077510] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.233 [2024-04-16 12:41:21.077535] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.233 [2024-04-16 12:41:21.088364] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.233 [2024-04-16 12:41:21.088389] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.233 [2024-04-16 12:41:21.098967] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.233 [2024-04-16 12:41:21.098992] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.233 [2024-04-16 12:41:21.109517] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.233 [2024-04-16 12:41:21.109542] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.233 [2024-04-16 12:41:21.120471] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.233 [2024-04-16 12:41:21.120496] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.233 [2024-04-16 12:41:21.131097] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.233 [2024-04-16 12:41:21.131122] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.233 [2024-04-16 12:41:21.142231] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.233 [2024-04-16 12:41:21.142255] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.233 [2024-04-16 12:41:21.153626] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.233 [2024-04-16 12:41:21.153652] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.233 [2024-04-16 12:41:21.164319] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.233 [2024-04-16 12:41:21.164344] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.233 [2024-04-16 12:41:21.175040] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.233 [2024-04-16 12:41:21.175066] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.233 [2024-04-16 12:41:21.185695] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.233 [2024-04-16 12:41:21.185723] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.233 [2024-04-16 12:41:21.198350] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.233 [2024-04-16 12:41:21.198384] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.233 [2024-04-16 12:41:21.208336] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.233 [2024-04-16 12:41:21.208361] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.233 [2024-04-16 12:41:21.219888] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.233 [2024-04-16 12:41:21.219929] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.233 [2024-04-16 12:41:21.230916] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.233 [2024-04-16 12:41:21.230941] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.233 [2024-04-16 12:41:21.241783] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.233 [2024-04-16 12:41:21.241810] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.233 [2024-04-16 12:41:21.254497] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.233 [2024-04-16 12:41:21.254522] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.233 [2024-04-16 12:41:21.264261] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.233 [2024-04-16 12:41:21.264286] 
nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.233 [2024-04-16 12:41:21.275517] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.233 [2024-04-16 12:41:21.275557] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.233 [2024-04-16 12:41:21.286225] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.233 [2024-04-16 12:41:21.286250] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.233 [2024-04-16 12:41:21.297088] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.233 [2024-04-16 12:41:21.297113] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.491 [2024-04-16 12:41:21.308690] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.491 [2024-04-16 12:41:21.308718] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.491 [2024-04-16 12:41:21.320036] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.491 [2024-04-16 12:41:21.320061] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.491 [2024-04-16 12:41:21.331017] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.491 [2024-04-16 12:41:21.331041] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.491 [2024-04-16 12:41:21.342420] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.491 [2024-04-16 12:41:21.342445] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.491 [2024-04-16 12:41:21.353675] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.491 [2024-04-16 12:41:21.353704] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.491 [2024-04-16 12:41:21.365900] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.492 [2024-04-16 12:41:21.365931] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.492 [2024-04-16 12:41:21.377862] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.492 [2024-04-16 12:41:21.377893] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.492 [2024-04-16 12:41:21.389484] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.492 [2024-04-16 12:41:21.389514] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.492 [2024-04-16 12:41:21.401673] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.492 [2024-04-16 12:41:21.401700] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.492 [2024-04-16 12:41:21.413248] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.492 [2024-04-16 12:41:21.413279] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.492 [2024-04-16 12:41:21.424802] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.492 [2024-04-16 12:41:21.424829] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.492 [2024-04-16 12:41:21.436630] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.492 [2024-04-16 12:41:21.436657] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.492 [2024-04-16 12:41:21.448408] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.492 [2024-04-16 12:41:21.448447] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.492 [2024-04-16 12:41:21.464341] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.492 [2024-04-16 12:41:21.464371] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.492 [2024-04-16 12:41:21.475370] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.492 [2024-04-16 12:41:21.475401] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.492 [2024-04-16 12:41:21.487307] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.492 [2024-04-16 12:41:21.487337] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.492 [2024-04-16 12:41:21.499204] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.492 [2024-04-16 12:41:21.499235] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.492 [2024-04-16 12:41:21.511083] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.492 [2024-04-16 12:41:21.511113] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.492 [2024-04-16 12:41:21.522535] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.492 [2024-04-16 12:41:21.522574] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.492 [2024-04-16 12:41:21.534105] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.492 [2024-04-16 12:41:21.534134] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.492 [2024-04-16 12:41:21.545959] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.492 [2024-04-16 12:41:21.545990] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.492 [2024-04-16 12:41:21.557504] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.492 [2024-04-16 12:41:21.557535] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.750 [2024-04-16 12:41:21.569733] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.750 [2024-04-16 12:41:21.569760] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.750 [2024-04-16 12:41:21.581576] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.750 [2024-04-16 12:41:21.581619] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.750 [2024-04-16 12:41:21.593398] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.750 [2024-04-16 12:41:21.593429] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.750 [2024-04-16 12:41:21.605169] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.750 [2024-04-16 12:41:21.605199] 
nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.750 [2024-04-16 12:41:21.617236] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.750 [2024-04-16 12:41:21.617267] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.750 [2024-04-16 12:41:21.628898] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.750 [2024-04-16 12:41:21.628929] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.750 [2024-04-16 12:41:21.640423] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.750 [2024-04-16 12:41:21.640453] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.750 [2024-04-16 12:41:21.654118] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.750 [2024-04-16 12:41:21.654148] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.750 [2024-04-16 12:41:21.664764] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.750 [2024-04-16 12:41:21.664790] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.750 [2024-04-16 12:41:21.676778] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.750 [2024-04-16 12:41:21.676803] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.750 [2024-04-16 12:41:21.688425] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.750 [2024-04-16 12:41:21.688456] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.750 [2024-04-16 12:41:21.700297] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.750 [2024-04-16 12:41:21.700327] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.750 [2024-04-16 12:41:21.712288] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.750 [2024-04-16 12:41:21.712318] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.750 [2024-04-16 12:41:21.723834] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.750 [2024-04-16 12:41:21.723878] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.750 [2024-04-16 12:41:21.735430] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.750 [2024-04-16 12:41:21.735460] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.750 [2024-04-16 12:41:21.747410] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.750 [2024-04-16 12:41:21.747440] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.750 [2024-04-16 12:41:21.757173] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.750 [2024-04-16 12:41:21.757203] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.750 00:13:22.750 Latency(us) 00:13:22.750 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:22.750 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:13:22.750 Nvme1n1 : 5.01 10899.41 85.15 0.00 0.00 11727.94 4781.70 27767.85 00:13:22.750 
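For context: the failures condensed above are provoked deliberately. The zcopy test keeps asking the target to attach another namespace under an NSID that is already attached, so every attempt fails the same way. The same collision can be reproduced by hand against a running target with scripts/rpc.py; a minimal sketch, reusing the subsystem NQN seen in this run but with hypothetical malloc bdev names:

    # Two backing bdevs (64 MB each, 512-byte blocks); the names are placeholders.
    ./scripts/rpc.py bdev_malloc_create -b malloc0 64 512
    ./scripts/rpc.py bdev_malloc_create -b malloc1 64 512
    # First add claims NSID 1 on the subsystem.
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    # Second add requests NSID 1 again and is rejected with
    # "Requested NSID 1 already in use" / "Unable to add namespace".
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc1 -n 1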
00:13:22.750
00:13:22.750 Latency(us)
00:13:22.750 Device Information : runtime(s)   IOPS       MiB/s    Fail/s   TO/s     Average    min        max
00:13:22.750 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:13:22.750 Nvme1n1            : 5.01         10899.41   85.15    0.00     0.00     11727.94   4781.70    27767.85
00:13:22.750 ===================================================================================================================
00:13:22.750 Total              :              10899.41   85.15    0.00     0.00     11727.94   4781.70    27767.85
00:13:22.750 [2024-04-16 12:41:21.761699] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:22.750 [2024-04-16 12:41:21.761723] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same pair repeats roughly every 8 ms, from 12:41:21.769763 through 12:41:22.042552; about 35 further verbatim occurrences elided ...]
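In the transcript that follows, the test tears the namespace down, wraps malloc0 in a delay bdev (1 s average latency on reads and writes), re-exposes it as NSID 1, and runs the abort example against it, presumably so that in-flight commands are slow enough to be aborted. A small hedged sketch of how the same delay bdev could be inspected after creation; rpc_cmd is the test helper used below, and plain scripts/rpc.py behaves the same:

    # bdev_delay_create takes average/p99 read and write latencies in microseconds:
    #   -r avg_read  -t p99_read  -w avg_write  -n p99_write
    rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    # Confirm the wrapper exists and sits on top of malloc0 (names from this run).
    rpc_cmd bdev_get_bdevs -b delay0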
00:13:23.011 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1162387) - No such process
00:13:23.011 12:41:22 -- target/zcopy.sh@49 -- # wait 1162387
00:13:23.011 12:41:22 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:13:23.011 12:41:22 -- common/autotest_common.sh@549 -- # xtrace_disable
00:13:23.011 12:41:22 -- common/autotest_common.sh@10 -- # set +x
00:13:23.011 12:41:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:13:23.011 12:41:22 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:13:23.011 12:41:22 -- common/autotest_common.sh@549 -- # xtrace_disable
00:13:23.011 12:41:22 -- common/autotest_common.sh@10 -- # set +x
00:13:23.011 delay0
00:13:23.011 12:41:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:13:23.011 12:41:22 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:13:23.011 12:41:22 -- common/autotest_common.sh@549 -- # xtrace_disable
00:13:23.011 12:41:22 -- common/autotest_common.sh@10 -- # set +x
00:13:23.011 12:41:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:13:23.011 12:41:22 -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:13:23.269 EAL: No free 2048 kB hugepages reported on node 1
00:13:23.269 [2024-04-16 12:41:22.204720] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current
discovery service or discovery service referral 00:13:29.826 Initializing NVMe Controllers 00:13:29.826 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:29.826 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:29.826 Initialization complete. Launching workers. 00:13:29.826 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 183 00:13:29.826 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 470, failed to submit 33 00:13:29.826 success 308, unsuccess 162, failed 0 00:13:29.826 12:41:28 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:13:29.826 12:41:28 -- target/zcopy.sh@60 -- # nvmftestfini 00:13:29.826 12:41:28 -- nvmf/common.sh@477 -- # nvmfcleanup 00:13:29.826 12:41:28 -- nvmf/common.sh@117 -- # sync 00:13:29.826 12:41:28 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:29.826 12:41:28 -- nvmf/common.sh@120 -- # set +e 00:13:29.826 12:41:28 -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:29.826 12:41:28 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:29.826 rmmod nvme_tcp 00:13:29.826 rmmod nvme_fabrics 00:13:29.826 rmmod nvme_keyring 00:13:29.826 12:41:28 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:29.826 12:41:28 -- nvmf/common.sh@124 -- # set -e 00:13:29.826 12:41:28 -- nvmf/common.sh@125 -- # return 0 00:13:29.826 12:41:28 -- nvmf/common.sh@478 -- # '[' -n 1161052 ']' 00:13:29.826 12:41:28 -- nvmf/common.sh@479 -- # killprocess 1161052 00:13:29.826 12:41:28 -- common/autotest_common.sh@936 -- # '[' -z 1161052 ']' 00:13:29.826 12:41:28 -- common/autotest_common.sh@940 -- # kill -0 1161052 00:13:29.826 12:41:28 -- common/autotest_common.sh@941 -- # uname 00:13:29.826 12:41:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:29.826 12:41:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1161052 00:13:29.826 12:41:28 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:13:29.826 12:41:28 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:13:29.826 12:41:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1161052' 00:13:29.826 killing process with pid 1161052 00:13:29.826 12:41:28 -- common/autotest_common.sh@955 -- # kill 1161052 00:13:29.826 12:41:28 -- common/autotest_common.sh@960 -- # wait 1161052 00:13:29.826 12:41:28 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:13:29.826 12:41:28 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:13:29.826 12:41:28 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:13:29.826 12:41:28 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:29.826 12:41:28 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:29.826 12:41:28 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:29.826 12:41:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:29.826 12:41:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:31.729 12:41:30 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:31.729 00:13:31.729 real 0m28.358s 00:13:31.729 user 0m40.297s 00:13:31.729 sys 0m9.665s 00:13:31.729 12:41:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:31.729 12:41:30 -- common/autotest_common.sh@10 -- # set +x 00:13:31.729 ************************************ 00:13:31.729 END TEST nvmf_zcopy 00:13:31.729 ************************************ 00:13:31.729 12:41:30 -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:13:31.729 12:41:30 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:31.729 12:41:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:31.729 12:41:30 -- common/autotest_common.sh@10 -- # set +x 00:13:31.990 ************************************ 00:13:31.990 START TEST nvmf_nmic 00:13:31.990 ************************************ 00:13:31.990 12:41:30 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:13:31.990 * Looking for test storage... 00:13:31.990 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:31.990 12:41:30 -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:31.990 12:41:30 -- nvmf/common.sh@7 -- # uname -s 00:13:31.990 12:41:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:31.990 12:41:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:31.990 12:41:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:31.990 12:41:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:31.990 12:41:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:31.990 12:41:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:31.990 12:41:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:31.990 12:41:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:31.990 12:41:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:31.990 12:41:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:31.990 12:41:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:13:31.990 12:41:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:13:31.990 12:41:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:31.990 12:41:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:31.990 12:41:30 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:31.990 12:41:30 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:31.990 12:41:30 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:31.990 12:41:30 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:31.990 12:41:30 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:31.990 12:41:30 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:31.990 12:41:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:31.990 12:41:30 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:31.990 12:41:30 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:31.990 12:41:30 -- paths/export.sh@5 -- # export PATH 00:13:31.990 12:41:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:31.990 12:41:30 -- nvmf/common.sh@47 -- # : 0 00:13:31.990 12:41:30 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:31.990 12:41:30 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:31.990 12:41:30 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:31.990 12:41:30 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:31.990 12:41:30 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:31.990 12:41:30 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:31.990 12:41:30 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:31.990 12:41:30 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:31.990 12:41:30 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:31.990 12:41:30 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:31.990 12:41:30 -- target/nmic.sh@14 -- # nvmftestinit 00:13:31.990 12:41:30 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:13:31.990 12:41:30 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:31.990 12:41:30 -- nvmf/common.sh@437 -- # prepare_net_devs 00:13:31.990 12:41:30 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:13:31.990 12:41:30 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:13:31.990 12:41:30 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:31.990 12:41:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:31.990 12:41:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:31.990 12:41:30 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:13:31.990 12:41:30 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:13:31.990 12:41:30 -- nvmf/common.sh@285 -- # xtrace_disable 00:13:31.990 12:41:30 -- common/autotest_common.sh@10 -- # set +x 00:13:34.522 12:41:33 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 
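The gather_supported_nvmf_pci_devs trace that follows builds lookup tables of supported NIC PCI IDs (e810, x722, mlx) and matches them against the host. A minimal standalone sketch of the same check, assuming lspci is available on the box; the 0x8086:0x159b E810 vendor:device pair is taken from the trace itself:

    # Hypothetical standalone check, not part of the harness: list PCI
    # functions carrying the Intel E810 ID that the e810 array is keyed on.
    lspci -Dnn | grep -i '8086:159b'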
00:13:34.522 12:41:33 -- nvmf/common.sh@291 -- # pci_devs=() 00:13:34.522 12:41:33 -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:34.522 12:41:33 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:34.522 12:41:33 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:34.522 12:41:33 -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:34.522 12:41:33 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:34.522 12:41:33 -- nvmf/common.sh@295 -- # net_devs=() 00:13:34.522 12:41:33 -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:34.522 12:41:33 -- nvmf/common.sh@296 -- # e810=() 00:13:34.522 12:41:33 -- nvmf/common.sh@296 -- # local -ga e810 00:13:34.522 12:41:33 -- nvmf/common.sh@297 -- # x722=() 00:13:34.522 12:41:33 -- nvmf/common.sh@297 -- # local -ga x722 00:13:34.522 12:41:33 -- nvmf/common.sh@298 -- # mlx=() 00:13:34.522 12:41:33 -- nvmf/common.sh@298 -- # local -ga mlx 00:13:34.522 12:41:33 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:34.522 12:41:33 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:34.522 12:41:33 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:34.522 12:41:33 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:34.522 12:41:33 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:34.522 12:41:33 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:34.522 12:41:33 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:34.522 12:41:33 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:34.522 12:41:33 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:34.522 12:41:33 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:34.522 12:41:33 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:34.522 12:41:33 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:34.522 12:41:33 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:34.522 12:41:33 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:34.522 12:41:33 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:34.522 12:41:33 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:34.522 12:41:33 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:34.522 12:41:33 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:34.522 12:41:33 -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:13:34.522 Found 0000:82:00.0 (0x8086 - 0x159b) 00:13:34.522 12:41:33 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:34.522 12:41:33 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:34.522 12:41:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:34.522 12:41:33 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:34.522 12:41:33 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:34.522 12:41:33 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:34.522 12:41:33 -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:13:34.522 Found 0000:82:00.1 (0x8086 - 0x159b) 00:13:34.522 12:41:33 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:34.522 12:41:33 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:34.522 12:41:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:34.522 12:41:33 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:34.522 12:41:33 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:34.522 12:41:33 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 
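Both E810 ports (0000:82:00.0 and 0000:82:00.1) are matched above; the next step resolves each PCI function to its kernel net device through sysfs, which is where the cvl_0_0/cvl_0_1 names in the lines below come from. A sketch of that lookup, using a PCI address from this run:

    # Each matched PCI function exposes its netdev under sysfs; on this box
    # the listing would print cvl_0_0 (cf. the 'Found net devices' echo below).
    ls /sys/bus/pci/devices/0000:82:00.0/net/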
00:13:34.522 12:41:33 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:34.522 12:41:33 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:34.522 12:41:33 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:34.522 12:41:33 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:34.522 12:41:33 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:34.522 12:41:33 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:34.522 12:41:33 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:13:34.522 Found net devices under 0000:82:00.0: cvl_0_0 00:13:34.522 12:41:33 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:34.522 12:41:33 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:34.522 12:41:33 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:34.522 12:41:33 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:34.522 12:41:33 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:34.522 12:41:33 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:13:34.522 Found net devices under 0000:82:00.1: cvl_0_1 00:13:34.522 12:41:33 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:34.522 12:41:33 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:13:34.522 12:41:33 -- nvmf/common.sh@403 -- # is_hw=yes 00:13:34.522 12:41:33 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:13:34.522 12:41:33 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:13:34.522 12:41:33 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:13:34.522 12:41:33 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:34.522 12:41:33 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:34.522 12:41:33 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:34.522 12:41:33 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:34.522 12:41:33 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:34.522 12:41:33 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:34.522 12:41:33 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:34.522 12:41:33 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:34.522 12:41:33 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:34.522 12:41:33 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:34.522 12:41:33 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:34.522 12:41:33 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:34.522 12:41:33 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:34.522 12:41:33 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:34.522 12:41:33 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:34.522 12:41:33 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:34.522 12:41:33 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:34.522 12:41:33 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:34.522 12:41:33 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:34.522 12:41:33 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:34.522 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:34.522 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.120 ms 00:13:34.522 00:13:34.522 --- 10.0.0.2 ping statistics --- 00:13:34.522 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:34.522 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:13:34.522 12:41:33 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:34.522 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:34.522 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.172 ms 00:13:34.522 00:13:34.522 --- 10.0.0.1 ping statistics --- 00:13:34.522 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:34.522 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:13:34.522 12:41:33 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:34.522 12:41:33 -- nvmf/common.sh@411 -- # return 0 00:13:34.522 12:41:33 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:13:34.522 12:41:33 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:34.522 12:41:33 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:13:34.522 12:41:33 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:13:34.522 12:41:33 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:34.522 12:41:33 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:13:34.522 12:41:33 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:13:34.522 12:41:33 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:13:34.522 12:41:33 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:13:34.522 12:41:33 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:34.522 12:41:33 -- common/autotest_common.sh@10 -- # set +x 00:13:34.522 12:41:33 -- nvmf/common.sh@470 -- # nvmfpid=1166071 00:13:34.522 12:41:33 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:34.522 12:41:33 -- nvmf/common.sh@471 -- # waitforlisten 1166071 00:13:34.522 12:41:33 -- common/autotest_common.sh@817 -- # '[' -z 1166071 ']' 00:13:34.522 12:41:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:34.522 12:41:33 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:34.522 12:41:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:34.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:34.522 12:41:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:34.522 12:41:33 -- common/autotest_common.sh@10 -- # set +x 00:13:34.780 [2024-04-16 12:41:33.610331] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 00:13:34.780 [2024-04-16 12:41:33.610416] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:34.780 EAL: No free 2048 kB hugepages reported on node 1 00:13:34.780 [2024-04-16 12:41:33.690692] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:34.780 [2024-04-16 12:41:33.810520] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:34.780 [2024-04-16 12:41:33.810575] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:34.780 [2024-04-16 12:41:33.810591] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:34.780 [2024-04-16 12:41:33.810619] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:34.780 [2024-04-16 12:41:33.810629] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:34.780 [2024-04-16 12:41:33.810686] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:34.780 [2024-04-16 12:41:33.810713] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:34.780 [2024-04-16 12:41:33.811049] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:34.780 [2024-04-16 12:41:33.811053] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:35.712 12:41:34 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:35.712 12:41:34 -- common/autotest_common.sh@850 -- # return 0 00:13:35.712 12:41:34 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:13:35.712 12:41:34 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:35.712 12:41:34 -- common/autotest_common.sh@10 -- # set +x 00:13:35.712 12:41:34 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:35.712 12:41:34 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:35.712 12:41:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:35.712 12:41:34 -- common/autotest_common.sh@10 -- # set +x 00:13:35.712 [2024-04-16 12:41:34.606476] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:35.712 12:41:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:35.712 12:41:34 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:35.712 12:41:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:35.712 12:41:34 -- common/autotest_common.sh@10 -- # set +x 00:13:35.712 Malloc0 00:13:35.712 12:41:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:35.712 12:41:34 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:35.712 12:41:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:35.712 12:41:34 -- common/autotest_common.sh@10 -- # set +x 00:13:35.712 12:41:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:35.712 12:41:34 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:35.712 12:41:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:35.712 12:41:34 -- common/autotest_common.sh@10 -- # set +x 00:13:35.712 12:41:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:35.712 12:41:34 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:35.712 12:41:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:35.712 12:41:34 -- common/autotest_common.sh@10 -- # set +x 00:13:35.712 [2024-04-16 12:41:34.657695] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:35.712 12:41:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:35.712 12:41:34 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:13:35.712 test case1: single bdev can't be used in multiple subsystems 00:13:35.712 12:41:34 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:13:35.712 12:41:34 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:13:35.712 12:41:34 -- common/autotest_common.sh@10 -- # set +x 00:13:35.712 12:41:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:35.712 12:41:34 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:13:35.712 12:41:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:35.712 12:41:34 -- common/autotest_common.sh@10 -- # set +x 00:13:35.712 12:41:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:35.712 12:41:34 -- target/nmic.sh@28 -- # nmic_status=0 00:13:35.712 12:41:34 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:13:35.712 12:41:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:35.712 12:41:34 -- common/autotest_common.sh@10 -- # set +x 00:13:35.712 [2024-04-16 12:41:34.681538] bdev.c:7988:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:13:35.712 [2024-04-16 12:41:34.681589] subsystem.c:1930:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:13:35.712 [2024-04-16 12:41:34.681604] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.712 request: 00:13:35.712 { 00:13:35.712 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:13:35.712 "namespace": { 00:13:35.712 "bdev_name": "Malloc0", 00:13:35.712 "no_auto_visible": false 00:13:35.712 }, 00:13:35.712 "method": "nvmf_subsystem_add_ns", 00:13:35.712 "req_id": 1 00:13:35.712 } 00:13:35.712 Got JSON-RPC error response 00:13:35.712 response: 00:13:35.712 { 00:13:35.712 "code": -32602, 00:13:35.712 "message": "Invalid parameters" 00:13:35.712 } 00:13:35.712 12:41:34 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:13:35.712 12:41:34 -- target/nmic.sh@29 -- # nmic_status=1 00:13:35.712 12:41:34 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:13:35.712 12:41:34 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:13:35.712 Adding namespace failed - expected result. 
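test case1 above ends in the expected JSON-RPC failure: Malloc0 is already claimed exclusive_write by cnode1, so adding it to cnode2 is rejected with error -32602. A hedged reproduction using rpc.py directly, run from the spdk checkout; the rpc_cmd helper in the trace wraps these same calls:

    # Build the same state by hand; the last call is expected to fail.
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0  # -> "Invalid parameters"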
00:13:35.712 12:41:34 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:13:35.712 test case2: host connect to nvmf target in multiple paths 00:13:35.712 12:41:34 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:13:35.712 12:41:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:35.712 12:41:34 -- common/autotest_common.sh@10 -- # set +x 00:13:35.712 [2024-04-16 12:41:34.689688] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:13:35.712 12:41:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:35.712 12:41:34 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:36.275 12:41:35 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:13:37.208 12:41:36 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:13:37.208 12:41:36 -- common/autotest_common.sh@1184 -- # local i=0 00:13:37.208 12:41:36 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:13:37.208 12:41:36 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:13:37.208 12:41:36 -- common/autotest_common.sh@1191 -- # sleep 2 00:13:39.106 12:41:38 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:13:39.106 12:41:38 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:13:39.106 12:41:38 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:13:39.106 12:41:38 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:13:39.106 12:41:38 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:13:39.106 12:41:38 -- common/autotest_common.sh@1194 -- # return 0 00:13:39.106 12:41:38 -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:13:39.106 [global] 00:13:39.106 thread=1 00:13:39.106 invalidate=1 00:13:39.106 rw=write 00:13:39.106 time_based=1 00:13:39.106 runtime=1 00:13:39.106 ioengine=libaio 00:13:39.106 direct=1 00:13:39.106 bs=4096 00:13:39.106 iodepth=1 00:13:39.106 norandommap=0 00:13:39.106 numjobs=1 00:13:39.106 00:13:39.106 verify_dump=1 00:13:39.106 verify_backlog=512 00:13:39.106 verify_state_save=0 00:13:39.106 do_verify=1 00:13:39.106 verify=crc32c-intel 00:13:39.106 [job0] 00:13:39.106 filename=/dev/nvme0n1 00:13:39.106 Could not set queue depth (nvme0n1) 00:13:39.363 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:39.363 fio-3.35 00:13:39.363 Starting 1 thread 00:13:40.735 00:13:40.735 job0: (groupid=0, jobs=1): err= 0: pid=1166729: Tue Apr 16 12:41:39 2024 00:13:40.735 read: IOPS=21, BW=84.7KiB/s (86.7kB/s)(88.0KiB/1039msec) 00:13:40.735 slat (nsec): min=9203, max=49489, avg=28178.86, stdev=11663.92 00:13:40.735 clat (usec): min=368, max=41768, avg=39124.93, stdev=8658.66 00:13:40.735 lat (usec): min=384, max=41802, avg=39153.10, stdev=8661.50 00:13:40.735 clat percentiles (usec): 00:13:40.735 | 1.00th=[ 371], 5.00th=[40633], 10.00th=[40633], 20.00th=[40633], 00:13:40.735 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:13:40.735 | 
70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:13:40.735 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:13:40.735 | 99.99th=[41681] 00:13:40.735 write: IOPS=492, BW=1971KiB/s (2018kB/s)(2048KiB/1039msec); 0 zone resets 00:13:40.735 slat (usec): min=7, max=29665, avg=74.78, stdev=1310.34 00:13:40.735 clat (usec): min=161, max=566, avg=267.96, stdev=115.61 00:13:40.735 lat (usec): min=169, max=29920, avg=342.74, stdev=1315.80 00:13:40.735 clat percentiles (usec): 00:13:40.735 | 1.00th=[ 167], 5.00th=[ 174], 10.00th=[ 176], 20.00th=[ 184], 00:13:40.735 | 30.00th=[ 188], 40.00th=[ 196], 50.00th=[ 206], 60.00th=[ 227], 00:13:40.735 | 70.00th=[ 251], 80.00th=[ 437], 90.00th=[ 465], 95.00th=[ 486], 00:13:40.735 | 99.00th=[ 519], 99.50th=[ 529], 99.90th=[ 570], 99.95th=[ 570], 00:13:40.735 | 99.99th=[ 570] 00:13:40.735 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:13:40.735 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:40.735 lat (usec) : 250=67.04%, 500=26.03%, 750=3.00% 00:13:40.735 lat (msec) : 50=3.93% 00:13:40.735 cpu : usr=0.87%, sys=0.87%, ctx=536, majf=0, minf=2 00:13:40.735 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:40.735 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:40.735 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:40.735 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:40.735 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:40.735 00:13:40.735 Run status group 0 (all jobs): 00:13:40.735 READ: bw=84.7KiB/s (86.7kB/s), 84.7KiB/s-84.7KiB/s (86.7kB/s-86.7kB/s), io=88.0KiB (90.1kB), run=1039-1039msec 00:13:40.735 WRITE: bw=1971KiB/s (2018kB/s), 1971KiB/s-1971KiB/s (2018kB/s-2018kB/s), io=2048KiB (2097kB), run=1039-1039msec 00:13:40.735 00:13:40.735 Disk stats (read/write): 00:13:40.735 nvme0n1: ios=44/512, merge=0/0, ticks=1684/142, in_queue=1826, util=98.60% 00:13:40.735 12:41:39 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:40.735 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:13:40.735 12:41:39 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:40.735 12:41:39 -- common/autotest_common.sh@1205 -- # local i=0 00:13:40.735 12:41:39 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:13:40.735 12:41:39 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:40.735 12:41:39 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:13:40.735 12:41:39 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:40.735 12:41:39 -- common/autotest_common.sh@1217 -- # return 0 00:13:40.735 12:41:39 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:13:40.735 12:41:39 -- target/nmic.sh@53 -- # nvmftestfini 00:13:40.735 12:41:39 -- nvmf/common.sh@477 -- # nvmfcleanup 00:13:40.735 12:41:39 -- nvmf/common.sh@117 -- # sync 00:13:40.735 12:41:39 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:40.735 12:41:39 -- nvmf/common.sh@120 -- # set +e 00:13:40.735 12:41:39 -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:40.735 12:41:39 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:40.735 rmmod nvme_tcp 00:13:40.735 rmmod nvme_fabrics 00:13:40.735 rmmod nvme_keyring 00:13:40.735 12:41:39 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:40.735 12:41:39 -- nvmf/common.sh@124 -- # set -e 00:13:40.735 
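The nvmftestfini sequence traced here mirrors the zcopy teardown earlier: disconnect the host, unload the host-side NVMe modules, kill the target, and flush the initiator address. A hedged manual equivalent of the traced steps:

    # Manual equivalent of the traced cleanup (pid 1166071 in this run):
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    modprobe -v -r nvme-tcp
    kill 1166071                 # nvmf_tgt pid from the trace above
    ip -4 addr flush cvl_0_1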
12:41:39 -- nvmf/common.sh@125 -- # return 0 00:13:40.735 12:41:39 -- nvmf/common.sh@478 -- # '[' -n 1166071 ']' 00:13:40.735 12:41:39 -- nvmf/common.sh@479 -- # killprocess 1166071 00:13:40.735 12:41:39 -- common/autotest_common.sh@936 -- # '[' -z 1166071 ']' 00:13:40.735 12:41:39 -- common/autotest_common.sh@940 -- # kill -0 1166071 00:13:40.735 12:41:39 -- common/autotest_common.sh@941 -- # uname 00:13:40.735 12:41:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:40.735 12:41:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1166071 00:13:40.736 12:41:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:40.736 12:41:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:40.736 12:41:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1166071' 00:13:40.736 killing process with pid 1166071 00:13:40.736 12:41:39 -- common/autotest_common.sh@955 -- # kill 1166071 00:13:40.736 12:41:39 -- common/autotest_common.sh@960 -- # wait 1166071 00:13:40.993 12:41:39 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:13:40.993 12:41:39 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:13:40.993 12:41:39 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:13:40.993 12:41:39 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:40.993 12:41:39 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:40.993 12:41:39 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:40.993 12:41:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:40.993 12:41:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:42.917 12:41:41 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:42.917 00:13:42.917 real 0m11.107s 00:13:42.917 user 0m25.545s 00:13:42.917 sys 0m2.596s 00:13:42.917 12:41:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:42.917 12:41:41 -- common/autotest_common.sh@10 -- # set +x 00:13:42.917 ************************************ 00:13:42.917 END TEST nvmf_nmic 00:13:42.917 ************************************ 00:13:43.185 12:41:41 -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:13:43.185 12:41:41 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:43.185 12:41:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:43.185 12:41:41 -- common/autotest_common.sh@10 -- # set +x 00:13:43.185 ************************************ 00:13:43.185 START TEST nvmf_fio_target 00:13:43.185 ************************************ 00:13:43.185 12:41:42 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:13:43.185 * Looking for test storage... 
00:13:43.185 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:43.185 12:41:42 -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:43.185 12:41:42 -- nvmf/common.sh@7 -- # uname -s 00:13:43.185 12:41:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:43.185 12:41:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:43.185 12:41:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:43.185 12:41:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:43.185 12:41:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:43.185 12:41:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:43.185 12:41:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:43.185 12:41:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:43.185 12:41:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:43.185 12:41:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:43.185 12:41:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:13:43.185 12:41:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:13:43.185 12:41:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:43.185 12:41:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:43.185 12:41:42 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:43.185 12:41:42 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:43.185 12:41:42 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:43.185 12:41:42 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:43.185 12:41:42 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:43.185 12:41:42 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:43.185 12:41:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.185 12:41:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.185 12:41:42 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.185 12:41:42 -- paths/export.sh@5 -- # export PATH 00:13:43.185 12:41:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.185 12:41:42 -- nvmf/common.sh@47 -- # : 0 00:13:43.185 12:41:42 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:43.185 12:41:42 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:43.185 12:41:42 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:43.185 12:41:42 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:43.185 12:41:42 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:43.185 12:41:42 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:43.185 12:41:42 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:43.185 12:41:42 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:43.185 12:41:42 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:43.185 12:41:42 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:43.185 12:41:42 -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:43.185 12:41:42 -- target/fio.sh@16 -- # nvmftestinit 00:13:43.185 12:41:42 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:13:43.185 12:41:42 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:43.185 12:41:42 -- nvmf/common.sh@437 -- # prepare_net_devs 00:13:43.185 12:41:42 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:13:43.185 12:41:42 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:13:43.185 12:41:42 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:43.185 12:41:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:43.185 12:41:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:43.185 12:41:42 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:13:43.185 12:41:42 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:13:43.185 12:41:42 -- nvmf/common.sh@285 -- # xtrace_disable 00:13:43.185 12:41:42 -- common/autotest_common.sh@10 -- # set +x 00:13:45.718 12:41:44 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:45.718 12:41:44 -- nvmf/common.sh@291 -- # pci_devs=() 00:13:45.718 12:41:44 -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:45.718 12:41:44 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:45.718 12:41:44 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:45.718 12:41:44 -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:45.718 12:41:44 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:45.718 12:41:44 -- nvmf/common.sh@295 -- # net_devs=() 
00:13:45.718 12:41:44 -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:45.718 12:41:44 -- nvmf/common.sh@296 -- # e810=() 00:13:45.718 12:41:44 -- nvmf/common.sh@296 -- # local -ga e810 00:13:45.718 12:41:44 -- nvmf/common.sh@297 -- # x722=() 00:13:45.718 12:41:44 -- nvmf/common.sh@297 -- # local -ga x722 00:13:45.718 12:41:44 -- nvmf/common.sh@298 -- # mlx=() 00:13:45.718 12:41:44 -- nvmf/common.sh@298 -- # local -ga mlx 00:13:45.718 12:41:44 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:45.718 12:41:44 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:45.718 12:41:44 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:45.718 12:41:44 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:45.718 12:41:44 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:45.718 12:41:44 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:45.718 12:41:44 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:45.718 12:41:44 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:45.718 12:41:44 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:45.718 12:41:44 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:45.718 12:41:44 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:45.718 12:41:44 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:45.718 12:41:44 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:45.718 12:41:44 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:45.718 12:41:44 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:45.718 12:41:44 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:45.718 12:41:44 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:45.718 12:41:44 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:45.718 12:41:44 -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:13:45.718 Found 0000:82:00.0 (0x8086 - 0x159b) 00:13:45.718 12:41:44 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:45.718 12:41:44 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:45.718 12:41:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:45.718 12:41:44 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:45.718 12:41:44 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:45.718 12:41:44 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:45.718 12:41:44 -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:13:45.718 Found 0000:82:00.1 (0x8086 - 0x159b) 00:13:45.718 12:41:44 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:45.718 12:41:44 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:45.718 12:41:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:45.718 12:41:44 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:45.718 12:41:44 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:45.718 12:41:44 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:45.718 12:41:44 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:45.718 12:41:44 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:45.718 12:41:44 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:45.718 12:41:44 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:45.718 12:41:44 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:45.718 12:41:44 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
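fio.sh's nvmftestinit now repeats the same device discovery, after which nvmf_tcp_init rebuilds the two-namespace loopback test bed. A condensed sketch of that topology, using only commands that appear verbatim in the trace:

    # Target port cvl_0_0 moves into a private netns; initiator port cvl_0_1
    # stays in the root netns; 4420/tcp is opened for the initiator side.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT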
00:13:45.718 12:41:44 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:13:45.718 Found net devices under 0000:82:00.0: cvl_0_0 00:13:45.718 12:41:44 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:45.718 12:41:44 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:45.718 12:41:44 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:45.718 12:41:44 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:45.718 12:41:44 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:45.718 12:41:44 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:13:45.718 Found net devices under 0000:82:00.1: cvl_0_1 00:13:45.718 12:41:44 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:45.718 12:41:44 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:13:45.718 12:41:44 -- nvmf/common.sh@403 -- # is_hw=yes 00:13:45.718 12:41:44 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:13:45.718 12:41:44 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:13:45.718 12:41:44 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:13:45.718 12:41:44 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:45.718 12:41:44 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:45.718 12:41:44 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:45.718 12:41:44 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:45.718 12:41:44 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:45.718 12:41:44 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:45.718 12:41:44 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:45.718 12:41:44 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:45.718 12:41:44 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:45.718 12:41:44 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:45.718 12:41:44 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:45.718 12:41:44 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:45.718 12:41:44 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:45.718 12:41:44 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:45.718 12:41:44 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:45.718 12:41:44 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:45.718 12:41:44 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:45.718 12:41:44 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:45.718 12:41:44 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:45.718 12:41:44 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:45.718 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:45.718 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.263 ms 00:13:45.718 00:13:45.718 --- 10.0.0.2 ping statistics --- 00:13:45.718 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:45.718 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms 00:13:45.718 12:41:44 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:45.718 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:45.718 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.128 ms 00:13:45.718 00:13:45.718 --- 10.0.0.1 ping statistics --- 00:13:45.718 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:45.718 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:13:45.718 12:41:44 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:45.718 12:41:44 -- nvmf/common.sh@411 -- # return 0 00:13:45.718 12:41:44 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:13:45.718 12:41:44 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:45.718 12:41:44 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:13:45.718 12:41:44 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:13:45.719 12:41:44 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:45.719 12:41:44 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:13:45.719 12:41:44 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:13:45.719 12:41:44 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:13:45.719 12:41:44 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:13:45.719 12:41:44 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:45.719 12:41:44 -- common/autotest_common.sh@10 -- # set +x 00:13:45.719 12:41:44 -- nvmf/common.sh@470 -- # nvmfpid=1169215 00:13:45.719 12:41:44 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:45.719 12:41:44 -- nvmf/common.sh@471 -- # waitforlisten 1169215 00:13:45.719 12:41:44 -- common/autotest_common.sh@817 -- # '[' -z 1169215 ']' 00:13:45.719 12:41:44 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:45.719 12:41:44 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:45.719 12:41:44 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:45.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:45.719 12:41:44 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:45.719 12:41:44 -- common/autotest_common.sh@10 -- # set +x 00:13:45.719 [2024-04-16 12:41:44.765127] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 00:13:45.719 [2024-04-16 12:41:44.765223] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:45.978 EAL: No free 2048 kB hugepages reported on node 1 00:13:45.978 [2024-04-16 12:41:44.846583] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:45.978 [2024-04-16 12:41:44.965135] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:45.978 [2024-04-16 12:41:44.965209] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:45.978 [2024-04-16 12:41:44.965226] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:45.978 [2024-04-16 12:41:44.965239] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:45.978 [2024-04-16 12:41:44.965261] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:45.978 [2024-04-16 12:41:44.965345] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:45.978 [2024-04-16 12:41:44.965400] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:45.978 [2024-04-16 12:41:44.965451] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:45.978 [2024-04-16 12:41:44.965455] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:46.912 12:41:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:46.912 12:41:45 -- common/autotest_common.sh@850 -- # return 0 00:13:46.912 12:41:45 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:13:46.912 12:41:45 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:46.912 12:41:45 -- common/autotest_common.sh@10 -- # set +x 00:13:46.912 12:41:45 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:46.912 12:41:45 -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:46.912 [2024-04-16 12:41:45.917059] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:46.912 12:41:45 -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:47.170 12:41:46 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:13:47.170 12:41:46 -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:47.428 12:41:46 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:13:47.428 12:41:46 -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:47.686 12:41:46 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:13:47.686 12:41:46 -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:47.945 12:41:46 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:13:47.945 12:41:46 -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:13:48.203 12:41:47 -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:48.461 12:41:47 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:13:48.461 12:41:47 -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:48.720 12:41:47 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:13:48.720 12:41:47 -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:48.979 12:41:47 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:13:48.979 12:41:47 -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:13:49.237 12:41:48 -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:49.495 12:41:48 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:13:49.495 12:41:48 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:49.753 12:41:48 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:13:49.753 12:41:48 
-- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:50.012 12:41:48 -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:50.269 [2024-04-16 12:41:49.178913] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:50.269 12:41:49 -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:13:50.527 12:41:49 -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:13:50.785 12:41:49 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:51.351 12:41:50 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:13:51.351 12:41:50 -- common/autotest_common.sh@1184 -- # local i=0 00:13:51.351 12:41:50 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:13:51.351 12:41:50 -- common/autotest_common.sh@1186 -- # [[ -n 4 ]] 00:13:51.351 12:41:50 -- common/autotest_common.sh@1187 -- # nvme_device_counter=4 00:13:51.351 12:41:50 -- common/autotest_common.sh@1191 -- # sleep 2 00:13:53.880 12:41:52 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:13:53.880 12:41:52 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:13:53.880 12:41:52 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:13:53.880 12:41:52 -- common/autotest_common.sh@1193 -- # nvme_devices=4 00:13:53.880 12:41:52 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:13:53.880 12:41:52 -- common/autotest_common.sh@1194 -- # return 0 00:13:53.880 12:41:52 -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:13:53.880 [global] 00:13:53.880 thread=1 00:13:53.880 invalidate=1 00:13:53.880 rw=write 00:13:53.880 time_based=1 00:13:53.880 runtime=1 00:13:53.880 ioengine=libaio 00:13:53.880 direct=1 00:13:53.880 bs=4096 00:13:53.880 iodepth=1 00:13:53.880 norandommap=0 00:13:53.880 numjobs=1 00:13:53.880 00:13:53.880 verify_dump=1 00:13:53.880 verify_backlog=512 00:13:53.880 verify_state_save=0 00:13:53.880 do_verify=1 00:13:53.880 verify=crc32c-intel 00:13:53.880 [job0] 00:13:53.880 filename=/dev/nvme0n1 00:13:53.880 [job1] 00:13:53.880 filename=/dev/nvme0n2 00:13:53.880 [job2] 00:13:53.880 filename=/dev/nvme0n3 00:13:53.880 [job3] 00:13:53.880 filename=/dev/nvme0n4 00:13:53.880 Could not set queue depth (nvme0n1) 00:13:53.880 Could not set queue depth (nvme0n2) 00:13:53.880 Could not set queue depth (nvme0n3) 00:13:53.880 Could not set queue depth (nvme0n4) 00:13:53.880 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:53.880 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:53.880 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:53.880 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:53.880 fio-3.35 
00:13:53.880 Starting 4 threads 00:13:54.814 00:13:54.814 job0: (groupid=0, jobs=1): err= 0: pid=1170288: Tue Apr 16 12:41:53 2024 00:13:54.814 read: IOPS=1285, BW=5143KiB/s (5266kB/s)(5148KiB/1001msec) 00:13:54.814 slat (nsec): min=4768, max=66613, avg=16485.18, stdev=8980.23 00:13:54.814 clat (usec): min=243, max=3197, avg=409.91, stdev=126.27 00:13:54.814 lat (usec): min=252, max=3205, avg=426.39, stdev=131.01 00:13:54.814 clat percentiles (usec): 00:13:54.814 | 1.00th=[ 255], 5.00th=[ 273], 10.00th=[ 285], 20.00th=[ 314], 00:13:54.814 | 30.00th=[ 334], 40.00th=[ 371], 50.00th=[ 392], 60.00th=[ 437], 00:13:54.814 | 70.00th=[ 465], 80.00th=[ 506], 90.00th=[ 545], 95.00th=[ 570], 00:13:54.814 | 99.00th=[ 594], 99.50th=[ 603], 99.90th=[ 1205], 99.95th=[ 3195], 00:13:54.814 | 99.99th=[ 3195] 00:13:54.814 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:13:54.814 slat (usec): min=6, max=14122, avg=26.25, stdev=360.07 00:13:54.814 clat (usec): min=157, max=1025, avg=259.15, stdev=80.78 00:13:54.814 lat (usec): min=164, max=14388, avg=285.40, stdev=370.63 00:13:54.814 clat percentiles (usec): 00:13:54.814 | 1.00th=[ 163], 5.00th=[ 167], 10.00th=[ 176], 20.00th=[ 190], 00:13:54.814 | 30.00th=[ 202], 40.00th=[ 217], 50.00th=[ 229], 60.00th=[ 273], 00:13:54.814 | 70.00th=[ 302], 80.00th=[ 334], 90.00th=[ 367], 95.00th=[ 383], 00:13:54.814 | 99.00th=[ 474], 99.50th=[ 482], 99.90th=[ 816], 99.95th=[ 1029], 00:13:54.814 | 99.99th=[ 1029] 00:13:54.814 bw ( KiB/s): min= 8192, max= 8192, per=68.33%, avg=8192.00, stdev= 0.00, samples=1 00:13:54.814 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:13:54.814 lat (usec) : 250=31.56%, 500=58.48%, 750=9.74%, 1000=0.07% 00:13:54.814 lat (msec) : 2=0.11%, 4=0.04% 00:13:54.814 cpu : usr=2.40%, sys=5.00%, ctx=2826, majf=0, minf=2 00:13:54.814 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:54.814 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:54.814 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:54.814 issued rwts: total=1287,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:54.814 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:54.814 job1: (groupid=0, jobs=1): err= 0: pid=1170289: Tue Apr 16 12:41:53 2024 00:13:54.814 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:13:54.814 slat (nsec): min=5062, max=66913, avg=18085.32, stdev=11905.27 00:13:54.814 clat (usec): min=247, max=41513, avg=1703.69, stdev=6872.04 00:13:54.814 lat (usec): min=256, max=41521, avg=1721.77, stdev=6872.26 00:13:54.814 clat percentiles (usec): 00:13:54.814 | 1.00th=[ 262], 5.00th=[ 273], 10.00th=[ 289], 20.00th=[ 334], 00:13:54.814 | 30.00th=[ 363], 40.00th=[ 383], 50.00th=[ 433], 60.00th=[ 594], 00:13:54.814 | 70.00th=[ 660], 80.00th=[ 676], 90.00th=[ 701], 95.00th=[ 725], 00:13:54.814 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:13:54.814 | 99.99th=[41681] 00:13:54.814 write: IOPS=514, BW=2058KiB/s (2107kB/s)(2060KiB/1001msec); 0 zone resets 00:13:54.814 slat (nsec): min=7335, max=54741, avg=11061.34, stdev=2675.81 00:13:54.814 clat (usec): min=178, max=675, avg=210.22, stdev=36.38 00:13:54.814 lat (usec): min=188, max=688, avg=221.28, stdev=37.13 00:13:54.814 clat percentiles (usec): 00:13:54.814 | 1.00th=[ 180], 5.00th=[ 184], 10.00th=[ 186], 20.00th=[ 190], 00:13:54.814 | 30.00th=[ 194], 40.00th=[ 196], 50.00th=[ 202], 60.00th=[ 206], 00:13:54.814 | 70.00th=[ 215], 80.00th=[ 223], 90.00th=[ 
235], 95.00th=[ 265], 00:13:54.814 | 99.00th=[ 367], 99.50th=[ 375], 99.90th=[ 676], 99.95th=[ 676], 00:13:54.814 | 99.99th=[ 676] 00:13:54.814 bw ( KiB/s): min= 4096, max= 4096, per=34.17%, avg=4096.00, stdev= 0.00, samples=1 00:13:54.814 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:54.814 lat (usec) : 250=47.22%, 500=30.96%, 750=19.96%, 1000=0.29% 00:13:54.814 lat (msec) : 20=0.10%, 50=1.46% 00:13:54.814 cpu : usr=0.70%, sys=2.00%, ctx=1028, majf=0, minf=1 00:13:54.814 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:54.814 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:54.814 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:54.814 issued rwts: total=512,515,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:54.814 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:54.814 job2: (groupid=0, jobs=1): err= 0: pid=1170294: Tue Apr 16 12:41:53 2024 00:13:54.814 read: IOPS=22, BW=89.7KiB/s (91.8kB/s)(92.0KiB/1026msec) 00:13:54.814 slat (nsec): min=7244, max=43049, avg=23072.13, stdev=10773.52 00:13:54.814 clat (usec): min=438, max=41144, avg=39178.42, stdev=8446.36 00:13:54.814 lat (usec): min=457, max=41178, avg=39201.49, stdev=8447.38 00:13:54.814 clat percentiles (usec): 00:13:54.814 | 1.00th=[ 441], 5.00th=[40633], 10.00th=[40633], 20.00th=[40633], 00:13:54.814 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:13:54.814 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:13:54.814 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:13:54.814 | 99.99th=[41157] 00:13:54.814 write: IOPS=499, BW=1996KiB/s (2044kB/s)(2048KiB/1026msec); 0 zone resets 00:13:54.814 slat (nsec): min=6643, max=30627, avg=8566.74, stdev=2943.92 00:13:54.814 clat (usec): min=182, max=461, avg=232.25, stdev=42.07 00:13:54.814 lat (usec): min=189, max=480, avg=240.81, stdev=43.19 00:13:54.814 clat percentiles (usec): 00:13:54.814 | 1.00th=[ 188], 5.00th=[ 196], 10.00th=[ 202], 20.00th=[ 210], 00:13:54.814 | 30.00th=[ 215], 40.00th=[ 221], 50.00th=[ 225], 60.00th=[ 227], 00:13:54.814 | 70.00th=[ 231], 80.00th=[ 239], 90.00th=[ 260], 95.00th=[ 318], 00:13:54.814 | 99.00th=[ 420], 99.50th=[ 441], 99.90th=[ 461], 99.95th=[ 461], 00:13:54.814 | 99.99th=[ 461] 00:13:54.814 bw ( KiB/s): min= 4096, max= 4096, per=34.17%, avg=4096.00, stdev= 0.00, samples=1 00:13:54.814 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:54.814 lat (usec) : 250=85.05%, 500=10.84% 00:13:54.814 lat (msec) : 50=4.11% 00:13:54.814 cpu : usr=0.00%, sys=0.59%, ctx=536, majf=0, minf=1 00:13:54.814 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:54.814 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:54.814 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:54.814 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:54.814 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:54.814 job3: (groupid=0, jobs=1): err= 0: pid=1170295: Tue Apr 16 12:41:53 2024 00:13:54.814 read: IOPS=21, BW=86.0KiB/s (88.1kB/s)(88.0KiB/1023msec) 00:13:54.814 slat (nsec): min=7719, max=46791, avg=22530.68, stdev=11793.46 00:13:54.814 clat (usec): min=40855, max=41156, avg=40984.15, stdev=60.37 00:13:54.814 lat (usec): min=40891, max=41164, avg=41006.69, stdev=55.00 00:13:54.814 clat percentiles (usec): 00:13:54.814 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 
20.00th=[41157], 00:13:54.814 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:13:54.814 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:13:54.814 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:13:54.814 | 99.99th=[41157] 00:13:54.814 write: IOPS=500, BW=2002KiB/s (2050kB/s)(2048KiB/1023msec); 0 zone resets 00:13:54.814 slat (nsec): min=7736, max=24898, avg=10332.91, stdev=1734.46 00:13:54.814 clat (usec): min=185, max=448, avg=222.55, stdev=30.62 00:13:54.814 lat (usec): min=194, max=461, avg=232.89, stdev=31.39 00:13:54.814 clat percentiles (usec): 00:13:54.814 | 1.00th=[ 190], 5.00th=[ 196], 10.00th=[ 198], 20.00th=[ 202], 00:13:54.814 | 30.00th=[ 206], 40.00th=[ 208], 50.00th=[ 215], 60.00th=[ 221], 00:13:54.814 | 70.00th=[ 227], 80.00th=[ 235], 90.00th=[ 258], 95.00th=[ 285], 00:13:54.814 | 99.00th=[ 338], 99.50th=[ 375], 99.90th=[ 449], 99.95th=[ 449], 00:13:54.814 | 99.99th=[ 449] 00:13:54.814 bw ( KiB/s): min= 4096, max= 4096, per=34.17%, avg=4096.00, stdev= 0.00, samples=1 00:13:54.814 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:54.814 lat (usec) : 250=84.27%, 500=11.61% 00:13:54.814 lat (msec) : 50=4.12% 00:13:54.814 cpu : usr=0.29%, sys=0.68%, ctx=536, majf=0, minf=1 00:13:54.814 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:54.814 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:54.814 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:54.814 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:54.814 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:54.814 00:13:54.814 Run status group 0 (all jobs): 00:13:54.814 READ: bw=7189KiB/s (7362kB/s), 86.0KiB/s-5143KiB/s (88.1kB/s-5266kB/s), io=7376KiB (7553kB), run=1001-1026msec 00:13:54.814 WRITE: bw=11.7MiB/s (12.3MB/s), 1996KiB/s-6138KiB/s (2044kB/s-6285kB/s), io=12.0MiB (12.6MB), run=1001-1026msec 00:13:54.814 00:13:54.814 Disk stats (read/write): 00:13:54.814 nvme0n1: ios=1076/1486, merge=0/0, ticks=825/358, in_queue=1183, util=97.39% 00:13:54.814 nvme0n2: ios=369/512, merge=0/0, ticks=1662/104, in_queue=1766, util=97.55% 00:13:54.814 nvme0n3: ios=75/512, merge=0/0, ticks=1020/114, in_queue=1134, util=97.79% 00:13:54.814 nvme0n4: ios=67/512, merge=0/0, ticks=947/109, in_queue=1056, util=97.88% 00:13:54.814 12:41:53 -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:13:54.814 [global] 00:13:54.814 thread=1 00:13:54.814 invalidate=1 00:13:54.814 rw=randwrite 00:13:54.814 time_based=1 00:13:54.814 runtime=1 00:13:54.814 ioengine=libaio 00:13:54.814 direct=1 00:13:54.814 bs=4096 00:13:54.814 iodepth=1 00:13:54.814 norandommap=0 00:13:54.814 numjobs=1 00:13:54.814 00:13:54.814 verify_dump=1 00:13:54.814 verify_backlog=512 00:13:54.814 verify_state_save=0 00:13:54.814 do_verify=1 00:13:54.814 verify=crc32c-intel 00:13:55.073 [job0] 00:13:55.073 filename=/dev/nvme0n1 00:13:55.073 [job1] 00:13:55.073 filename=/dev/nvme0n2 00:13:55.073 [job2] 00:13:55.073 filename=/dev/nvme0n3 00:13:55.073 [job3] 00:13:55.073 filename=/dev/nvme0n4 00:13:55.073 Could not set queue depth (nvme0n1) 00:13:55.073 Could not set queue depth (nvme0n2) 00:13:55.073 Could not set queue depth (nvme0n3) 00:13:55.073 Could not set queue depth (nvme0n4) 00:13:55.073 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, 
iodepth=1 00:13:55.073 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:55.073 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:55.073 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:55.073 fio-3.35 00:13:55.073 Starting 4 threads 00:13:56.447 00:13:56.447 job0: (groupid=0, jobs=1): err= 0: pid=1170566: Tue Apr 16 12:41:55 2024 00:13:56.447 read: IOPS=21, BW=86.8KiB/s (88.9kB/s)(88.0KiB/1014msec) 00:13:56.447 slat (nsec): min=8256, max=19012, avg=15718.59, stdev=2501.12 00:13:56.447 clat (usec): min=40416, max=41052, avg=40953.24, stdev=127.39 00:13:56.447 lat (usec): min=40424, max=41066, avg=40968.96, stdev=128.91 00:13:56.447 clat percentiles (usec): 00:13:56.447 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:13:56.447 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:13:56.447 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:13:56.447 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:13:56.447 | 99.99th=[41157] 00:13:56.447 write: IOPS=504, BW=2020KiB/s (2068kB/s)(2048KiB/1014msec); 0 zone resets 00:13:56.447 slat (nsec): min=7743, max=37166, avg=9429.23, stdev=1930.40 00:13:56.447 clat (usec): min=174, max=1020, avg=206.26, stdev=38.88 00:13:56.447 lat (usec): min=183, max=1030, avg=215.69, stdev=39.10 00:13:56.447 clat percentiles (usec): 00:13:56.447 | 1.00th=[ 180], 5.00th=[ 186], 10.00th=[ 188], 20.00th=[ 192], 00:13:56.447 | 30.00th=[ 196], 40.00th=[ 200], 50.00th=[ 204], 60.00th=[ 206], 00:13:56.447 | 70.00th=[ 210], 80.00th=[ 217], 90.00th=[ 225], 95.00th=[ 231], 00:13:56.447 | 99.00th=[ 260], 99.50th=[ 265], 99.90th=[ 1020], 99.95th=[ 1020], 00:13:56.447 | 99.99th=[ 1020] 00:13:56.447 bw ( KiB/s): min= 4096, max= 4096, per=40.80%, avg=4096.00, stdev= 0.00, samples=1 00:13:56.447 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:56.447 lat (usec) : 250=94.38%, 500=1.31% 00:13:56.447 lat (msec) : 2=0.19%, 50=4.12% 00:13:56.447 cpu : usr=0.20%, sys=0.69%, ctx=535, majf=0, minf=2 00:13:56.447 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:56.447 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:56.447 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:56.447 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:56.447 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:56.447 job1: (groupid=0, jobs=1): err= 0: pid=1170589: Tue Apr 16 12:41:55 2024 00:13:56.447 read: IOPS=22, BW=91.7KiB/s (93.9kB/s)(92.0KiB/1003msec) 00:13:56.447 slat (nsec): min=6425, max=34728, avg=21718.43, stdev=9962.94 00:13:56.447 clat (usec): min=463, max=41067, avg=38514.20, stdev=8878.73 00:13:56.447 lat (usec): min=482, max=41086, avg=38535.92, stdev=8879.81 00:13:56.447 clat percentiles (usec): 00:13:56.447 | 1.00th=[ 465], 5.00th=[25822], 10.00th=[40633], 20.00th=[40633], 00:13:56.447 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:13:56.447 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:13:56.447 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:13:56.447 | 99.99th=[41157] 00:13:56.447 write: IOPS=510, BW=2042KiB/s (2091kB/s)(2048KiB/1003msec); 0 zone resets 00:13:56.447 slat (nsec): min=6384, max=28539, 
avg=9494.63, stdev=2644.55 00:13:56.447 clat (usec): min=167, max=352, avg=215.61, stdev=30.86 00:13:56.447 lat (usec): min=173, max=364, avg=225.10, stdev=32.40 00:13:56.447 clat percentiles (usec): 00:13:56.447 | 1.00th=[ 172], 5.00th=[ 178], 10.00th=[ 184], 20.00th=[ 190], 00:13:56.447 | 30.00th=[ 196], 40.00th=[ 200], 50.00th=[ 206], 60.00th=[ 219], 00:13:56.447 | 70.00th=[ 229], 80.00th=[ 241], 90.00th=[ 262], 95.00th=[ 277], 00:13:56.447 | 99.00th=[ 293], 99.50th=[ 322], 99.90th=[ 355], 99.95th=[ 355], 00:13:56.447 | 99.99th=[ 355] 00:13:56.447 bw ( KiB/s): min= 4096, max= 4096, per=40.80%, avg=4096.00, stdev= 0.00, samples=1 00:13:56.447 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:56.447 lat (usec) : 250=80.75%, 500=15.14% 00:13:56.447 lat (msec) : 50=4.11% 00:13:56.447 cpu : usr=0.20%, sys=0.60%, ctx=537, majf=0, minf=1 00:13:56.447 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:56.447 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:56.447 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:56.447 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:56.447 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:56.447 job2: (groupid=0, jobs=1): err= 0: pid=1170627: Tue Apr 16 12:41:55 2024 00:13:56.447 read: IOPS=636, BW=2545KiB/s (2607kB/s)(2548KiB/1001msec) 00:13:56.447 slat (nsec): min=6994, max=82335, avg=26330.76, stdev=11157.67 00:13:56.447 clat (usec): min=336, max=41115, avg=1033.36, stdev=4209.69 00:13:56.447 lat (usec): min=344, max=41129, avg=1059.69, stdev=4208.82 00:13:56.447 clat percentiles (usec): 00:13:56.447 | 1.00th=[ 347], 5.00th=[ 371], 10.00th=[ 383], 20.00th=[ 408], 00:13:56.447 | 30.00th=[ 465], 40.00th=[ 498], 50.00th=[ 627], 60.00th=[ 717], 00:13:56.447 | 70.00th=[ 742], 80.00th=[ 758], 90.00th=[ 766], 95.00th=[ 783], 00:13:56.447 | 99.00th=[40633], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:13:56.447 | 99.99th=[41157] 00:13:56.447 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:13:56.447 slat (nsec): min=7617, max=69483, avg=17977.68, stdev=11985.04 00:13:56.447 clat (usec): min=175, max=500, avg=290.29, stdev=91.85 00:13:56.447 lat (usec): min=185, max=540, avg=308.27, stdev=102.43 00:13:56.447 clat percentiles (usec): 00:13:56.447 | 1.00th=[ 184], 5.00th=[ 194], 10.00th=[ 204], 20.00th=[ 215], 00:13:56.447 | 30.00th=[ 225], 40.00th=[ 235], 50.00th=[ 247], 60.00th=[ 269], 00:13:56.447 | 70.00th=[ 347], 80.00th=[ 392], 90.00th=[ 445], 95.00th=[ 461], 00:13:56.447 | 99.00th=[ 478], 99.50th=[ 482], 99.90th=[ 486], 99.95th=[ 502], 00:13:56.447 | 99.99th=[ 502] 00:13:56.447 bw ( KiB/s): min= 4096, max= 4096, per=40.80%, avg=4096.00, stdev= 0.00, samples=1 00:13:56.447 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:56.448 lat (usec) : 250=32.39%, 500=44.61%, 750=13.61%, 1000=8.97% 00:13:56.448 lat (msec) : 50=0.42% 00:13:56.448 cpu : usr=1.70%, sys=3.70%, ctx=1661, majf=0, minf=1 00:13:56.448 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:56.448 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:56.448 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:56.448 issued rwts: total=637,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:56.448 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:56.448 job3: (groupid=0, jobs=1): err= 0: pid=1170641: Tue Apr 16 12:41:55 
2024 00:13:56.448 read: IOPS=21, BW=86.3KiB/s (88.3kB/s)(88.0KiB/1020msec) 00:13:56.448 slat (nsec): min=8722, max=29986, avg=17325.41, stdev=6011.88 00:13:56.448 clat (usec): min=40915, max=41328, avg=40991.67, stdev=83.58 00:13:56.448 lat (usec): min=40930, max=41337, avg=41009.00, stdev=81.45 00:13:56.448 clat percentiles (usec): 00:13:56.448 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:13:56.448 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:13:56.448 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:13:56.448 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:13:56.448 | 99.99th=[41157] 00:13:56.448 write: IOPS=501, BW=2008KiB/s (2056kB/s)(2048KiB/1020msec); 0 zone resets 00:13:56.448 slat (nsec): min=6426, max=43817, avg=8691.58, stdev=2557.59 00:13:56.448 clat (usec): min=172, max=426, avg=217.41, stdev=23.83 00:13:56.448 lat (usec): min=180, max=435, avg=226.10, stdev=24.76 00:13:56.448 clat percentiles (usec): 00:13:56.448 | 1.00th=[ 178], 5.00th=[ 186], 10.00th=[ 190], 20.00th=[ 198], 00:13:56.448 | 30.00th=[ 204], 40.00th=[ 210], 50.00th=[ 217], 60.00th=[ 223], 00:13:56.448 | 70.00th=[ 229], 80.00th=[ 235], 90.00th=[ 243], 95.00th=[ 253], 00:13:56.448 | 99.00th=[ 281], 99.50th=[ 326], 99.90th=[ 429], 99.95th=[ 429], 00:13:56.448 | 99.99th=[ 429] 00:13:56.448 bw ( KiB/s): min= 4096, max= 4096, per=40.80%, avg=4096.00, stdev= 0.00, samples=1 00:13:56.448 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:56.448 lat (usec) : 250=90.26%, 500=5.62% 00:13:56.448 lat (msec) : 50=4.12% 00:13:56.448 cpu : usr=0.29%, sys=0.39%, ctx=537, majf=0, minf=1 00:13:56.448 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:56.448 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:56.448 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:56.448 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:56.448 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:56.448 00:13:56.448 Run status group 0 (all jobs): 00:13:56.448 READ: bw=2761KiB/s (2827kB/s), 86.3KiB/s-2545KiB/s (88.3kB/s-2607kB/s), io=2816KiB (2884kB), run=1001-1020msec 00:13:56.448 WRITE: bw=9.80MiB/s (10.3MB/s), 2008KiB/s-4092KiB/s (2056kB/s-4190kB/s), io=10.0MiB (10.5MB), run=1001-1020msec 00:13:56.448 00:13:56.448 Disk stats (read/write): 00:13:56.448 nvme0n1: ios=58/512, merge=0/0, ticks=888/102, in_queue=990, util=98.90% 00:13:56.448 nvme0n2: ios=61/512, merge=0/0, ticks=1425/106, in_queue=1531, util=96.73% 00:13:56.448 nvme0n3: ios=529/699, merge=0/0, ticks=727/194, in_queue=921, util=92.90% 00:13:56.448 nvme0n4: ios=40/512, merge=0/0, ticks=1642/110, in_queue=1752, util=95.92% 00:13:56.448 12:41:55 -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:13:56.448 [global] 00:13:56.448 thread=1 00:13:56.448 invalidate=1 00:13:56.448 rw=write 00:13:56.448 time_based=1 00:13:56.448 runtime=1 00:13:56.448 ioengine=libaio 00:13:56.448 direct=1 00:13:56.448 bs=4096 00:13:56.448 iodepth=128 00:13:56.448 norandommap=0 00:13:56.448 numjobs=1 00:13:56.448 00:13:56.448 verify_dump=1 00:13:56.448 verify_backlog=512 00:13:56.448 verify_state_save=0 00:13:56.448 do_verify=1 00:13:56.448 verify=crc32c-intel 00:13:56.448 [job0] 00:13:56.448 filename=/dev/nvme0n1 00:13:56.448 [job1] 00:13:56.448 filename=/dev/nvme0n2 00:13:56.448 [job2] 
00:13:56.448 filename=/dev/nvme0n3 00:13:56.448 [job3] 00:13:56.448 filename=/dev/nvme0n4 00:13:56.448 Could not set queue depth (nvme0n1) 00:13:56.448 Could not set queue depth (nvme0n2) 00:13:56.448 Could not set queue depth (nvme0n3) 00:13:56.448 Could not set queue depth (nvme0n4) 00:13:56.706 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:56.706 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:56.706 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:56.706 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:56.706 fio-3.35 00:13:56.706 Starting 4 threads 00:13:58.079 00:13:58.079 job0: (groupid=0, jobs=1): err= 0: pid=1170872: Tue Apr 16 12:41:56 2024 00:13:58.079 read: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec) 00:13:58.079 slat (usec): min=3, max=7196, avg=106.19, stdev=583.79 00:13:58.079 clat (usec): min=6819, max=26149, avg=13690.57, stdev=3350.68 00:13:58.079 lat (usec): min=6825, max=26163, avg=13796.75, stdev=3387.15 00:13:58.079 clat percentiles (usec): 00:13:58.079 | 1.00th=[ 8094], 5.00th=[ 9372], 10.00th=[ 9896], 20.00th=[10945], 00:13:58.079 | 30.00th=[11469], 40.00th=[11994], 50.00th=[13173], 60.00th=[14484], 00:13:58.079 | 70.00th=[15139], 80.00th=[16057], 90.00th=[18482], 95.00th=[19792], 00:13:58.079 | 99.00th=[23462], 99.50th=[25297], 99.90th=[25822], 99.95th=[25822], 00:13:58.079 | 99.99th=[26084] 00:13:58.079 write: IOPS=4150, BW=16.2MiB/s (17.0MB/s)(16.3MiB/1005msec); 0 zone resets 00:13:58.080 slat (usec): min=5, max=9856, avg=120.97, stdev=528.00 00:13:58.080 clat (usec): min=3946, max=52740, avg=16994.94, stdev=9090.10 00:13:58.080 lat (usec): min=4221, max=52827, avg=17115.91, stdev=9136.15 00:13:58.080 clat percentiles (usec): 00:13:58.080 | 1.00th=[ 6128], 5.00th=[ 9372], 10.00th=[10159], 20.00th=[11076], 00:13:58.080 | 30.00th=[11338], 40.00th=[11731], 50.00th=[12387], 60.00th=[15401], 00:13:58.080 | 70.00th=[17957], 80.00th=[23725], 90.00th=[31065], 95.00th=[34341], 00:13:58.080 | 99.00th=[52167], 99.50th=[52167], 99.90th=[52691], 99.95th=[52691], 00:13:58.080 | 99.99th=[52691] 00:13:58.080 bw ( KiB/s): min=14904, max=17864, per=28.11%, avg=16384.00, stdev=2093.04, samples=2 00:13:58.080 iops : min= 3726, max= 4466, avg=4096.00, stdev=523.26, samples=2 00:13:58.080 lat (msec) : 4=0.01%, 10=9.14%, 20=75.34%, 50=14.55%, 100=0.96% 00:13:58.080 cpu : usr=3.19%, sys=10.06%, ctx=511, majf=0, minf=17 00:13:58.080 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:13:58.080 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:58.080 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:58.080 issued rwts: total=4096,4171,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:58.080 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:58.080 job1: (groupid=0, jobs=1): err= 0: pid=1170873: Tue Apr 16 12:41:56 2024 00:13:58.080 read: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec) 00:13:58.080 slat (usec): min=2, max=11800, avg=115.15, stdev=780.82 00:13:58.080 clat (usec): min=2121, max=41010, avg=14879.54, stdev=6175.56 00:13:58.080 lat (usec): min=2124, max=41024, avg=14994.69, stdev=6241.04 00:13:58.080 clat percentiles (usec): 00:13:58.080 | 1.00th=[ 4047], 5.00th=[ 6390], 10.00th=[ 8029], 20.00th=[10159], 00:13:58.080 | 
30.00th=[10945], 40.00th=[11469], 50.00th=[13042], 60.00th=[15401], 00:13:58.080 | 70.00th=[17957], 80.00th=[20055], 90.00th=[23462], 95.00th=[27395], 00:13:58.080 | 99.00th=[32900], 99.50th=[33424], 99.90th=[33424], 99.95th=[36963], 00:13:58.080 | 99.99th=[41157] 00:13:58.080 write: IOPS=4722, BW=18.4MiB/s (19.3MB/s)(18.5MiB/1001msec); 0 zone resets 00:13:58.080 slat (usec): min=3, max=9075, avg=89.17, stdev=544.19 00:13:58.080 clat (usec): min=533, max=36830, avg=12379.95, stdev=4192.49 00:13:58.080 lat (usec): min=727, max=36835, avg=12469.12, stdev=4228.86 00:13:58.080 clat percentiles (usec): 00:13:58.080 | 1.00th=[ 2999], 5.00th=[ 6063], 10.00th=[ 7373], 20.00th=[ 9372], 00:13:58.080 | 30.00th=[10421], 40.00th=[11207], 50.00th=[11863], 60.00th=[13042], 00:13:58.080 | 70.00th=[13698], 80.00th=[16057], 90.00th=[17957], 95.00th=[20055], 00:13:58.080 | 99.00th=[23200], 99.50th=[25560], 99.90th=[25560], 99.95th=[28705], 00:13:58.080 | 99.99th=[36963] 00:13:58.080 bw ( KiB/s): min=16432, max=20480, per=31.67%, avg=18456.00, stdev=2862.37, samples=2 00:13:58.080 iops : min= 4108, max= 5120, avg=4614.00, stdev=715.59, samples=2 00:13:58.080 lat (usec) : 750=0.04%, 1000=0.17% 00:13:58.080 lat (msec) : 2=0.10%, 4=1.03%, 10=20.97%, 20=65.24%, 50=12.45% 00:13:58.080 cpu : usr=3.40%, sys=4.80%, ctx=383, majf=0, minf=9 00:13:58.080 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:13:58.080 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:58.080 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:58.080 issued rwts: total=4608,4727,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:58.080 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:58.080 job2: (groupid=0, jobs=1): err= 0: pid=1170875: Tue Apr 16 12:41:56 2024 00:13:58.080 read: IOPS=2537, BW=9.91MiB/s (10.4MB/s)(10.0MiB/1009msec) 00:13:58.080 slat (usec): min=2, max=24994, avg=183.61, stdev=1306.60 00:13:58.080 clat (usec): min=7318, max=58276, avg=24212.94, stdev=9474.80 00:13:58.080 lat (usec): min=7327, max=58282, avg=24396.54, stdev=9526.34 00:13:58.080 clat percentiles (usec): 00:13:58.080 | 1.00th=[10814], 5.00th=[14222], 10.00th=[15139], 20.00th=[15533], 00:13:58.080 | 30.00th=[16909], 40.00th=[21103], 50.00th=[22938], 60.00th=[23462], 00:13:58.080 | 70.00th=[26084], 80.00th=[33817], 90.00th=[36439], 95.00th=[43779], 00:13:58.080 | 99.00th=[57934], 99.50th=[57934], 99.90th=[58459], 99.95th=[58459], 00:13:58.080 | 99.99th=[58459] 00:13:58.080 write: IOPS=2845, BW=11.1MiB/s (11.7MB/s)(11.2MiB/1009msec); 0 zone resets 00:13:58.080 slat (usec): min=4, max=27357, avg=176.39, stdev=1153.06 00:13:58.080 clat (usec): min=8544, max=69613, avg=22780.24, stdev=12095.43 00:13:58.080 lat (usec): min=9956, max=69625, avg=22956.63, stdev=12168.38 00:13:58.080 clat percentiles (usec): 00:13:58.080 | 1.00th=[10421], 5.00th=[12518], 10.00th=[13173], 20.00th=[14091], 00:13:58.080 | 30.00th=[15270], 40.00th=[16319], 50.00th=[17957], 60.00th=[22938], 00:13:58.080 | 70.00th=[24511], 80.00th=[25297], 90.00th=[40633], 95.00th=[53740], 00:13:58.080 | 99.00th=[69731], 99.50th=[69731], 99.90th=[69731], 99.95th=[69731], 00:13:58.080 | 99.99th=[69731] 00:13:58.080 bw ( KiB/s): min= 9472, max=12480, per=18.83%, avg=10976.00, stdev=2126.98, samples=2 00:13:58.080 iops : min= 2368, max= 3120, avg=2744.00, stdev=531.74, samples=2 00:13:58.080 lat (msec) : 10=0.50%, 20=45.87%, 50=50.12%, 100=3.52% 00:13:58.080 cpu : usr=2.88%, sys=4.66%, ctx=220, majf=0, minf=11 00:13:58.080 IO 
depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:13:58.080 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:58.080 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:58.080 issued rwts: total=2560,2871,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:58.080 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:58.080 job3: (groupid=0, jobs=1): err= 0: pid=1170876: Tue Apr 16 12:41:56 2024 00:13:58.080 read: IOPS=2524, BW=9.86MiB/s (10.3MB/s)(10.0MiB/1014msec) 00:13:58.080 slat (usec): min=3, max=16689, avg=156.58, stdev=998.08 00:13:58.080 clat (usec): min=6962, max=55152, avg=17870.83, stdev=7168.89 00:13:58.080 lat (usec): min=6977, max=55160, avg=18027.41, stdev=7245.14 00:13:58.080 clat percentiles (usec): 00:13:58.080 | 1.00th=[10945], 5.00th=[12256], 10.00th=[13042], 20.00th=[13698], 00:13:58.080 | 30.00th=[14091], 40.00th=[14484], 50.00th=[15533], 60.00th=[16909], 00:13:58.080 | 70.00th=[17433], 80.00th=[18744], 90.00th=[27657], 95.00th=[33817], 00:13:58.080 | 99.00th=[47973], 99.50th=[54264], 99.90th=[55313], 99.95th=[55313], 00:13:58.080 | 99.99th=[55313] 00:13:58.080 write: IOPS=2964, BW=11.6MiB/s (12.1MB/s)(11.7MiB/1014msec); 0 zone resets 00:13:58.080 slat (usec): min=4, max=20699, avg=192.36, stdev=904.49 00:13:58.080 clat (usec): min=4369, max=64408, avg=26657.71, stdev=12882.46 00:13:58.080 lat (usec): min=4377, max=64417, avg=26850.07, stdev=12973.41 00:13:58.080 clat percentiles (usec): 00:13:58.080 | 1.00th=[ 6718], 5.00th=[11076], 10.00th=[13173], 20.00th=[15139], 00:13:58.080 | 30.00th=[20317], 40.00th=[23462], 50.00th=[24511], 60.00th=[25035], 00:13:58.080 | 70.00th=[27132], 80.00th=[33817], 90.00th=[51119], 95.00th=[54264], 00:13:58.080 | 99.00th=[58983], 99.50th=[59507], 99.90th=[64226], 99.95th=[64226], 00:13:58.080 | 99.99th=[64226] 00:13:58.080 bw ( KiB/s): min=10368, max=12681, per=19.77%, avg=11524.50, stdev=1635.54, samples=2 00:13:58.080 iops : min= 2592, max= 3170, avg=2881.00, stdev=408.71, samples=2 00:13:58.080 lat (msec) : 10=2.93%, 20=50.41%, 50=40.05%, 100=6.61% 00:13:58.080 cpu : usr=2.76%, sys=4.84%, ctx=370, majf=0, minf=13 00:13:58.080 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:13:58.080 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:58.080 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:58.080 issued rwts: total=2560,3006,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:58.080 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:58.080 00:13:58.080 Run status group 0 (all jobs): 00:13:58.080 READ: bw=53.3MiB/s (55.8MB/s), 9.86MiB/s-18.0MiB/s (10.3MB/s-18.9MB/s), io=54.0MiB (56.6MB), run=1001-1014msec 00:13:58.080 WRITE: bw=56.9MiB/s (59.7MB/s), 11.1MiB/s-18.4MiB/s (11.7MB/s-19.3MB/s), io=57.7MiB (60.5MB), run=1001-1014msec 00:13:58.080 00:13:58.080 Disk stats (read/write): 00:13:58.080 nvme0n1: ios=3094/3527, merge=0/0, ticks=22298/29728, in_queue=52026, util=85.37% 00:13:58.080 nvme0n2: ios=3759/4096, merge=0/0, ticks=29823/30459, in_queue=60282, util=90.96% 00:13:58.080 nvme0n3: ios=2105/2519, merge=0/0, ticks=20180/19890, in_queue=40070, util=93.53% 00:13:58.080 nvme0n4: ios=2108/2528, merge=0/0, ticks=35046/66835, in_queue=101881, util=96.21% 00:13:58.080 12:41:56 -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:13:58.080 [global] 00:13:58.080 thread=1 00:13:58.080 
invalidate=1 00:13:58.080 rw=randwrite 00:13:58.080 time_based=1 00:13:58.080 runtime=1 00:13:58.080 ioengine=libaio 00:13:58.080 direct=1 00:13:58.080 bs=4096 00:13:58.080 iodepth=128 00:13:58.080 norandommap=0 00:13:58.080 numjobs=1 00:13:58.080 00:13:58.080 verify_dump=1 00:13:58.080 verify_backlog=512 00:13:58.080 verify_state_save=0 00:13:58.080 do_verify=1 00:13:58.080 verify=crc32c-intel 00:13:58.080 [job0] 00:13:58.080 filename=/dev/nvme0n1 00:13:58.080 [job1] 00:13:58.080 filename=/dev/nvme0n2 00:13:58.080 [job2] 00:13:58.080 filename=/dev/nvme0n3 00:13:58.080 [job3] 00:13:58.080 filename=/dev/nvme0n4 00:13:58.080 Could not set queue depth (nvme0n1) 00:13:58.080 Could not set queue depth (nvme0n2) 00:13:58.080 Could not set queue depth (nvme0n3) 00:13:58.080 Could not set queue depth (nvme0n4) 00:13:58.080 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:58.080 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:58.080 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:58.080 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:58.080 fio-3.35 00:13:58.080 Starting 4 threads 00:13:59.452 00:13:59.452 job0: (groupid=0, jobs=1): err= 0: pid=1171106: Tue Apr 16 12:41:58 2024 00:13:59.452 read: IOPS=3573, BW=14.0MiB/s (14.6MB/s)(14.0MiB/1003msec) 00:13:59.452 slat (usec): min=2, max=41849, avg=147.23, stdev=1401.09 00:13:59.452 clat (usec): min=3484, max=80623, avg=20462.92, stdev=15502.25 00:13:59.452 lat (usec): min=3502, max=82714, avg=20610.15, stdev=15600.11 00:13:59.452 clat percentiles (usec): 00:13:59.452 | 1.00th=[ 6194], 5.00th=[ 7439], 10.00th=[ 7963], 20.00th=[10290], 00:13:59.452 | 30.00th=[11207], 40.00th=[11469], 50.00th=[11994], 60.00th=[13960], 00:13:59.452 | 70.00th=[21890], 80.00th=[36439], 90.00th=[46400], 95.00th=[53216], 00:13:59.452 | 99.00th=[67634], 99.50th=[72877], 99.90th=[77071], 99.95th=[77071], 00:13:59.452 | 99.99th=[80217] 00:13:59.452 write: IOPS=3965, BW=15.5MiB/s (16.2MB/s)(15.5MiB/1003msec); 0 zone resets 00:13:59.452 slat (usec): min=3, max=20734, avg=102.01, stdev=883.05 00:13:59.452 clat (usec): min=833, max=59712, avg=13535.89, stdev=6799.21 00:13:59.452 lat (usec): min=840, max=59844, avg=13637.90, stdev=6852.99 00:13:59.452 clat percentiles (usec): 00:13:59.452 | 1.00th=[ 3687], 5.00th=[ 6521], 10.00th=[ 7046], 20.00th=[ 9110], 00:13:59.452 | 30.00th=[10683], 40.00th=[11600], 50.00th=[11994], 60.00th=[13173], 00:13:59.452 | 70.00th=[14746], 80.00th=[16188], 90.00th=[18482], 95.00th=[22938], 00:13:59.452 | 99.00th=[45876], 99.50th=[51643], 99.90th=[51643], 99.95th=[51643], 00:13:59.452 | 99.99th=[59507] 00:13:59.452 bw ( KiB/s): min=13397, max=17848, per=26.11%, avg=15622.50, stdev=3147.33, samples=2 00:13:59.452 iops : min= 3349, max= 4466, avg=3907.50, stdev=789.84, samples=2 00:13:59.452 lat (usec) : 1000=0.03% 00:13:59.452 lat (msec) : 2=0.11%, 4=0.69%, 10=20.61%, 20=57.49%, 50=17.83% 00:13:59.452 lat (msec) : 100=3.25% 00:13:59.452 cpu : usr=2.30%, sys=5.49%, ctx=224, majf=0, minf=15 00:13:59.452 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:13:59.452 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:59.452 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:59.452 issued rwts: total=3584,3977,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:13:59.452 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:59.452 job1: (groupid=0, jobs=1): err= 0: pid=1171107: Tue Apr 16 12:41:58 2024 00:13:59.452 read: IOPS=3053, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1006msec) 00:13:59.452 slat (usec): min=2, max=28068, avg=179.29, stdev=1401.06 00:13:59.452 clat (usec): min=5631, max=81206, avg=22542.23, stdev=16600.27 00:13:59.452 lat (usec): min=5638, max=81212, avg=22721.52, stdev=16724.32 00:13:59.452 clat percentiles (usec): 00:13:59.452 | 1.00th=[ 5866], 5.00th=[ 8586], 10.00th=[ 9765], 20.00th=[10814], 00:13:59.452 | 30.00th=[11207], 40.00th=[12387], 50.00th=[16909], 60.00th=[20317], 00:13:59.452 | 70.00th=[22938], 80.00th=[31589], 90.00th=[50594], 95.00th=[57410], 00:13:59.452 | 99.00th=[78119], 99.50th=[79168], 99.90th=[79168], 99.95th=[79168], 00:13:59.452 | 99.99th=[81265] 00:13:59.452 write: IOPS=3428, BW=13.4MiB/s (14.0MB/s)(13.5MiB/1006msec); 0 zone resets 00:13:59.452 slat (usec): min=3, max=13810, avg=123.84, stdev=835.29 00:13:59.452 clat (usec): min=4685, max=76985, avg=16843.67, stdev=10690.41 00:13:59.452 lat (usec): min=5898, max=76993, avg=16967.51, stdev=10767.43 00:13:59.452 clat percentiles (usec): 00:13:59.452 | 1.00th=[ 6783], 5.00th=[ 8160], 10.00th=[ 9241], 20.00th=[10290], 00:13:59.452 | 30.00th=[10683], 40.00th=[11338], 50.00th=[12518], 60.00th=[14091], 00:13:59.452 | 70.00th=[17171], 80.00th=[22152], 90.00th=[27657], 95.00th=[39060], 00:13:59.452 | 99.00th=[55313], 99.50th=[61604], 99.90th=[72877], 99.95th=[77071], 00:13:59.452 | 99.99th=[77071] 00:13:59.452 bw ( KiB/s): min=12239, max=14288, per=22.17%, avg=13263.50, stdev=1448.86, samples=2 00:13:59.452 iops : min= 3059, max= 3572, avg=3315.50, stdev=362.75, samples=2 00:13:59.452 lat (msec) : 10=13.14%, 20=53.80%, 50=25.56%, 100=7.50% 00:13:59.452 cpu : usr=1.99%, sys=4.78%, ctx=244, majf=0, minf=9 00:13:59.452 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:13:59.452 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:59.452 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:59.452 issued rwts: total=3072,3449,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:59.452 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:59.452 job2: (groupid=0, jobs=1): err= 0: pid=1171108: Tue Apr 16 12:41:58 2024 00:13:59.452 read: IOPS=3987, BW=15.6MiB/s (16.3MB/s)(15.6MiB/1002msec) 00:13:59.452 slat (usec): min=2, max=35768, avg=122.90, stdev=940.61 00:13:59.452 clat (usec): min=931, max=70275, avg=16506.24, stdev=8973.13 00:13:59.452 lat (usec): min=3084, max=78107, avg=16629.15, stdev=9021.14 00:13:59.453 clat percentiles (usec): 00:13:59.453 | 1.00th=[ 4883], 5.00th=[ 8586], 10.00th=[10290], 20.00th=[11731], 00:13:59.453 | 30.00th=[12518], 40.00th=[13173], 50.00th=[14484], 60.00th=[15139], 00:13:59.453 | 70.00th=[16188], 80.00th=[18482], 90.00th=[26870], 95.00th=[34866], 00:13:59.453 | 99.00th=[58459], 99.50th=[65799], 99.90th=[69731], 99.95th=[69731], 00:13:59.453 | 99.99th=[70779] 00:13:59.453 write: IOPS=4087, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1002msec); 0 zone resets 00:13:59.453 slat (usec): min=3, max=25661, avg=98.75, stdev=713.04 00:13:59.453 clat (usec): min=596, max=45696, avg=14965.93, stdev=6798.34 00:13:59.453 lat (usec): min=612, max=45709, avg=15064.68, stdev=6826.43 00:13:59.453 clat percentiles (usec): 00:13:59.453 | 1.00th=[ 2278], 5.00th=[ 8848], 10.00th=[10552], 20.00th=[11731], 00:13:59.453 | 30.00th=[12387], 40.00th=[12911], 
50.00th=[13698], 60.00th=[14353], 00:13:59.453 | 70.00th=[14746], 80.00th=[15664], 90.00th=[21627], 95.00th=[31065], 00:13:59.453 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:13:59.453 | 99.99th=[45876] 00:13:59.453 bw ( KiB/s): min=18483, max=18483, per=30.89%, avg=18483.00, stdev= 0.00, samples=1 00:13:59.453 iops : min= 4620, max= 4620, avg=4620.00, stdev= 0.00, samples=1 00:13:59.453 lat (usec) : 750=0.04%, 1000=0.10% 00:13:59.453 lat (msec) : 2=0.20%, 4=0.87%, 10=6.61%, 20=78.37%, 50=13.24% 00:13:59.453 lat (msec) : 100=0.58% 00:13:59.453 cpu : usr=2.70%, sys=5.89%, ctx=406, majf=0, minf=13 00:13:59.453 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:13:59.453 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:59.453 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:59.453 issued rwts: total=3995,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:59.453 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:59.453 job3: (groupid=0, jobs=1): err= 0: pid=1171109: Tue Apr 16 12:41:58 2024 00:13:59.453 read: IOPS=3142, BW=12.3MiB/s (12.9MB/s)(12.4MiB/1010msec) 00:13:59.453 slat (usec): min=2, max=13101, avg=126.35, stdev=947.46 00:13:59.453 clat (usec): min=3419, max=46115, avg=17355.85, stdev=5846.40 00:13:59.453 lat (usec): min=3428, max=46126, avg=17482.20, stdev=5907.88 00:13:59.453 clat percentiles (usec): 00:13:59.453 | 1.00th=[ 4424], 5.00th=[10290], 10.00th=[12518], 20.00th=[13960], 00:13:59.453 | 30.00th=[14746], 40.00th=[15926], 50.00th=[16581], 60.00th=[17171], 00:13:59.453 | 70.00th=[18220], 80.00th=[19792], 90.00th=[23725], 95.00th=[29754], 00:13:59.453 | 99.00th=[38536], 99.50th=[38536], 99.90th=[45876], 99.95th=[45876], 00:13:59.453 | 99.99th=[45876] 00:13:59.453 write: IOPS=3548, BW=13.9MiB/s (14.5MB/s)(14.0MiB/1010msec); 0 zone resets 00:13:59.453 slat (usec): min=3, max=30719, avg=121.94, stdev=958.63 00:13:59.453 clat (usec): min=1238, max=109598, avg=20371.12, stdev=12924.24 00:13:59.453 lat (usec): min=1248, max=109605, avg=20493.07, stdev=12985.31 00:13:59.453 clat percentiles (msec): 00:13:59.453 | 1.00th=[ 5], 5.00th=[ 8], 10.00th=[ 10], 20.00th=[ 11], 00:13:59.453 | 30.00th=[ 12], 40.00th=[ 14], 50.00th=[ 16], 60.00th=[ 21], 00:13:59.453 | 70.00th=[ 27], 80.00th=[ 30], 90.00th=[ 35], 95.00th=[ 42], 00:13:59.453 | 99.00th=[ 74], 99.50th=[ 90], 99.90th=[ 110], 99.95th=[ 110], 00:13:59.453 | 99.99th=[ 110] 00:13:59.453 bw ( KiB/s): min=12088, max=16318, per=23.74%, avg=14203.00, stdev=2991.06, samples=2 00:13:59.453 iops : min= 3022, max= 4079, avg=3550.50, stdev=747.41, samples=2 00:13:59.453 lat (msec) : 2=0.06%, 4=0.65%, 10=8.88%, 20=59.46%, 50=29.82% 00:13:59.453 lat (msec) : 100=1.05%, 250=0.09% 00:13:59.453 cpu : usr=3.67%, sys=5.05%, ctx=270, majf=0, minf=17 00:13:59.453 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:13:59.453 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:59.453 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:59.453 issued rwts: total=3174,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:59.453 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:59.453 00:13:59.453 Run status group 0 (all jobs): 00:13:59.453 READ: bw=53.5MiB/s (56.1MB/s), 11.9MiB/s-15.6MiB/s (12.5MB/s-16.3MB/s), io=54.0MiB (56.6MB), run=1002-1010msec 00:13:59.453 WRITE: bw=58.4MiB/s (61.3MB/s), 13.4MiB/s-16.0MiB/s (14.0MB/s-16.7MB/s), io=59.0MiB (61.9MB), 
run=1002-1010msec 00:13:59.453 00:13:59.453 Disk stats (read/write): 00:13:59.453 nvme0n1: ios=2859/3072, merge=0/0, ticks=33567/26592, in_queue=60159, util=78.76% 00:13:59.453 nvme0n2: ios=2539/2560, merge=0/0, ticks=24067/17879, in_queue=41946, util=85.83% 00:13:59.453 nvme0n3: ios=3129/3560, merge=0/0, ticks=27008/23546, in_queue=50554, util=91.36% 00:13:59.453 nvme0n4: ios=2623/3072, merge=0/0, ticks=41676/52594, in_queue=94270, util=99.67% 00:13:59.453 12:41:58 -- target/fio.sh@55 -- # sync 00:13:59.453 12:41:58 -- target/fio.sh@59 -- # fio_pid=1171247 00:13:59.453 12:41:58 -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:13:59.453 12:41:58 -- target/fio.sh@61 -- # sleep 3 00:13:59.453 [global] 00:13:59.453 thread=1 00:13:59.453 invalidate=1 00:13:59.453 rw=read 00:13:59.453 time_based=1 00:13:59.453 runtime=10 00:13:59.453 ioengine=libaio 00:13:59.453 direct=1 00:13:59.453 bs=4096 00:13:59.453 iodepth=1 00:13:59.453 norandommap=1 00:13:59.453 numjobs=1 00:13:59.453 00:13:59.453 [job0] 00:13:59.453 filename=/dev/nvme0n1 00:13:59.453 [job1] 00:13:59.453 filename=/dev/nvme0n2 00:13:59.453 [job2] 00:13:59.453 filename=/dev/nvme0n3 00:13:59.453 [job3] 00:13:59.453 filename=/dev/nvme0n4 00:13:59.453 Could not set queue depth (nvme0n1) 00:13:59.453 Could not set queue depth (nvme0n2) 00:13:59.453 Could not set queue depth (nvme0n3) 00:13:59.453 Could not set queue depth (nvme0n4) 00:13:59.714 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:59.714 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:59.714 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:59.714 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:59.714 fio-3.35 00:13:59.714 Starting 4 threads 00:14:02.239 12:42:01 -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:14:02.802 12:42:01 -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:14:02.802 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=3551232, buflen=4096 00:14:02.802 fio: pid=1171338, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:14:02.802 12:42:01 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:02.802 12:42:01 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:14:02.802 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=11603968, buflen=4096 00:14:02.802 fio: pid=1171337, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:14:03.059 12:42:02 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:03.059 12:42:02 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:14:03.359 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=4726784, buflen=4096 00:14:03.359 fio: pid=1171335, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:14:03.617 12:42:02 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:03.617 12:42:02 -- target/fio.sh@66 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:14:03.617 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=2895872, buflen=4096 00:14:03.617 fio: pid=1171336, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:14:03.617 00:14:03.617 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1171335: Tue Apr 16 12:42:02 2024 00:14:03.617 read: IOPS=334, BW=1335KiB/s (1367kB/s)(4616KiB/3457msec) 00:14:03.617 slat (usec): min=5, max=6874, avg=21.05, stdev=246.31 00:14:03.617 clat (usec): min=238, max=41512, avg=2951.93, stdev=10023.99 00:14:03.617 lat (usec): min=244, max=47981, avg=2972.98, stdev=10067.06 00:14:03.617 clat percentiles (usec): 00:14:03.617 | 1.00th=[ 247], 5.00th=[ 253], 10.00th=[ 262], 20.00th=[ 277], 00:14:03.617 | 30.00th=[ 285], 40.00th=[ 293], 50.00th=[ 302], 60.00th=[ 314], 00:14:03.617 | 70.00th=[ 322], 80.00th=[ 338], 90.00th=[ 429], 95.00th=[41157], 00:14:03.617 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41681], 00:14:03.617 | 99.99th=[41681] 00:14:03.617 bw ( KiB/s): min= 96, max= 3992, per=25.46%, avg=1524.00, stdev=1441.78, samples=6 00:14:03.617 iops : min= 24, max= 998, avg=381.00, stdev=360.45, samples=6 00:14:03.617 lat (usec) : 250=3.12%, 500=87.97%, 750=2.16%, 1000=0.09% 00:14:03.617 lat (msec) : 2=0.09%, 50=6.49% 00:14:03.617 cpu : usr=0.03%, sys=0.52%, ctx=1160, majf=0, minf=1 00:14:03.617 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:03.617 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:03.617 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:03.617 issued rwts: total=1155,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:03.617 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:03.617 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1171336: Tue Apr 16 12:42:02 2024 00:14:03.617 read: IOPS=190, BW=761KiB/s (779kB/s)(2828KiB/3716msec) 00:14:03.617 slat (usec): min=4, max=19828, avg=54.91, stdev=830.88 00:14:03.617 clat (usec): min=234, max=45710, avg=5164.83, stdev=13197.57 00:14:03.617 lat (usec): min=239, max=60988, avg=5219.76, stdev=13338.71 00:14:03.617 clat percentiles (usec): 00:14:03.617 | 1.00th=[ 239], 5.00th=[ 247], 10.00th=[ 255], 20.00th=[ 265], 00:14:03.617 | 30.00th=[ 277], 40.00th=[ 293], 50.00th=[ 310], 60.00th=[ 330], 00:14:03.617 | 70.00th=[ 379], 80.00th=[ 457], 90.00th=[41157], 95.00th=[41157], 00:14:03.617 | 99.00th=[41157], 99.50th=[41157], 99.90th=[45876], 99.95th=[45876], 00:14:03.617 | 99.99th=[45876] 00:14:03.617 bw ( KiB/s): min= 96, max= 2840, per=13.38%, avg=801.71, stdev=1190.80, samples=7 00:14:03.617 iops : min= 24, max= 710, avg=200.43, stdev=297.70, samples=7 00:14:03.617 lat (usec) : 250=6.64%, 500=80.08%, 750=1.27% 00:14:03.617 lat (msec) : 50=11.86% 00:14:03.617 cpu : usr=0.00%, sys=0.40%, ctx=710, majf=0, minf=1 00:14:03.617 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:03.617 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:03.617 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:03.617 issued rwts: total=708,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:03.617 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:03.617 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): 
pid=1171337: Tue Apr 16 12:42:02 2024 00:14:03.617 read: IOPS=893, BW=3574KiB/s (3659kB/s)(11.1MiB/3171msec) 00:14:03.617 slat (usec): min=4, max=8420, avg=18.10, stdev=202.79 00:14:03.617 clat (usec): min=243, max=42014, avg=1090.00, stdev=5410.23 00:14:03.617 lat (usec): min=249, max=42021, avg=1108.10, stdev=5414.31 00:14:03.617 clat percentiles (usec): 00:14:03.617 | 1.00th=[ 249], 5.00th=[ 260], 10.00th=[ 273], 20.00th=[ 293], 00:14:03.617 | 30.00th=[ 306], 40.00th=[ 314], 50.00th=[ 322], 60.00th=[ 334], 00:14:03.617 | 70.00th=[ 351], 80.00th=[ 383], 90.00th=[ 562], 95.00th=[ 685], 00:14:03.617 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41681], 00:14:03.617 | 99.99th=[42206] 00:14:03.617 bw ( KiB/s): min= 96, max= 8008, per=52.49%, avg=3142.67, stdev=3103.38, samples=6 00:14:03.617 iops : min= 24, max= 2002, avg=785.67, stdev=775.84, samples=6 00:14:03.617 lat (usec) : 250=1.24%, 500=87.19%, 750=9.10%, 1000=0.39% 00:14:03.617 lat (msec) : 2=0.21%, 4=0.04%, 50=1.80% 00:14:03.617 cpu : usr=0.35%, sys=1.42%, ctx=2837, majf=0, minf=1 00:14:03.617 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:03.617 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:03.617 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:03.617 issued rwts: total=2834,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:03.617 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:03.617 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1171338: Tue Apr 16 12:42:02 2024 00:14:03.617 read: IOPS=296, BW=1184KiB/s (1212kB/s)(3468KiB/2930msec) 00:14:03.617 slat (nsec): min=4771, max=60424, avg=14650.36, stdev=9424.01 00:14:03.617 clat (usec): min=249, max=41516, avg=3334.55, stdev=10553.47 00:14:03.617 lat (usec): min=256, max=41527, avg=3349.20, stdev=10554.00 00:14:03.617 clat percentiles (usec): 00:14:03.617 | 1.00th=[ 258], 5.00th=[ 265], 10.00th=[ 277], 20.00th=[ 297], 00:14:03.617 | 30.00th=[ 310], 40.00th=[ 322], 50.00th=[ 334], 60.00th=[ 359], 00:14:03.617 | 70.00th=[ 383], 80.00th=[ 594], 90.00th=[ 701], 95.00th=[41157], 00:14:03.617 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:14:03.617 | 99.99th=[41681] 00:14:03.617 bw ( KiB/s): min= 112, max= 2360, per=22.90%, avg=1371.20, stdev=979.17, samples=5 00:14:03.617 iops : min= 28, max= 590, avg=342.80, stdev=244.79, samples=5 00:14:03.617 lat (usec) : 250=0.12%, 500=76.27%, 750=15.67%, 1000=0.58% 00:14:03.617 lat (msec) : 50=7.26% 00:14:03.617 cpu : usr=0.07%, sys=0.58%, ctx=868, majf=0, minf=1 00:14:03.617 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:03.617 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:03.617 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:03.617 issued rwts: total=868,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:03.617 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:03.617 00:14:03.617 Run status group 0 (all jobs): 00:14:03.617 READ: bw=5986KiB/s (6130kB/s), 761KiB/s-3574KiB/s (779kB/s-3659kB/s), io=21.7MiB (22.8MB), run=2930-3716msec 00:14:03.617 00:14:03.617 Disk stats (read/write): 00:14:03.617 nvme0n1: ios=1193/0, merge=0/0, ticks=4310/0, in_queue=4310, util=99.37% 00:14:03.617 nvme0n2: ios=704/0, merge=0/0, ticks=3526/0, in_queue=3526, util=95.79% 00:14:03.617 nvme0n3: ios=2679/0, merge=0/0, ticks=3037/0, in_queue=3037, util=96.35% 00:14:03.617 nvme0n4: 
ios=865/0, merge=0/0, ticks=2803/0, in_queue=2803, util=96.75% 00:14:03.618 12:42:02 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:03.618 12:42:02 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:14:03.875 12:42:02 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:03.876 12:42:02 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:14:04.134 12:42:03 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:04.134 12:42:03 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:14:04.392 12:42:03 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:04.392 12:42:03 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:14:04.649 12:42:03 -- target/fio.sh@69 -- # fio_status=0 00:14:04.649 12:42:03 -- target/fio.sh@70 -- # wait 1171247 00:14:04.649 12:42:03 -- target/fio.sh@70 -- # fio_status=4 00:14:04.650 12:42:03 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:04.908 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:04.908 12:42:03 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:04.908 12:42:03 -- common/autotest_common.sh@1205 -- # local i=0 00:14:04.908 12:42:03 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:14:04.908 12:42:03 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:04.908 12:42:03 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:14:04.908 12:42:03 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:04.908 12:42:03 -- common/autotest_common.sh@1217 -- # return 0 00:14:04.908 12:42:03 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:14:04.908 12:42:03 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:14:04.908 nvmf hotplug test: fio failed as expected 00:14:04.908 12:42:03 -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:05.166 12:42:04 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:14:05.166 12:42:04 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:14:05.166 12:42:04 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:14:05.166 12:42:04 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:14:05.166 12:42:04 -- target/fio.sh@91 -- # nvmftestfini 00:14:05.166 12:42:04 -- nvmf/common.sh@477 -- # nvmfcleanup 00:14:05.166 12:42:04 -- nvmf/common.sh@117 -- # sync 00:14:05.166 12:42:04 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:05.166 12:42:04 -- nvmf/common.sh@120 -- # set +e 00:14:05.166 12:42:04 -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:05.166 12:42:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:05.166 rmmod nvme_tcp 00:14:05.166 rmmod nvme_fabrics 00:14:05.166 rmmod nvme_keyring 00:14:05.166 12:42:04 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:05.166 12:42:04 -- nvmf/common.sh@124 -- # set -e 00:14:05.166 12:42:04 -- nvmf/common.sh@125 -- # return 0 00:14:05.166 12:42:04 -- nvmf/common.sh@478 -- # '[' -n 1169215 ']' 00:14:05.166 12:42:04 -- 
nvmf/common.sh@479 -- # killprocess 1169215 00:14:05.166 12:42:04 -- common/autotest_common.sh@936 -- # '[' -z 1169215 ']' 00:14:05.166 12:42:04 -- common/autotest_common.sh@940 -- # kill -0 1169215 00:14:05.166 12:42:04 -- common/autotest_common.sh@941 -- # uname 00:14:05.166 12:42:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:05.166 12:42:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1169215 00:14:05.166 12:42:04 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:05.166 12:42:04 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:05.166 12:42:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1169215' 00:14:05.166 killing process with pid 1169215 00:14:05.166 12:42:04 -- common/autotest_common.sh@955 -- # kill 1169215 00:14:05.166 12:42:04 -- common/autotest_common.sh@960 -- # wait 1169215 00:14:05.425 12:42:04 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:14:05.425 12:42:04 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:14:05.425 12:42:04 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:14:05.425 12:42:04 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:05.425 12:42:04 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:05.425 12:42:04 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:05.425 12:42:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:05.425 12:42:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:07.957 12:42:06 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:07.957 00:14:07.957 real 0m24.387s 00:14:07.957 user 1m24.899s 00:14:07.957 sys 0m6.313s 00:14:07.957 12:42:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:07.957 12:42:06 -- common/autotest_common.sh@10 -- # set +x 00:14:07.957 ************************************ 00:14:07.957 END TEST nvmf_fio_target 00:14:07.957 ************************************ 00:14:07.957 12:42:06 -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:14:07.957 12:42:06 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:07.957 12:42:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:07.957 12:42:06 -- common/autotest_common.sh@10 -- # set +x 00:14:07.957 ************************************ 00:14:07.957 START TEST nvmf_bdevio 00:14:07.957 ************************************ 00:14:07.957 12:42:06 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:14:07.957 * Looking for test storage... 
00:14:07.957 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:07.957 12:42:06 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:07.957 12:42:06 -- nvmf/common.sh@7 -- # uname -s 00:14:07.957 12:42:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:07.957 12:42:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:07.957 12:42:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:07.957 12:42:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:07.957 12:42:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:07.957 12:42:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:07.957 12:42:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:07.957 12:42:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:07.957 12:42:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:07.957 12:42:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:07.957 12:42:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:14:07.957 12:42:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:14:07.957 12:42:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:07.957 12:42:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:07.957 12:42:06 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:07.957 12:42:06 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:07.957 12:42:06 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:07.957 12:42:06 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:07.957 12:42:06 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:07.957 12:42:06 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:07.957 12:42:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.957 12:42:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.957 12:42:06 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.957 12:42:06 -- paths/export.sh@5 -- # export PATH 00:14:07.957 12:42:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.957 12:42:06 -- nvmf/common.sh@47 -- # : 0 00:14:07.957 12:42:06 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:07.957 12:42:06 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:07.957 12:42:06 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:07.957 12:42:06 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:07.957 12:42:06 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:07.957 12:42:06 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:07.957 12:42:06 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:07.957 12:42:06 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:07.957 12:42:06 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:07.957 12:42:06 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:07.957 12:42:06 -- target/bdevio.sh@14 -- # nvmftestinit 00:14:07.957 12:42:06 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:14:07.957 12:42:06 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:07.957 12:42:06 -- nvmf/common.sh@437 -- # prepare_net_devs 00:14:07.957 12:42:06 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:14:07.957 12:42:06 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:14:07.957 12:42:06 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:07.957 12:42:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:07.957 12:42:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:07.957 12:42:06 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:14:07.957 12:42:06 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:14:07.957 12:42:06 -- nvmf/common.sh@285 -- # xtrace_disable 00:14:07.957 12:42:06 -- common/autotest_common.sh@10 -- # set +x 00:14:10.486 12:42:09 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:10.486 12:42:09 -- nvmf/common.sh@291 -- # pci_devs=() 00:14:10.486 12:42:09 -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:10.486 12:42:09 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:10.486 12:42:09 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:10.486 12:42:09 -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:10.486 12:42:09 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:10.486 12:42:09 -- nvmf/common.sh@295 -- # net_devs=() 00:14:10.486 12:42:09 -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:10.486 12:42:09 -- nvmf/common.sh@296 
-- # e810=() 00:14:10.486 12:42:09 -- nvmf/common.sh@296 -- # local -ga e810 00:14:10.486 12:42:09 -- nvmf/common.sh@297 -- # x722=() 00:14:10.486 12:42:09 -- nvmf/common.sh@297 -- # local -ga x722 00:14:10.486 12:42:09 -- nvmf/common.sh@298 -- # mlx=() 00:14:10.486 12:42:09 -- nvmf/common.sh@298 -- # local -ga mlx 00:14:10.486 12:42:09 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:10.486 12:42:09 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:10.486 12:42:09 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:10.486 12:42:09 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:10.486 12:42:09 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:10.486 12:42:09 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:10.486 12:42:09 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:10.486 12:42:09 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:10.486 12:42:09 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:10.486 12:42:09 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:10.486 12:42:09 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:10.486 12:42:09 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:10.486 12:42:09 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:10.486 12:42:09 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:10.486 12:42:09 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:10.486 12:42:09 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:10.486 12:42:09 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:10.486 12:42:09 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:10.486 12:42:09 -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:14:10.486 Found 0000:82:00.0 (0x8086 - 0x159b) 00:14:10.486 12:42:09 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:10.486 12:42:09 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:10.486 12:42:09 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:10.486 12:42:09 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:10.486 12:42:09 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:10.486 12:42:09 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:10.486 12:42:09 -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:14:10.486 Found 0000:82:00.1 (0x8086 - 0x159b) 00:14:10.486 12:42:09 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:10.486 12:42:09 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:10.486 12:42:09 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:10.486 12:42:09 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:10.486 12:42:09 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:10.486 12:42:09 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:10.486 12:42:09 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:10.486 12:42:09 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:10.486 12:42:09 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:10.486 12:42:09 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:10.486 12:42:09 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:10.486 12:42:09 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:10.486 12:42:09 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:14:10.486 Found 
net devices under 0000:82:00.0: cvl_0_0 00:14:10.486 12:42:09 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:14:10.486 12:42:09 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:10.487 12:42:09 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:10.487 12:42:09 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:10.487 12:42:09 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:10.487 12:42:09 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:14:10.487 Found net devices under 0000:82:00.1: cvl_0_1 00:14:10.487 12:42:09 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:14:10.487 12:42:09 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:14:10.487 12:42:09 -- nvmf/common.sh@403 -- # is_hw=yes 00:14:10.487 12:42:09 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:14:10.487 12:42:09 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:14:10.487 12:42:09 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:14:10.487 12:42:09 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:10.487 12:42:09 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:10.487 12:42:09 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:10.487 12:42:09 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:10.487 12:42:09 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:10.487 12:42:09 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:10.487 12:42:09 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:10.487 12:42:09 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:10.487 12:42:09 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:10.487 12:42:09 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:10.487 12:42:09 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:10.487 12:42:09 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:10.487 12:42:09 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:10.487 12:42:09 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:10.487 12:42:09 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:10.487 12:42:09 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:10.487 12:42:09 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:10.487 12:42:09 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:10.487 12:42:09 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:10.487 12:42:09 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:10.487 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:10.487 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:14:10.487 00:14:10.487 --- 10.0.0.2 ping statistics --- 00:14:10.487 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:10.487 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:14:10.487 12:42:09 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:10.487 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:10.487 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.100 ms 00:14:10.487 00:14:10.487 --- 10.0.0.1 ping statistics --- 00:14:10.487 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:10.487 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:14:10.487 12:42:09 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:10.487 12:42:09 -- nvmf/common.sh@411 -- # return 0 00:14:10.487 12:42:09 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:14:10.487 12:42:09 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:10.487 12:42:09 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:14:10.487 12:42:09 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:14:10.487 12:42:09 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:10.487 12:42:09 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:14:10.487 12:42:09 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:14:10.487 12:42:09 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:14:10.487 12:42:09 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:10.487 12:42:09 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:10.487 12:42:09 -- common/autotest_common.sh@10 -- # set +x 00:14:10.487 12:42:09 -- nvmf/common.sh@470 -- # nvmfpid=1174906 00:14:10.487 12:42:09 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:14:10.487 12:42:09 -- nvmf/common.sh@471 -- # waitforlisten 1174906 00:14:10.487 12:42:09 -- common/autotest_common.sh@817 -- # '[' -z 1174906 ']' 00:14:10.487 12:42:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:10.487 12:42:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:10.487 12:42:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:10.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:10.487 12:42:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:10.487 12:42:09 -- common/autotest_common.sh@10 -- # set +x 00:14:10.487 [2024-04-16 12:42:09.296979] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 00:14:10.487 [2024-04-16 12:42:09.297048] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:10.487 EAL: No free 2048 kB hugepages reported on node 1 00:14:10.487 [2024-04-16 12:42:09.382137] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:10.487 [2024-04-16 12:42:09.503293] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:10.487 [2024-04-16 12:42:09.503353] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:10.487 [2024-04-16 12:42:09.503370] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:10.487 [2024-04-16 12:42:09.503385] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:10.487 [2024-04-16 12:42:09.503397] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
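The nvmf_tcp_init sequence traced just above (common.sh@229 through @268) builds the entire network fixture for this suite: the target port moves into a private namespace, both ends get addresses on 10.0.0.0/24, the NVMe/TCP port is opened in the firewall, and connectivity is proven in both directions before the target app starts. Condensed from the trace, with the interface names and addresses exactly as logged:

    ip netns add cvl_0_0_ns_spdk                          # @248: target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # @251: move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # @254: initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # @255: target address
    ip link set cvl_0_1 up                                # @258
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up  # @260
    ip netns exec cvl_0_0_ns_spdk ip link set lo up       # @261
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # @264: NVMe/TCP listener port
    ping -c 1 10.0.0.2                                    # @267: initiator to target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # @268: target to initiator

Every target-side command after this point runs inside cvl_0_0_ns_spdk via NVMF_TARGET_NS_CMD, which is why the nvmfappstart at common.sh@469 above is prefixed with ip netns exec.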
00:14:10.487 [2024-04-16 12:42:09.503494] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:14:10.487 [2024-04-16 12:42:09.503549] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:14:10.487 [2024-04-16 12:42:09.503593] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:14:10.487 [2024-04-16 12:42:09.503597] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:11.421 12:42:10 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:11.421 12:42:10 -- common/autotest_common.sh@850 -- # return 0 00:14:11.421 12:42:10 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:11.421 12:42:10 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:11.421 12:42:10 -- common/autotest_common.sh@10 -- # set +x 00:14:11.421 12:42:10 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:11.421 12:42:10 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:11.421 12:42:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:11.421 12:42:10 -- common/autotest_common.sh@10 -- # set +x 00:14:11.421 [2024-04-16 12:42:10.294418] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:11.421 12:42:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:11.421 12:42:10 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:11.421 12:42:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:11.421 12:42:10 -- common/autotest_common.sh@10 -- # set +x 00:14:11.421 Malloc0 00:14:11.421 12:42:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:11.421 12:42:10 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:11.421 12:42:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:11.421 12:42:10 -- common/autotest_common.sh@10 -- # set +x 00:14:11.421 12:42:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:11.421 12:42:10 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:11.421 12:42:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:11.421 12:42:10 -- common/autotest_common.sh@10 -- # set +x 00:14:11.421 12:42:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:11.421 12:42:10 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:11.421 12:42:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:11.421 12:42:10 -- common/autotest_common.sh@10 -- # set +x 00:14:11.421 [2024-04-16 12:42:10.348258] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:11.421 12:42:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:11.421 12:42:10 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:14:11.421 12:42:10 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:14:11.421 12:42:10 -- nvmf/common.sh@521 -- # config=() 00:14:11.421 12:42:10 -- nvmf/common.sh@521 -- # local subsystem config 00:14:11.421 12:42:10 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:14:11.421 12:42:10 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:14:11.421 { 00:14:11.421 "params": { 00:14:11.421 "name": "Nvme$subsystem", 00:14:11.421 "trtype": "$TEST_TRANSPORT", 00:14:11.421 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:11.421 "adrfam": "ipv4", 00:14:11.421 "trsvcid": 
"$NVMF_PORT", 00:14:11.421 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:11.421 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:11.421 "hdgst": ${hdgst:-false}, 00:14:11.421 "ddgst": ${ddgst:-false} 00:14:11.421 }, 00:14:11.421 "method": "bdev_nvme_attach_controller" 00:14:11.421 } 00:14:11.421 EOF 00:14:11.421 )") 00:14:11.421 12:42:10 -- nvmf/common.sh@543 -- # cat 00:14:11.421 12:42:10 -- nvmf/common.sh@545 -- # jq . 00:14:11.421 12:42:10 -- nvmf/common.sh@546 -- # IFS=, 00:14:11.421 12:42:10 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:14:11.421 "params": { 00:14:11.421 "name": "Nvme1", 00:14:11.421 "trtype": "tcp", 00:14:11.421 "traddr": "10.0.0.2", 00:14:11.421 "adrfam": "ipv4", 00:14:11.421 "trsvcid": "4420", 00:14:11.421 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:11.421 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:11.421 "hdgst": false, 00:14:11.421 "ddgst": false 00:14:11.421 }, 00:14:11.421 "method": "bdev_nvme_attach_controller" 00:14:11.421 }' 00:14:11.421 [2024-04-16 12:42:10.394348] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 00:14:11.421 [2024-04-16 12:42:10.394415] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1175152 ] 00:14:11.421 EAL: No free 2048 kB hugepages reported on node 1 00:14:11.421 [2024-04-16 12:42:10.464884] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:11.679 [2024-04-16 12:42:10.580284] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:11.679 [2024-04-16 12:42:10.580332] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:11.679 [2024-04-16 12:42:10.580335] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:11.679 [2024-04-16 12:42:10.589342] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:14:11.937 I/O targets: 00:14:11.937 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:14:11.937 00:14:11.937 00:14:11.937 CUnit - A unit testing framework for C - Version 2.1-3 00:14:11.937 http://cunit.sourceforge.net/ 00:14:11.937 00:14:11.937 00:14:11.937 Suite: bdevio tests on: Nvme1n1 00:14:11.937 Test: blockdev write read block ...passed 00:14:12.195 Test: blockdev write zeroes read block ...passed 00:14:12.195 Test: blockdev write zeroes read no split ...passed 00:14:12.195 Test: blockdev write zeroes read split ...passed 00:14:12.195 Test: blockdev write zeroes read split partial ...passed 00:14:12.195 Test: blockdev reset ...[2024-04-16 12:42:11.082166] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:14:12.195 [2024-04-16 12:42:11.082304] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a8ca0 (9): Bad file descriptor 00:14:12.195 [2024-04-16 12:42:11.140647] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:14:12.195 passed 00:14:12.195 Test: blockdev write read 8 blocks ...passed 00:14:12.195 Test: blockdev write read size > 128k ...passed 00:14:12.195 Test: blockdev write read invalid size ...passed 00:14:12.195 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:12.195 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:12.195 Test: blockdev write read max offset ...passed 00:14:12.453 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:12.453 Test: blockdev writev readv 8 blocks ...passed 00:14:12.453 Test: blockdev writev readv 30 x 1block ...passed 00:14:12.453 Test: blockdev writev readv block ...passed 00:14:12.453 Test: blockdev writev readv size > 128k ...passed 00:14:12.453 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:12.453 Test: blockdev comparev and writev ...[2024-04-16 12:42:11.355258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:12.453 [2024-04-16 12:42:11.355293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:12.453 [2024-04-16 12:42:11.355318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:12.453 [2024-04-16 12:42:11.355335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:14:12.453 [2024-04-16 12:42:11.355843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:12.453 [2024-04-16 12:42:11.355868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:14:12.453 [2024-04-16 12:42:11.355890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:12.453 [2024-04-16 12:42:11.355906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:14:12.453 [2024-04-16 12:42:11.356415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:12.453 [2024-04-16 12:42:11.356439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:14:12.453 [2024-04-16 12:42:11.356461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:12.453 [2024-04-16 12:42:11.356477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:14:12.453 [2024-04-16 12:42:11.357046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:12.453 [2024-04-16 12:42:11.357070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:14:12.453 [2024-04-16 12:42:11.357092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:12.453 [2024-04-16 12:42:11.357108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:14:12.453 passed 00:14:12.453 Test: blockdev nvme passthru rw ...passed 00:14:12.453 Test: blockdev nvme passthru vendor specific ...[2024-04-16 12:42:11.439015] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:12.453 [2024-04-16 12:42:11.439042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:14:12.453 [2024-04-16 12:42:11.439220] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:12.453 [2024-04-16 12:42:11.439242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:14:12.453 [2024-04-16 12:42:11.439415] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:12.453 [2024-04-16 12:42:11.439437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:14:12.453 [2024-04-16 12:42:11.439629] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:12.453 [2024-04-16 12:42:11.439651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:14:12.453 passed 00:14:12.453 Test: blockdev nvme admin passthru ...passed 00:14:12.453 Test: blockdev copy ...passed 00:14:12.453 00:14:12.453 Run Summary: Type Total Ran Passed Failed Inactive 00:14:12.453 suites 1 1 n/a 0 0 00:14:12.453 tests 23 23 23 0 0 00:14:12.453 asserts 152 152 152 0 n/a 00:14:12.453 00:14:12.453 Elapsed time = 1.150 seconds 00:14:12.711 12:42:11 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:12.711 12:42:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:12.711 12:42:11 -- common/autotest_common.sh@10 -- # set +x 00:14:12.711 12:42:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:12.711 12:42:11 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:14:12.711 12:42:11 -- target/bdevio.sh@30 -- # nvmftestfini 00:14:12.711 12:42:11 -- nvmf/common.sh@477 -- # nvmfcleanup 00:14:12.711 12:42:11 -- nvmf/common.sh@117 -- # sync 00:14:12.711 12:42:11 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:12.711 12:42:11 -- nvmf/common.sh@120 -- # set +e 00:14:12.711 12:42:11 -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:12.711 12:42:11 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:12.712 rmmod nvme_tcp 00:14:12.712 rmmod nvme_fabrics 00:14:12.712 rmmod nvme_keyring 00:14:12.994 12:42:11 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:12.994 12:42:11 -- nvmf/common.sh@124 -- # set -e 00:14:12.994 12:42:11 -- nvmf/common.sh@125 -- # return 0 00:14:12.994 12:42:11 -- nvmf/common.sh@478 -- # '[' -n 1174906 ']' 00:14:12.994 12:42:11 -- nvmf/common.sh@479 -- # killprocess 1174906 00:14:12.994 12:42:11 -- common/autotest_common.sh@936 -- # '[' -z 1174906 ']' 00:14:12.994 12:42:11 -- common/autotest_common.sh@940 -- # kill -0 1174906 00:14:12.994 12:42:11 -- common/autotest_common.sh@941 -- # uname 00:14:12.994 12:42:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:12.994 12:42:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1174906 00:14:12.994 12:42:11 -- 
common/autotest_common.sh@942 -- # process_name=reactor_3 00:14:12.994 12:42:11 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:14:12.994 12:42:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1174906' 00:14:12.994 killing process with pid 1174906 00:14:12.994 12:42:11 -- common/autotest_common.sh@955 -- # kill 1174906 00:14:12.994 12:42:11 -- common/autotest_common.sh@960 -- # wait 1174906 00:14:13.258 12:42:12 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:14:13.258 12:42:12 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:14:13.259 12:42:12 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:14:13.259 12:42:12 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:13.259 12:42:12 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:13.259 12:42:12 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:13.259 12:42:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:13.259 12:42:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:15.160 12:42:14 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:15.160 00:14:15.160 real 0m7.584s 00:14:15.160 user 0m14.011s 00:14:15.160 sys 0m2.402s 00:14:15.160 12:42:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:15.160 12:42:14 -- common/autotest_common.sh@10 -- # set +x 00:14:15.160 ************************************ 00:14:15.160 END TEST nvmf_bdevio 00:14:15.160 ************************************ 00:14:15.160 12:42:14 -- nvmf/nvmf.sh@58 -- # '[' tcp = tcp ']' 00:14:15.160 12:42:14 -- nvmf/nvmf.sh@59 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:14:15.160 12:42:14 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:14:15.160 12:42:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:15.160 12:42:14 -- common/autotest_common.sh@10 -- # set +x 00:14:15.418 ************************************ 00:14:15.418 START TEST nvmf_bdevio_no_huge 00:14:15.418 ************************************ 00:14:15.418 12:42:14 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:14:15.418 * Looking for test storage... 
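The no-huge suite starting here repeats the bdevio flow that just passed, with hugepages disabled. In both runs the target is configured through the same five rpc_cmd calls traced at target/bdevio.sh@18 through @22; expressed as the equivalent scripts/rpc.py invocations against the running nvmf_tgt, they are roughly:

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192     # transport opts as traced
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0        # 64 MiB bdev, 512 B blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

rpc_cmd is the test harness's in-script wrapper around the same RPCs, so this sketch assumes scripts/rpc.py's default /var/tmp/spdk.sock socket rather than whatever socket the harness selects.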
00:14:15.418 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:15.418 12:42:14 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:15.418 12:42:14 -- nvmf/common.sh@7 -- # uname -s 00:14:15.418 12:42:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:15.418 12:42:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:15.418 12:42:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:15.418 12:42:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:15.418 12:42:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:15.418 12:42:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:15.418 12:42:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:15.418 12:42:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:15.418 12:42:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:15.418 12:42:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:15.419 12:42:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:14:15.419 12:42:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:14:15.419 12:42:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:15.419 12:42:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:15.419 12:42:14 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:15.419 12:42:14 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:15.419 12:42:14 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:15.419 12:42:14 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:15.419 12:42:14 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:15.419 12:42:14 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:15.419 12:42:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.419 12:42:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.419 12:42:14 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.419 12:42:14 -- paths/export.sh@5 -- # export PATH 00:14:15.419 12:42:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.419 12:42:14 -- nvmf/common.sh@47 -- # : 0 00:14:15.419 12:42:14 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:15.419 12:42:14 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:15.419 12:42:14 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:15.419 12:42:14 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:15.419 12:42:14 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:15.419 12:42:14 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:15.419 12:42:14 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:15.419 12:42:14 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:15.419 12:42:14 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:15.419 12:42:14 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:15.419 12:42:14 -- target/bdevio.sh@14 -- # nvmftestinit 00:14:15.419 12:42:14 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:14:15.419 12:42:14 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:15.419 12:42:14 -- nvmf/common.sh@437 -- # prepare_net_devs 00:14:15.419 12:42:14 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:14:15.419 12:42:14 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:14:15.419 12:42:14 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:15.419 12:42:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:15.419 12:42:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:15.419 12:42:14 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:14:15.419 12:42:14 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:14:15.419 12:42:14 -- nvmf/common.sh@285 -- # xtrace_disable 00:14:15.419 12:42:14 -- common/autotest_common.sh@10 -- # set +x 00:14:17.947 12:42:16 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:17.947 12:42:16 -- nvmf/common.sh@291 -- # pci_devs=() 00:14:17.947 12:42:16 -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:17.947 12:42:16 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:17.947 12:42:16 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:17.947 12:42:16 -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:17.947 12:42:16 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:17.947 12:42:16 -- nvmf/common.sh@295 -- # net_devs=() 00:14:17.947 12:42:16 -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:17.947 12:42:16 -- nvmf/common.sh@296 
-- # e810=() 00:14:17.947 12:42:16 -- nvmf/common.sh@296 -- # local -ga e810 00:14:17.947 12:42:16 -- nvmf/common.sh@297 -- # x722=() 00:14:17.947 12:42:16 -- nvmf/common.sh@297 -- # local -ga x722 00:14:17.947 12:42:16 -- nvmf/common.sh@298 -- # mlx=() 00:14:17.947 12:42:16 -- nvmf/common.sh@298 -- # local -ga mlx 00:14:17.947 12:42:16 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:17.947 12:42:16 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:17.947 12:42:16 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:17.947 12:42:16 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:17.947 12:42:16 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:17.947 12:42:16 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:17.947 12:42:16 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:17.947 12:42:16 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:17.947 12:42:16 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:17.947 12:42:16 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:17.947 12:42:16 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:17.947 12:42:16 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:17.947 12:42:16 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:17.947 12:42:16 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:17.947 12:42:16 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:17.947 12:42:16 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:17.947 12:42:16 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:17.947 12:42:16 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:17.947 12:42:16 -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:14:17.947 Found 0000:82:00.0 (0x8086 - 0x159b) 00:14:17.947 12:42:16 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:17.947 12:42:16 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:17.947 12:42:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:17.947 12:42:16 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:17.947 12:42:16 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:17.947 12:42:16 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:17.947 12:42:16 -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:14:17.947 Found 0000:82:00.1 (0x8086 - 0x159b) 00:14:17.947 12:42:16 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:17.947 12:42:16 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:17.947 12:42:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:17.947 12:42:16 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:17.947 12:42:16 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:17.947 12:42:16 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:17.947 12:42:16 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:17.947 12:42:16 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:17.947 12:42:16 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:17.947 12:42:16 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:17.947 12:42:16 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:17.947 12:42:16 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:17.947 12:42:16 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:14:17.947 Found 
net devices under 0000:82:00.0: cvl_0_0 00:14:17.947 12:42:16 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:14:17.947 12:42:16 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:17.947 12:42:16 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:17.947 12:42:16 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:17.947 12:42:16 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:17.947 12:42:16 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:14:17.947 Found net devices under 0000:82:00.1: cvl_0_1 00:14:17.947 12:42:16 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:14:17.947 12:42:16 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:14:17.947 12:42:16 -- nvmf/common.sh@403 -- # is_hw=yes 00:14:17.947 12:42:16 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:14:17.947 12:42:16 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:14:17.947 12:42:16 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:14:17.947 12:42:16 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:17.947 12:42:16 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:17.947 12:42:16 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:17.947 12:42:16 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:17.947 12:42:16 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:17.947 12:42:16 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:17.947 12:42:16 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:17.947 12:42:16 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:17.947 12:42:16 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:17.947 12:42:16 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:17.947 12:42:16 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:17.947 12:42:16 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:17.948 12:42:16 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:17.948 12:42:16 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:17.948 12:42:16 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:17.948 12:42:16 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:17.948 12:42:16 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:17.948 12:42:16 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:17.948 12:42:16 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:17.948 12:42:16 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:17.948 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:17.948 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.264 ms 00:14:17.948 00:14:17.948 --- 10.0.0.2 ping statistics --- 00:14:17.948 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:17.948 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:14:17.948 12:42:16 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:17.948 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:17.948 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.160 ms 00:14:17.948 00:14:17.948 --- 10.0.0.1 ping statistics --- 00:14:17.948 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:17.948 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:14:17.948 12:42:16 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:17.948 12:42:16 -- nvmf/common.sh@411 -- # return 0 00:14:17.948 12:42:16 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:14:17.948 12:42:16 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:17.948 12:42:16 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:14:17.948 12:42:16 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:14:17.948 12:42:16 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:17.948 12:42:16 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:14:17.948 12:42:16 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:14:17.948 12:42:16 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:14:17.948 12:42:16 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:17.948 12:42:16 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:17.948 12:42:16 -- common/autotest_common.sh@10 -- # set +x 00:14:17.948 12:42:16 -- nvmf/common.sh@470 -- # nvmfpid=1177636 00:14:17.948 12:42:16 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:14:17.948 12:42:16 -- nvmf/common.sh@471 -- # waitforlisten 1177636 00:14:17.948 12:42:16 -- common/autotest_common.sh@817 -- # '[' -z 1177636 ']' 00:14:17.948 12:42:16 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:17.948 12:42:16 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:17.948 12:42:16 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:17.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:17.948 12:42:16 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:17.948 12:42:16 -- common/autotest_common.sh@10 -- # set +x 00:14:18.206 [2024-04-16 12:42:17.029992] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 00:14:18.206 [2024-04-16 12:42:17.030073] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:14:18.206 [2024-04-16 12:42:17.115006] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:18.206 [2024-04-16 12:42:17.236070] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:18.206 [2024-04-16 12:42:17.236144] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:18.206 [2024-04-16 12:42:17.236162] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:18.206 [2024-04-16 12:42:17.236176] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:18.206 [2024-04-16 12:42:17.236188] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
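Relative to the previous suite, the material difference is the launch flags: both the target and the bdevio initiator run with hugepages disabled and a fixed 1024 MB heap. Side by side, from the nvmfappstart trace at common.sh@469 in each suite and the bdevio.sh@24 invocations (paths abbreviated):

    # nvmf_bdevio (hugepage-backed target):
    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78

    # nvmf_bdevio_no_huge (malloc-backed; the EAL parameters line just above
    # correspondingly reports -m 1024 --no-huge --iova-mode=va):
    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78

    # the initiator side gets the same treatment:
    bdevio --json /dev/fd/62 --no-huge -s 1024

-s sets the SPDK app's memory size in MB, which under --no-huge is served from ordinary malloc'd pages instead of hugetlbfs; the identical 23-test summary at the end of both suites is the point of the exercise.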
00:14:18.206 [2024-04-16 12:42:17.236284] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:14:18.206 [2024-04-16 12:42:17.236348] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:14:18.206 [2024-04-16 12:42:17.236397] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:14:18.206 [2024-04-16 12:42:17.236400] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:19.139 12:42:17 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:19.139 12:42:17 -- common/autotest_common.sh@850 -- # return 0 00:14:19.139 12:42:17 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:19.139 12:42:17 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:19.139 12:42:17 -- common/autotest_common.sh@10 -- # set +x 00:14:19.139 12:42:17 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:19.139 12:42:17 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:19.139 12:42:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:19.139 12:42:17 -- common/autotest_common.sh@10 -- # set +x 00:14:19.139 [2024-04-16 12:42:17.989387] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:19.139 12:42:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:19.139 12:42:17 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:19.139 12:42:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:19.139 12:42:17 -- common/autotest_common.sh@10 -- # set +x 00:14:19.139 Malloc0 00:14:19.139 12:42:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:19.140 12:42:18 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:19.140 12:42:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:19.140 12:42:18 -- common/autotest_common.sh@10 -- # set +x 00:14:19.140 12:42:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:19.140 12:42:18 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:19.140 12:42:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:19.140 12:42:18 -- common/autotest_common.sh@10 -- # set +x 00:14:19.140 12:42:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:19.140 12:42:18 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:19.140 12:42:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:19.140 12:42:18 -- common/autotest_common.sh@10 -- # set +x 00:14:19.140 [2024-04-16 12:42:18.027693] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:19.140 12:42:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:19.140 12:42:18 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:14:19.140 12:42:18 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:14:19.140 12:42:18 -- nvmf/common.sh@521 -- # config=() 00:14:19.140 12:42:18 -- nvmf/common.sh@521 -- # local subsystem config 00:14:19.140 12:42:18 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:14:19.140 12:42:18 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:14:19.140 { 00:14:19.140 "params": { 00:14:19.140 "name": "Nvme$subsystem", 00:14:19.140 "trtype": "$TEST_TRANSPORT", 00:14:19.140 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:19.140 "adrfam": "ipv4", 00:14:19.140 
"trsvcid": "$NVMF_PORT", 00:14:19.140 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:19.140 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:19.140 "hdgst": ${hdgst:-false}, 00:14:19.140 "ddgst": ${ddgst:-false} 00:14:19.140 }, 00:14:19.140 "method": "bdev_nvme_attach_controller" 00:14:19.140 } 00:14:19.140 EOF 00:14:19.140 )") 00:14:19.140 12:42:18 -- nvmf/common.sh@543 -- # cat 00:14:19.140 12:42:18 -- nvmf/common.sh@545 -- # jq . 00:14:19.140 12:42:18 -- nvmf/common.sh@546 -- # IFS=, 00:14:19.140 12:42:18 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:14:19.140 "params": { 00:14:19.140 "name": "Nvme1", 00:14:19.140 "trtype": "tcp", 00:14:19.140 "traddr": "10.0.0.2", 00:14:19.140 "adrfam": "ipv4", 00:14:19.140 "trsvcid": "4420", 00:14:19.140 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:19.140 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:19.140 "hdgst": false, 00:14:19.140 "ddgst": false 00:14:19.140 }, 00:14:19.140 "method": "bdev_nvme_attach_controller" 00:14:19.140 }' 00:14:19.140 [2024-04-16 12:42:18.070424] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 00:14:19.140 [2024-04-16 12:42:18.070494] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1177792 ] 00:14:19.140 [2024-04-16 12:42:18.144213] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:19.397 [2024-04-16 12:42:18.259002] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:19.397 [2024-04-16 12:42:18.259051] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:19.397 [2024-04-16 12:42:18.259054] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:19.397 [2024-04-16 12:42:18.268064] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:14:19.655 I/O targets: 00:14:19.655 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:14:19.655 00:14:19.655 00:14:19.655 CUnit - A unit testing framework for C - Version 2.1-3 00:14:19.655 http://cunit.sourceforge.net/ 00:14:19.655 00:14:19.655 00:14:19.655 Suite: bdevio tests on: Nvme1n1 00:14:19.655 Test: blockdev write read block ...passed 00:14:19.655 Test: blockdev write zeroes read block ...passed 00:14:19.655 Test: blockdev write zeroes read no split ...passed 00:14:19.655 Test: blockdev write zeroes read split ...passed 00:14:19.913 Test: blockdev write zeroes read split partial ...passed 00:14:19.913 Test: blockdev reset ...[2024-04-16 12:42:18.760045] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:14:19.913 [2024-04-16 12:42:18.760160] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fbce0 (9): Bad file descriptor 00:14:19.913 [2024-04-16 12:42:18.772221] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:14:19.913 passed 00:14:19.913 Test: blockdev write read 8 blocks ...passed 00:14:19.913 Test: blockdev write read size > 128k ...passed 00:14:19.913 Test: blockdev write read invalid size ...passed 00:14:19.913 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:19.913 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:19.913 Test: blockdev write read max offset ...passed 00:14:19.913 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:19.913 Test: blockdev writev readv 8 blocks ...passed 00:14:19.913 Test: blockdev writev readv 30 x 1block ...passed 00:14:19.913 Test: blockdev writev readv block ...passed 00:14:20.172 Test: blockdev writev readv size > 128k ...passed 00:14:20.172 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:20.172 Test: blockdev comparev and writev ...[2024-04-16 12:42:18.986529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:20.172 [2024-04-16 12:42:18.986582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:20.172 [2024-04-16 12:42:18.986608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:20.172 [2024-04-16 12:42:18.986625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:14:20.172 [2024-04-16 12:42:18.987083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:20.172 [2024-04-16 12:42:18.987107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:14:20.172 [2024-04-16 12:42:18.987129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:20.172 [2024-04-16 12:42:18.987145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:14:20.172 [2024-04-16 12:42:18.987634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:20.172 [2024-04-16 12:42:18.987657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:14:20.172 [2024-04-16 12:42:18.987679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:20.172 [2024-04-16 12:42:18.987694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:14:20.172 [2024-04-16 12:42:18.988179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:20.172 [2024-04-16 12:42:18.988203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:14:20.172 [2024-04-16 12:42:18.988224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:20.172 [2024-04-16 12:42:18.988240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:14:20.172 passed 00:14:20.172 Test: blockdev nvme passthru rw ...passed 00:14:20.172 Test: blockdev nvme passthru vendor specific ...[2024-04-16 12:42:19.070909] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:20.172 [2024-04-16 12:42:19.070937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:14:20.172 [2024-04-16 12:42:19.071114] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:20.172 [2024-04-16 12:42:19.071136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:14:20.172 [2024-04-16 12:42:19.071308] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:20.172 [2024-04-16 12:42:19.071330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:14:20.172 [2024-04-16 12:42:19.071507] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:20.172 [2024-04-16 12:42:19.071529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:14:20.172 passed 00:14:20.172 Test: blockdev nvme admin passthru ...passed 00:14:20.172 Test: blockdev copy ...passed 00:14:20.172 00:14:20.172 Run Summary: Type Total Ran Passed Failed Inactive 00:14:20.172 suites 1 1 n/a 0 0 00:14:20.172 tests 23 23 23 0 0 00:14:20.172 asserts 152 152 152 0 n/a 00:14:20.172 00:14:20.172 Elapsed time = 1.175 seconds 00:14:20.738 12:42:19 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:20.738 12:42:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:20.738 12:42:19 -- common/autotest_common.sh@10 -- # set +x 00:14:20.738 12:42:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:20.738 12:42:19 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:14:20.738 12:42:19 -- target/bdevio.sh@30 -- # nvmftestfini 00:14:20.738 12:42:19 -- nvmf/common.sh@477 -- # nvmfcleanup 00:14:20.738 12:42:19 -- nvmf/common.sh@117 -- # sync 00:14:20.738 12:42:19 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:20.738 12:42:19 -- nvmf/common.sh@120 -- # set +e 00:14:20.738 12:42:19 -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:20.738 12:42:19 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:20.738 rmmod nvme_tcp 00:14:20.738 rmmod nvme_fabrics 00:14:20.738 rmmod nvme_keyring 00:14:20.738 12:42:19 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:20.738 12:42:19 -- nvmf/common.sh@124 -- # set -e 00:14:20.738 12:42:19 -- nvmf/common.sh@125 -- # return 0 00:14:20.738 12:42:19 -- nvmf/common.sh@478 -- # '[' -n 1177636 ']' 00:14:20.738 12:42:19 -- nvmf/common.sh@479 -- # killprocess 1177636 00:14:20.738 12:42:19 -- common/autotest_common.sh@936 -- # '[' -z 1177636 ']' 00:14:20.738 12:42:19 -- common/autotest_common.sh@940 -- # kill -0 1177636 00:14:20.738 12:42:19 -- common/autotest_common.sh@941 -- # uname 00:14:20.738 12:42:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:20.738 12:42:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1177636 00:14:20.738 12:42:19 -- 
common/autotest_common.sh@942 -- # process_name=reactor_3 00:14:20.738 12:42:19 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:14:20.738 12:42:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1177636' 00:14:20.738 killing process with pid 1177636 00:14:20.738 12:42:19 -- common/autotest_common.sh@955 -- # kill 1177636 00:14:20.738 12:42:19 -- common/autotest_common.sh@960 -- # wait 1177636 00:14:20.996 12:42:20 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:14:20.996 12:42:20 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:14:20.996 12:42:20 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:14:20.996 12:42:20 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:20.996 12:42:20 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:20.996 12:42:20 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:20.996 12:42:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:20.996 12:42:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:23.531 12:42:22 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:23.531 00:14:23.531 real 0m7.802s 00:14:23.531 user 0m14.257s 00:14:23.531 sys 0m2.928s 00:14:23.531 12:42:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:23.531 12:42:22 -- common/autotest_common.sh@10 -- # set +x 00:14:23.531 ************************************ 00:14:23.531 END TEST nvmf_bdevio_no_huge 00:14:23.531 ************************************ 00:14:23.531 12:42:22 -- nvmf/nvmf.sh@60 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:14:23.531 12:42:22 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:23.531 12:42:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:23.531 12:42:22 -- common/autotest_common.sh@10 -- # set +x 00:14:23.531 ************************************ 00:14:23.531 START TEST nvmf_tls 00:14:23.531 ************************************ 00:14:23.531 12:42:22 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:14:23.531 * Looking for test storage... 
00:14:23.531 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:23.531 12:42:22 -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:23.531 12:42:22 -- nvmf/common.sh@7 -- # uname -s 00:14:23.531 12:42:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:23.531 12:42:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:23.531 12:42:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:23.531 12:42:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:23.531 12:42:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:23.531 12:42:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:23.531 12:42:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:23.531 12:42:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:23.531 12:42:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:23.531 12:42:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:23.531 12:42:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:14:23.531 12:42:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:14:23.531 12:42:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:23.531 12:42:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:23.531 12:42:22 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:23.531 12:42:22 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:23.531 12:42:22 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:23.531 12:42:22 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:23.531 12:42:22 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:23.531 12:42:22 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:23.531 12:42:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.531 12:42:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.531 12:42:22 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.531 12:42:22 -- paths/export.sh@5 -- # export PATH 00:14:23.531 12:42:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.531 12:42:22 -- nvmf/common.sh@47 -- # : 0 00:14:23.531 12:42:22 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:23.531 12:42:22 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:23.531 12:42:22 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:23.531 12:42:22 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:23.531 12:42:22 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:23.531 12:42:22 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:23.531 12:42:22 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:23.531 12:42:22 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:23.531 12:42:22 -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:23.531 12:42:22 -- target/tls.sh@62 -- # nvmftestinit 00:14:23.531 12:42:22 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:14:23.531 12:42:22 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:23.531 12:42:22 -- nvmf/common.sh@437 -- # prepare_net_devs 00:14:23.531 12:42:22 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:14:23.531 12:42:22 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:14:23.531 12:42:22 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:23.531 12:42:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:23.531 12:42:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:23.532 12:42:22 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:14:23.532 12:42:22 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:14:23.532 12:42:22 -- nvmf/common.sh@285 -- # xtrace_disable 00:14:23.532 12:42:22 -- common/autotest_common.sh@10 -- # set +x 00:14:26.067 12:42:24 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:26.067 12:42:24 -- nvmf/common.sh@291 -- # pci_devs=() 00:14:26.067 12:42:24 -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:26.067 12:42:24 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:26.067 12:42:24 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:26.067 12:42:24 -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:26.067 12:42:24 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:26.067 12:42:24 -- nvmf/common.sh@295 -- # net_devs=() 00:14:26.067 12:42:24 -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:26.067 12:42:24 -- nvmf/common.sh@296 -- # e810=() 00:14:26.067 
12:42:24 -- nvmf/common.sh@296 -- # local -ga e810 00:14:26.067 12:42:24 -- nvmf/common.sh@297 -- # x722=() 00:14:26.067 12:42:24 -- nvmf/common.sh@297 -- # local -ga x722 00:14:26.067 12:42:24 -- nvmf/common.sh@298 -- # mlx=() 00:14:26.067 12:42:24 -- nvmf/common.sh@298 -- # local -ga mlx 00:14:26.067 12:42:24 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:26.067 12:42:24 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:26.067 12:42:24 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:26.067 12:42:24 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:26.067 12:42:24 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:26.067 12:42:24 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:26.067 12:42:24 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:26.067 12:42:24 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:26.067 12:42:24 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:26.067 12:42:24 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:26.067 12:42:24 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:26.067 12:42:24 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:26.067 12:42:24 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:26.067 12:42:24 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:26.067 12:42:24 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:26.067 12:42:24 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:26.067 12:42:24 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:26.067 12:42:24 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:26.067 12:42:24 -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:14:26.067 Found 0000:82:00.0 (0x8086 - 0x159b) 00:14:26.067 12:42:24 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:26.067 12:42:24 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:26.067 12:42:24 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:26.067 12:42:24 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:26.067 12:42:24 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:26.067 12:42:24 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:26.067 12:42:24 -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:14:26.067 Found 0000:82:00.1 (0x8086 - 0x159b) 00:14:26.067 12:42:24 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:26.067 12:42:24 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:26.067 12:42:24 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:26.067 12:42:24 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:26.067 12:42:24 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:26.067 12:42:24 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:26.067 12:42:24 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:26.067 12:42:24 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:26.067 12:42:24 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:26.067 12:42:24 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:26.067 12:42:24 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:26.067 12:42:24 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:26.067 12:42:24 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:14:26.067 Found net devices under 
0000:82:00.0: cvl_0_0 00:14:26.067 12:42:24 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:14:26.067 12:42:24 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:26.067 12:42:24 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:26.067 12:42:24 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:26.067 12:42:24 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:26.067 12:42:24 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:14:26.067 Found net devices under 0000:82:00.1: cvl_0_1 00:14:26.067 12:42:24 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:14:26.068 12:42:24 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:14:26.068 12:42:24 -- nvmf/common.sh@403 -- # is_hw=yes 00:14:26.068 12:42:24 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:14:26.068 12:42:24 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:14:26.068 12:42:24 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:14:26.068 12:42:24 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:26.068 12:42:24 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:26.068 12:42:24 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:26.068 12:42:24 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:26.068 12:42:24 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:26.068 12:42:24 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:26.068 12:42:24 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:26.068 12:42:24 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:26.068 12:42:24 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:26.068 12:42:24 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:26.068 12:42:24 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:26.068 12:42:24 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:26.068 12:42:24 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:26.068 12:42:24 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:26.068 12:42:24 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:26.068 12:42:24 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:26.068 12:42:24 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:26.068 12:42:24 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:26.068 12:42:24 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:26.068 12:42:24 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:26.068 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:26.068 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.283 ms 00:14:26.068 00:14:26.068 --- 10.0.0.2 ping statistics --- 00:14:26.068 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:26.068 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:14:26.068 12:42:24 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:26.068 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:26.068 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.162 ms 00:14:26.068 00:14:26.068 --- 10.0.0.1 ping statistics --- 00:14:26.068 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:26.068 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:14:26.068 12:42:24 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:26.068 12:42:24 -- nvmf/common.sh@411 -- # return 0 00:14:26.068 12:42:24 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:14:26.068 12:42:24 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:26.068 12:42:24 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:14:26.068 12:42:24 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:14:26.068 12:42:24 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:26.068 12:42:24 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:14:26.068 12:42:24 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:14:26.068 12:42:24 -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:14:26.068 12:42:24 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:26.068 12:42:24 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:26.068 12:42:24 -- common/autotest_common.sh@10 -- # set +x 00:14:26.068 12:42:24 -- nvmf/common.sh@470 -- # nvmfpid=1180288 00:14:26.068 12:42:24 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:14:26.068 12:42:24 -- nvmf/common.sh@471 -- # waitforlisten 1180288 00:14:26.068 12:42:24 -- common/autotest_common.sh@817 -- # '[' -z 1180288 ']' 00:14:26.068 12:42:24 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:26.068 12:42:24 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:26.068 12:42:24 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:26.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:26.068 12:42:24 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:26.068 12:42:24 -- common/autotest_common.sh@10 -- # set +x 00:14:26.068 [2024-04-16 12:42:25.012270] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 00:14:26.068 [2024-04-16 12:42:25.012340] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:26.068 EAL: No free 2048 kB hugepages reported on node 1 00:14:26.068 [2024-04-16 12:42:25.091362] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:26.326 [2024-04-16 12:42:25.207511] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:26.326 [2024-04-16 12:42:25.207598] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:26.326 [2024-04-16 12:42:25.207615] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:26.326 [2024-04-16 12:42:25.207627] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:26.326 [2024-04-16 12:42:25.207637] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
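The ping exchange above is the connectivity gate for everything that follows: nvmftestinit has moved the first E810 port (cvl_0_0, 10.0.0.2) into the cvl_0_0_ns_spdk namespace to play the target, while cvl_0_1 (10.0.0.1) stays in the root namespace as the initiator, so the TLS tests run NVMe/TCP over a real NIC-to-NIC path on a single host. A minimal sketch of the equivalent setup, using the interface names and addresses printed in the trace (both are harness-specific, not generic):

# two-namespace NVMe/TCP topology, as built by nvmf_tcp_init above
ip netns add cvl_0_0_ns_spdk                  # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator address, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                            # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator

The nvmf_tgt starting in the trace below is accordingly launched under "ip netns exec cvl_0_0_ns_spdk", which is why its listener on 10.0.0.2:4420 is reachable from the root namespace.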
00:14:26.326 [2024-04-16 12:42:25.207665] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:26.326 12:42:25 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:26.326 12:42:25 -- common/autotest_common.sh@850 -- # return 0 00:14:26.326 12:42:25 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:26.326 12:42:25 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:26.326 12:42:25 -- common/autotest_common.sh@10 -- # set +x 00:14:26.326 12:42:25 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:26.326 12:42:25 -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:14:26.326 12:42:25 -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:14:26.583 true 00:14:26.583 12:42:25 -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:26.583 12:42:25 -- target/tls.sh@73 -- # jq -r .tls_version 00:14:26.841 12:42:25 -- target/tls.sh@73 -- # version=0 00:14:26.841 12:42:25 -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:14:26.841 12:42:25 -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:14:27.099 12:42:25 -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:27.099 12:42:25 -- target/tls.sh@81 -- # jq -r .tls_version 00:14:27.357 12:42:26 -- target/tls.sh@81 -- # version=13 00:14:27.357 12:42:26 -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:14:27.357 12:42:26 -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:14:27.614 12:42:26 -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:27.614 12:42:26 -- target/tls.sh@89 -- # jq -r .tls_version 00:14:27.872 12:42:26 -- target/tls.sh@89 -- # version=7 00:14:27.872 12:42:26 -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:14:27.872 12:42:26 -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:27.872 12:42:26 -- target/tls.sh@96 -- # jq -r .enable_ktls 00:14:28.130 12:42:26 -- target/tls.sh@96 -- # ktls=false 00:14:28.130 12:42:26 -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:14:28.130 12:42:26 -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:14:28.388 12:42:27 -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:28.388 12:42:27 -- target/tls.sh@104 -- # jq -r .enable_ktls 00:14:28.388 12:42:27 -- target/tls.sh@104 -- # ktls=true 00:14:28.388 12:42:27 -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:14:28.388 12:42:27 -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:14:28.646 12:42:27 -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:28.646 12:42:27 -- target/tls.sh@112 -- # jq -r .enable_ktls 00:14:28.904 12:42:27 -- target/tls.sh@112 -- # ktls=false 00:14:28.904 12:42:27 -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:14:28.904 12:42:27 -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 
00:14:28.904 12:42:27 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:14:28.904 12:42:27 -- nvmf/common.sh@691 -- # local prefix key digest 00:14:28.904 12:42:27 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:14:28.904 12:42:27 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:14:28.904 12:42:27 -- nvmf/common.sh@693 -- # digest=1 00:14:28.904 12:42:27 -- nvmf/common.sh@694 -- # python - 00:14:28.904 12:42:27 -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:14:28.904 12:42:27 -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:14:28.904 12:42:27 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:14:28.904 12:42:27 -- nvmf/common.sh@691 -- # local prefix key digest 00:14:28.904 12:42:27 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:14:28.904 12:42:27 -- nvmf/common.sh@693 -- # key=ffeeddccbbaa99887766554433221100 00:14:28.904 12:42:27 -- nvmf/common.sh@693 -- # digest=1 00:14:28.904 12:42:27 -- nvmf/common.sh@694 -- # python - 00:14:29.162 12:42:28 -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:14:29.162 12:42:28 -- target/tls.sh@121 -- # mktemp 00:14:29.162 12:42:28 -- target/tls.sh@121 -- # key_path=/tmp/tmp.T8CamQrp4N 00:14:29.162 12:42:28 -- target/tls.sh@122 -- # mktemp 00:14:29.162 12:42:28 -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.X0Tnjdad5p 00:14:29.162 12:42:28 -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:14:29.162 12:42:28 -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:14:29.162 12:42:28 -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.T8CamQrp4N 00:14:29.162 12:42:28 -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.X0Tnjdad5p 00:14:29.162 12:42:28 -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:14:29.420 12:42:28 -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:14:29.678 12:42:28 -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.T8CamQrp4N 00:14:29.678 12:42:28 -- target/tls.sh@49 -- # local key=/tmp/tmp.T8CamQrp4N 00:14:29.678 12:42:28 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:29.935 [2024-04-16 12:42:28.890656] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:29.935 12:42:28 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:30.194 12:42:29 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:14:30.452 [2024-04-16 12:42:29.432138] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:30.452 [2024-04-16 12:42:29.432383] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:30.452 12:42:29 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:30.710 malloc0 00:14:30.710 12:42:29 -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:30.968 12:42:29 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.T8CamQrp4N 00:14:31.226 [2024-04-16 12:42:30.173478] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:14:31.226 12:42:30 -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.T8CamQrp4N 00:14:31.226 EAL: No free 2048 kB hugepages reported on node 1 00:14:43.460 Initializing NVMe Controllers 00:14:43.460 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:43.460 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:43.460 Initialization complete. Launching workers. 00:14:43.460 ======================================================== 00:14:43.460 Latency(us) 00:14:43.460 Device Information : IOPS MiB/s Average min max 00:14:43.460 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7583.89 29.62 8441.85 1273.08 9445.95 00:14:43.460 ======================================================== 00:14:43.460 Total : 7583.89 29.62 8441.85 1273.08 9445.95 00:14:43.460 00:14:43.460 12:42:40 -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.T8CamQrp4N 00:14:43.460 12:42:40 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:43.460 12:42:40 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:43.460 12:42:40 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:43.460 12:42:40 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.T8CamQrp4N' 00:14:43.460 12:42:40 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:43.460 12:42:40 -- target/tls.sh@28 -- # bdevperf_pid=1182065 00:14:43.460 12:42:40 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:43.460 12:42:40 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:43.460 12:42:40 -- target/tls.sh@31 -- # waitforlisten 1182065 /var/tmp/bdevperf.sock 00:14:43.460 12:42:40 -- common/autotest_common.sh@817 -- # '[' -z 1182065 ']' 00:14:43.460 12:42:40 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:43.460 12:42:40 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:43.460 12:42:40 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:43.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:43.460 12:42:40 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:43.460 12:42:40 -- common/autotest_common.sh@10 -- # set +x 00:14:43.460 [2024-04-16 12:42:40.348152] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 
00:14:43.460 [2024-04-16 12:42:40.348226] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1182065 ] 00:14:43.460 EAL: No free 2048 kB hugepages reported on node 1 00:14:43.460 [2024-04-16 12:42:40.414679] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:43.460 [2024-04-16 12:42:40.519266] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:43.460 12:42:40 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:43.460 12:42:40 -- common/autotest_common.sh@850 -- # return 0 00:14:43.460 12:42:40 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.T8CamQrp4N 00:14:43.460 [2024-04-16 12:42:40.850737] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:43.460 [2024-04-16 12:42:40.850849] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:43.460 TLSTESTn1 00:14:43.460 12:42:40 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:14:43.460 Running I/O for 10 seconds... 00:14:53.438 00:14:53.438 Latency(us) 00:14:53.438 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:53.438 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:53.438 Verification LBA range: start 0x0 length 0x2000 00:14:53.438 TLSTESTn1 : 10.04 3143.42 12.28 0.00 0.00 40614.43 10534.31 100197.26 00:14:53.438 =================================================================================================================== 00:14:53.438 Total : 3143.42 12.28 0.00 0.00 40614.43 10534.31 100197.26 00:14:53.438 0 00:14:53.438 12:42:51 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:53.438 12:42:51 -- target/tls.sh@45 -- # killprocess 1182065 00:14:53.438 12:42:51 -- common/autotest_common.sh@936 -- # '[' -z 1182065 ']' 00:14:53.438 12:42:51 -- common/autotest_common.sh@940 -- # kill -0 1182065 00:14:53.438 12:42:51 -- common/autotest_common.sh@941 -- # uname 00:14:53.438 12:42:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:53.438 12:42:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1182065 00:14:53.438 12:42:51 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:14:53.438 12:42:51 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:14:53.438 12:42:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1182065' 00:14:53.438 killing process with pid 1182065 00:14:53.438 12:42:51 -- common/autotest_common.sh@955 -- # kill 1182065 00:14:53.438 Received shutdown signal, test time was about 10.000000 seconds 00:14:53.438 00:14:53.438 Latency(us) 00:14:53.438 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:53.438 =================================================================================================================== 00:14:53.438 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:53.438 [2024-04-16 12:42:51.162729] app.c: 937:log_deprecation_hits: *WARNING*: 
nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:53.438 12:42:51 -- common/autotest_common.sh@960 -- # wait 1182065 00:14:53.438 12:42:51 -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.X0Tnjdad5p 00:14:53.438 12:42:51 -- common/autotest_common.sh@638 -- # local es=0 00:14:53.438 12:42:51 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.X0Tnjdad5p 00:14:53.438 12:42:51 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:14:53.438 12:42:51 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:53.438 12:42:51 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:14:53.438 12:42:51 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:53.438 12:42:51 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.X0Tnjdad5p 00:14:53.438 12:42:51 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:53.438 12:42:51 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:53.438 12:42:51 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:53.438 12:42:51 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.X0Tnjdad5p' 00:14:53.438 12:42:51 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:53.438 12:42:51 -- target/tls.sh@28 -- # bdevperf_pid=1183379 00:14:53.438 12:42:51 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:53.438 12:42:51 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:53.438 12:42:51 -- target/tls.sh@31 -- # waitforlisten 1183379 /var/tmp/bdevperf.sock 00:14:53.438 12:42:51 -- common/autotest_common.sh@817 -- # '[' -z 1183379 ']' 00:14:53.438 12:42:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:53.438 12:42:51 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:53.438 12:42:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:53.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:53.438 12:42:51 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:53.438 12:42:51 -- common/autotest_common.sh@10 -- # set +x 00:14:53.438 [2024-04-16 12:42:51.470205] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 
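At this point tls.sh has moved on to the failure paths: four attach attempts, each expected to fail, each wrapped so that a nonzero exit is the passing result (the "NOT run_bdevperf ... return 1" pattern in the trace). In order: the second key /tmp/tmp.X0Tnjdad5p, which was never registered with the target (the case whose bdevperf is starting here); the registered key with an unknown hostnqn (host2); the registered key against a nonexistent subsystem (cnode2); and finally no PSK at all. Each case drives the same call that just succeeded, through the new bdevperf's private RPC socket (NQNs and socket path as in the trace; rpc.py is SPDK's scripts/rpc.py):

rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    --psk /tmp/tmp.X0Tnjdad5p    # unregistered key: the TLS handshake has no
                                 # matching PSK, so the attach must fail

For the bad-identity cases the target logs "Could not find PSK for identity"; the initiator then sees the socket close (errno 107, Transport endpoint is not connected) and bdev_nvme_attach_controller surfaces that as the JSON-RPC error responses shown below.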
00:14:53.438 [2024-04-16 12:42:51.470279] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1183379 ] 00:14:53.438 EAL: No free 2048 kB hugepages reported on node 1 00:14:53.438 [2024-04-16 12:42:51.538277] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:53.438 [2024-04-16 12:42:51.643636] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:53.438 12:42:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:53.438 12:42:51 -- common/autotest_common.sh@850 -- # return 0 00:14:53.438 12:42:51 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.X0Tnjdad5p 00:14:53.438 [2024-04-16 12:42:51.998239] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:53.438 [2024-04-16 12:42:51.998378] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:53.438 [2024-04-16 12:42:52.004310] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:53.438 [2024-04-16 12:42:52.004513] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2036000 (107): Transport endpoint is not connected 00:14:53.438 [2024-04-16 12:42:52.005501] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2036000 (9): Bad file descriptor 00:14:53.438 [2024-04-16 12:42:52.006500] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:14:53.438 [2024-04-16 12:42:52.006519] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:14:53.438 [2024-04-16 12:42:52.006547] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:14:53.438 request: 00:14:53.438 { 00:14:53.438 "name": "TLSTEST", 00:14:53.438 "trtype": "tcp", 00:14:53.438 "traddr": "10.0.0.2", 00:14:53.438 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:53.438 "adrfam": "ipv4", 00:14:53.438 "trsvcid": "4420", 00:14:53.438 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:53.438 "psk": "/tmp/tmp.X0Tnjdad5p", 00:14:53.438 "method": "bdev_nvme_attach_controller", 00:14:53.438 "req_id": 1 00:14:53.438 } 00:14:53.438 Got JSON-RPC error response 00:14:53.438 response: 00:14:53.438 { 00:14:53.438 "code": -32602, 00:14:53.438 "message": "Invalid parameters" 00:14:53.438 } 00:14:53.438 12:42:52 -- target/tls.sh@36 -- # killprocess 1183379 00:14:53.438 12:42:52 -- common/autotest_common.sh@936 -- # '[' -z 1183379 ']' 00:14:53.438 12:42:52 -- common/autotest_common.sh@940 -- # kill -0 1183379 00:14:53.438 12:42:52 -- common/autotest_common.sh@941 -- # uname 00:14:53.438 12:42:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:53.439 12:42:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1183379 00:14:53.439 12:42:52 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:14:53.439 12:42:52 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:14:53.439 12:42:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1183379' 00:14:53.439 killing process with pid 1183379 00:14:53.439 12:42:52 -- common/autotest_common.sh@955 -- # kill 1183379 00:14:53.439 Received shutdown signal, test time was about 10.000000 seconds 00:14:53.439 00:14:53.439 Latency(us) 00:14:53.439 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:53.439 =================================================================================================================== 00:14:53.439 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:53.439 [2024-04-16 12:42:52.058997] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:53.439 12:42:52 -- common/autotest_common.sh@960 -- # wait 1183379 00:14:53.439 12:42:52 -- target/tls.sh@37 -- # return 1 00:14:53.439 12:42:52 -- common/autotest_common.sh@641 -- # es=1 00:14:53.439 12:42:52 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:14:53.439 12:42:52 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:14:53.439 12:42:52 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:14:53.439 12:42:52 -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.T8CamQrp4N 00:14:53.439 12:42:52 -- common/autotest_common.sh@638 -- # local es=0 00:14:53.439 12:42:52 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.T8CamQrp4N 00:14:53.439 12:42:52 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:14:53.439 12:42:52 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:53.439 12:42:52 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:14:53.439 12:42:52 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:53.439 12:42:52 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.T8CamQrp4N 00:14:53.439 12:42:52 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:53.439 12:42:52 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:53.439 12:42:52 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 
00:14:53.439 12:42:52 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.T8CamQrp4N' 00:14:53.439 12:42:52 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:53.439 12:42:52 -- target/tls.sh@28 -- # bdevperf_pid=1183519 00:14:53.439 12:42:52 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:53.439 12:42:52 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:53.439 12:42:52 -- target/tls.sh@31 -- # waitforlisten 1183519 /var/tmp/bdevperf.sock 00:14:53.439 12:42:52 -- common/autotest_common.sh@817 -- # '[' -z 1183519 ']' 00:14:53.439 12:42:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:53.439 12:42:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:53.439 12:42:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:53.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:53.439 12:42:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:53.439 12:42:52 -- common/autotest_common.sh@10 -- # set +x 00:14:53.439 [2024-04-16 12:42:52.359803] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 00:14:53.439 [2024-04-16 12:42:52.359886] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1183519 ] 00:14:53.439 EAL: No free 2048 kB hugepages reported on node 1 00:14:53.439 [2024-04-16 12:42:52.429686] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:53.697 [2024-04-16 12:42:52.540705] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:53.697 12:42:52 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:53.697 12:42:52 -- common/autotest_common.sh@850 -- # return 0 00:14:53.697 12:42:52 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.T8CamQrp4N 00:14:53.955 [2024-04-16 12:42:52.871022] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:53.955 [2024-04-16 12:42:52.871142] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:53.955 [2024-04-16 12:42:52.876429] tcp.c: 878:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:14:53.955 [2024-04-16 12:42:52.876468] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:14:53.955 [2024-04-16 12:42:52.876528] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:53.955 [2024-04-16 12:42:52.877029] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x712000 (107): Transport endpoint is not connected 00:14:53.955 [2024-04-16 12:42:52.878006] 
nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x712000 (9): Bad file descriptor 00:14:53.955 [2024-04-16 12:42:52.879004] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:14:53.955 [2024-04-16 12:42:52.879024] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:14:53.955 [2024-04-16 12:42:52.879051] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:14:53.955 request: 00:14:53.955 { 00:14:53.955 "name": "TLSTEST", 00:14:53.955 "trtype": "tcp", 00:14:53.955 "traddr": "10.0.0.2", 00:14:53.955 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:14:53.955 "adrfam": "ipv4", 00:14:53.955 "trsvcid": "4420", 00:14:53.955 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:53.955 "psk": "/tmp/tmp.T8CamQrp4N", 00:14:53.955 "method": "bdev_nvme_attach_controller", 00:14:53.955 "req_id": 1 00:14:53.955 } 00:14:53.955 Got JSON-RPC error response 00:14:53.955 response: 00:14:53.955 { 00:14:53.955 "code": -32602, 00:14:53.955 "message": "Invalid parameters" 00:14:53.955 } 00:14:53.955 12:42:52 -- target/tls.sh@36 -- # killprocess 1183519 00:14:53.955 12:42:52 -- common/autotest_common.sh@936 -- # '[' -z 1183519 ']' 00:14:53.955 12:42:52 -- common/autotest_common.sh@940 -- # kill -0 1183519 00:14:53.955 12:42:52 -- common/autotest_common.sh@941 -- # uname 00:14:53.955 12:42:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:53.955 12:42:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1183519 00:14:53.955 12:42:52 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:14:53.955 12:42:52 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:14:53.955 12:42:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1183519' 00:14:53.955 killing process with pid 1183519 00:14:53.955 12:42:52 -- common/autotest_common.sh@955 -- # kill 1183519 00:14:53.955 Received shutdown signal, test time was about 10.000000 seconds 00:14:53.955 00:14:53.955 Latency(us) 00:14:53.955 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:53.955 =================================================================================================================== 00:14:53.955 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:53.955 [2024-04-16 12:42:52.931547] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:53.955 12:42:52 -- common/autotest_common.sh@960 -- # wait 1183519 00:14:54.213 12:42:53 -- target/tls.sh@37 -- # return 1 00:14:54.213 12:42:53 -- common/autotest_common.sh@641 -- # es=1 00:14:54.213 12:42:53 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:14:54.213 12:42:53 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:14:54.213 12:42:53 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:14:54.213 12:42:53 -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.T8CamQrp4N 00:14:54.213 12:42:53 -- common/autotest_common.sh@638 -- # local es=0 00:14:54.213 12:42:53 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.T8CamQrp4N 00:14:54.213 12:42:53 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:14:54.213 12:42:53 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:54.214 12:42:53 -- 
common/autotest_common.sh@630 -- # type -t run_bdevperf 00:14:54.214 12:42:53 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:54.214 12:42:53 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.T8CamQrp4N 00:14:54.214 12:42:53 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:54.214 12:42:53 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:14:54.214 12:42:53 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:54.214 12:42:53 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.T8CamQrp4N' 00:14:54.214 12:42:53 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:54.214 12:42:53 -- target/tls.sh@28 -- # bdevperf_pid=1183657 00:14:54.214 12:42:53 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:54.214 12:42:53 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:54.214 12:42:53 -- target/tls.sh@31 -- # waitforlisten 1183657 /var/tmp/bdevperf.sock 00:14:54.214 12:42:53 -- common/autotest_common.sh@817 -- # '[' -z 1183657 ']' 00:14:54.214 12:42:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:54.214 12:42:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:54.214 12:42:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:54.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:54.214 12:42:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:54.214 12:42:53 -- common/autotest_common.sh@10 -- # set +x 00:14:54.214 [2024-04-16 12:42:53.210939] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 
00:14:54.214 [2024-04-16 12:42:53.211012] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1183657 ] 00:14:54.214 EAL: No free 2048 kB hugepages reported on node 1 00:14:54.214 [2024-04-16 12:42:53.278034] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:54.472 [2024-04-16 12:42:53.383610] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:54.472 12:42:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:54.472 12:42:53 -- common/autotest_common.sh@850 -- # return 0 00:14:54.472 12:42:53 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.T8CamQrp4N 00:14:54.730 [2024-04-16 12:42:53.703760] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:54.730 [2024-04-16 12:42:53.703860] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:54.730 [2024-04-16 12:42:53.709347] tcp.c: 878:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:14:54.730 [2024-04-16 12:42:53.709378] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:14:54.730 [2024-04-16 12:42:53.709432] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:54.730 [2024-04-16 12:42:53.709944] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1120000 (107): Transport endpoint is not connected 00:14:54.730 [2024-04-16 12:42:53.710931] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1120000 (9): Bad file descriptor 00:14:54.730 [2024-04-16 12:42:53.711930] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:14:54.730 [2024-04-16 12:42:53.711949] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:14:54.730 [2024-04-16 12:42:53.711976] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:14:54.730 request: 00:14:54.730 { 00:14:54.730 "name": "TLSTEST", 00:14:54.730 "trtype": "tcp", 00:14:54.730 "traddr": "10.0.0.2", 00:14:54.730 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:54.730 "adrfam": "ipv4", 00:14:54.730 "trsvcid": "4420", 00:14:54.730 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:14:54.730 "psk": "/tmp/tmp.T8CamQrp4N", 00:14:54.730 "method": "bdev_nvme_attach_controller", 00:14:54.730 "req_id": 1 00:14:54.730 } 00:14:54.730 Got JSON-RPC error response 00:14:54.730 response: 00:14:54.730 { 00:14:54.730 "code": -32602, 00:14:54.730 "message": "Invalid parameters" 00:14:54.730 } 00:14:54.730 12:42:53 -- target/tls.sh@36 -- # killprocess 1183657 00:14:54.730 12:42:53 -- common/autotest_common.sh@936 -- # '[' -z 1183657 ']' 00:14:54.730 12:42:53 -- common/autotest_common.sh@940 -- # kill -0 1183657 00:14:54.730 12:42:53 -- common/autotest_common.sh@941 -- # uname 00:14:54.730 12:42:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:54.730 12:42:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1183657 00:14:54.730 12:42:53 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:14:54.730 12:42:53 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:14:54.730 12:42:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1183657' 00:14:54.730 killing process with pid 1183657 00:14:54.730 12:42:53 -- common/autotest_common.sh@955 -- # kill 1183657 00:14:54.730 Received shutdown signal, test time was about 10.000000 seconds 00:14:54.730 00:14:54.730 Latency(us) 00:14:54.730 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:54.730 =================================================================================================================== 00:14:54.730 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:54.730 [2024-04-16 12:42:53.764315] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:54.730 12:42:53 -- common/autotest_common.sh@960 -- # wait 1183657 00:14:54.988 12:42:54 -- target/tls.sh@37 -- # return 1 00:14:54.988 12:42:54 -- common/autotest_common.sh@641 -- # es=1 00:14:54.988 12:42:54 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:14:54.988 12:42:54 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:14:54.988 12:42:54 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:14:54.988 12:42:54 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:14:54.988 12:42:54 -- common/autotest_common.sh@638 -- # local es=0 00:14:54.988 12:42:54 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:14:54.988 12:42:54 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:14:54.988 12:42:54 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:54.988 12:42:54 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:14:54.988 12:42:54 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:54.988 12:42:54 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:14:54.988 12:42:54 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:54.988 12:42:54 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:54.988 12:42:54 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:54.988 12:42:54 -- target/tls.sh@23 -- # psk= 
00:14:54.988 12:42:54 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:54.988 12:42:54 -- target/tls.sh@28 -- # bdevperf_pid=1183678 00:14:54.988 12:42:54 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:54.988 12:42:54 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:54.988 12:42:54 -- target/tls.sh@31 -- # waitforlisten 1183678 /var/tmp/bdevperf.sock 00:14:54.988 12:42:54 -- common/autotest_common.sh@817 -- # '[' -z 1183678 ']' 00:14:54.988 12:42:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:54.988 12:42:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:54.988 12:42:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:54.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:54.988 12:42:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:54.988 12:42:54 -- common/autotest_common.sh@10 -- # set +x 00:14:55.246 [2024-04-16 12:42:54.069727] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 00:14:55.246 [2024-04-16 12:42:54.069803] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1183678 ] 00:14:55.246 EAL: No free 2048 kB hugepages reported on node 1 00:14:55.246 [2024-04-16 12:42:54.140468] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:55.247 [2024-04-16 12:42:54.245309] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:55.504 12:42:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:55.504 12:42:54 -- common/autotest_common.sh@850 -- # return 0 00:14:55.504 12:42:54 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:14:55.762 [2024-04-16 12:42:54.581652] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:55.762 [2024-04-16 12:42:54.583509] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16329b0 (9): Bad file descriptor 00:14:55.762 [2024-04-16 12:42:54.584505] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:14:55.762 [2024-04-16 12:42:54.584524] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:14:55.762 [2024-04-16 12:42:54.584559] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:14:55.762 request: 00:14:55.762 { 00:14:55.762 "name": "TLSTEST", 00:14:55.762 "trtype": "tcp", 00:14:55.763 "traddr": "10.0.0.2", 00:14:55.763 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:55.763 "adrfam": "ipv4", 00:14:55.763 "trsvcid": "4420", 00:14:55.763 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:55.763 "method": "bdev_nvme_attach_controller", 00:14:55.763 "req_id": 1 00:14:55.763 } 00:14:55.763 Got JSON-RPC error response 00:14:55.763 response: 00:14:55.763 { 00:14:55.763 "code": -32602, 00:14:55.763 "message": "Invalid parameters" 00:14:55.763 } 00:14:55.763 12:42:54 -- target/tls.sh@36 -- # killprocess 1183678 00:14:55.763 12:42:54 -- common/autotest_common.sh@936 -- # '[' -z 1183678 ']' 00:14:55.763 12:42:54 -- common/autotest_common.sh@940 -- # kill -0 1183678 00:14:55.763 12:42:54 -- common/autotest_common.sh@941 -- # uname 00:14:55.763 12:42:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:55.763 12:42:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1183678 00:14:55.763 12:42:54 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:14:55.763 12:42:54 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:14:55.763 12:42:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1183678' 00:14:55.763 killing process with pid 1183678 00:14:55.763 12:42:54 -- common/autotest_common.sh@955 -- # kill 1183678 00:14:55.763 Received shutdown signal, test time was about 10.000000 seconds 00:14:55.763 00:14:55.763 Latency(us) 00:14:55.763 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:55.763 =================================================================================================================== 00:14:55.763 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:55.763 12:42:54 -- common/autotest_common.sh@960 -- # wait 1183678 00:14:56.021 12:42:54 -- target/tls.sh@37 -- # return 1 00:14:56.021 12:42:54 -- common/autotest_common.sh@641 -- # es=1 00:14:56.021 12:42:54 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:14:56.021 12:42:54 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:14:56.021 12:42:54 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:14:56.021 12:42:54 -- target/tls.sh@158 -- # killprocess 1180288 00:14:56.021 12:42:54 -- common/autotest_common.sh@936 -- # '[' -z 1180288 ']' 00:14:56.021 12:42:54 -- common/autotest_common.sh@940 -- # kill -0 1180288 00:14:56.021 12:42:54 -- common/autotest_common.sh@941 -- # uname 00:14:56.021 12:42:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:56.021 12:42:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1180288 00:14:56.021 12:42:54 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:56.021 12:42:54 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:56.021 12:42:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1180288' 00:14:56.021 killing process with pid 1180288 00:14:56.021 12:42:54 -- common/autotest_common.sh@955 -- # kill 1180288 00:14:56.021 [2024-04-16 12:42:54.891259] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:14:56.021 12:42:54 -- common/autotest_common.sh@960 -- # wait 1180288 00:14:56.281 12:42:55 -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:14:56.281 12:42:55 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 
00112233445566778899aabbccddeeff0011223344556677 2 00:14:56.281 12:42:55 -- nvmf/common.sh@691 -- # local prefix key digest 00:14:56.281 12:42:55 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:14:56.281 12:42:55 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:14:56.281 12:42:55 -- nvmf/common.sh@693 -- # digest=2 00:14:56.281 12:42:55 -- nvmf/common.sh@694 -- # python - 00:14:56.281 12:42:55 -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:14:56.281 12:42:55 -- target/tls.sh@160 -- # mktemp 00:14:56.281 12:42:55 -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.jAm0VxVmIz 00:14:56.281 12:42:55 -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:14:56.281 12:42:55 -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.jAm0VxVmIz 00:14:56.281 12:42:55 -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:14:56.281 12:42:55 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:56.281 12:42:55 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:56.281 12:42:55 -- common/autotest_common.sh@10 -- # set +x 00:14:56.281 12:42:55 -- nvmf/common.sh@470 -- # nvmfpid=1183849 00:14:56.281 12:42:55 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:56.281 12:42:55 -- nvmf/common.sh@471 -- # waitforlisten 1183849 00:14:56.281 12:42:55 -- common/autotest_common.sh@817 -- # '[' -z 1183849 ']' 00:14:56.281 12:42:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:56.281 12:42:55 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:56.281 12:42:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:56.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:56.281 12:42:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:56.281 12:42:55 -- common/autotest_common.sh@10 -- # set +x 00:14:56.281 [2024-04-16 12:42:55.273931] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 00:14:56.281 [2024-04-16 12:42:55.274024] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:56.281 EAL: No free 2048 kB hugepages reported on node 1 00:14:56.281 [2024-04-16 12:42:55.347474] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:56.543 [2024-04-16 12:42:55.454774] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:56.543 [2024-04-16 12:42:55.454842] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:56.543 [2024-04-16 12:42:55.454856] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:56.543 [2024-04-16 12:42:55.454868] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:56.543 [2024-04-16 12:42:55.454877] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
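The format_interchange_psk / format_key step at the top of this block is where key_long comes from: the "python -" heredoc assembles an NVMe TLS PSK interchange string consisting of the NVMeTLSkey-1 prefix, a two-digit hash identifier (02 here, the 48-byte/SHA-384 flavour), and base64 of the configured key bytes followed by their little-endian CRC-32. The heredoc body itself is not visible in this log, so the following is a reconstruction under those assumptions that reproduces the logged value:

```python
import base64
import struct
import zlib

def format_interchange_psk(key: bytes, digest: int) -> str:
    # NVMeTLSkey-1:<hh>:base64(key || CRC-32(key)):  (CRC packed little-endian)
    crc = struct.pack("<I", zlib.crc32(key) & 0xFFFFFFFF)
    b64 = base64.b64encode(key + crc).decode()
    return "NVMeTLSkey-1:{:02x}:{}:".format(digest, b64)

# The 48 hex characters are used as literal ASCII bytes, i.e. a 48-byte key:
print(format_interchange_psk(
    b"00112233445566778899aabbccddeeff0011223344556677", 2))
# -> NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:
```

The chmod 0600 that follows is not cosmetic: as the later negative tests show, both the target and the host refuse a PSK file that group or other can read.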
00:14:56.543 [2024-04-16 12:42:55.454905] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:56.543 12:42:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:56.543 12:42:55 -- common/autotest_common.sh@850 -- # return 0 00:14:56.543 12:42:55 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:56.543 12:42:55 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:56.543 12:42:55 -- common/autotest_common.sh@10 -- # set +x 00:14:56.543 12:42:55 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:56.543 12:42:55 -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.jAm0VxVmIz 00:14:56.543 12:42:55 -- target/tls.sh@49 -- # local key=/tmp/tmp.jAm0VxVmIz 00:14:56.543 12:42:55 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:56.799 [2024-04-16 12:42:55.813611] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:56.799 12:42:55 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:57.055 12:42:56 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:14:57.312 [2024-04-16 12:42:56.286861] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:57.312 [2024-04-16 12:42:56.287144] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:57.312 12:42:56 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:57.570 malloc0 00:14:57.570 12:42:56 -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:57.828 12:42:56 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.jAm0VxVmIz 00:14:58.086 [2024-04-16 12:42:57.036956] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:14:58.086 12:42:57 -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.jAm0VxVmIz 00:14:58.086 12:42:57 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:58.086 12:42:57 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:58.086 12:42:57 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:58.086 12:42:57 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.jAm0VxVmIz' 00:14:58.086 12:42:57 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:58.086 12:42:57 -- target/tls.sh@28 -- # bdevperf_pid=1184113 00:14:58.086 12:42:57 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:58.086 12:42:57 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:58.086 12:42:57 -- target/tls.sh@31 -- # waitforlisten 1184113 /var/tmp/bdevperf.sock 00:14:58.086 12:42:57 -- common/autotest_common.sh@817 -- # '[' -z 1184113 ']' 00:14:58.086 12:42:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:58.086 12:42:57 -- 
common/autotest_common.sh@822 -- # local max_retries=100 00:14:58.086 12:42:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:58.086 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:58.086 12:42:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:58.086 12:42:57 -- common/autotest_common.sh@10 -- # set +x 00:14:58.086 [2024-04-16 12:42:57.101009] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 00:14:58.086 [2024-04-16 12:42:57.101078] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1184113 ] 00:14:58.086 EAL: No free 2048 kB hugepages reported on node 1 00:14:58.344 [2024-04-16 12:42:57.167874] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:58.344 [2024-04-16 12:42:57.270071] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:58.344 12:42:57 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:58.344 12:42:57 -- common/autotest_common.sh@850 -- # return 0 00:14:58.344 12:42:57 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.jAm0VxVmIz 00:14:58.602 [2024-04-16 12:42:57.624906] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:58.602 [2024-04-16 12:42:57.625036] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:58.860 TLSTESTn1 00:14:58.860 12:42:57 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:14:58.860 Running I/O for 10 seconds... 
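Because bdevperf was started with -z, it idles after the controller attach until told to run; the bdevperf.py invocation above is what kicks off the 10-second verify workload, waiting up to 20 s (-t 20) for the result on the same RPC socket. The equivalent call, with the command line taken verbatim from the log (only the working directory is abbreviated):

```python
import subprocess

subprocess.run(
    ["./examples/bdev/bdevperf/bdevperf.py",
     "-t", "20",                      # allow up to 20 s for the run to report
     "-s", "/var/tmp/bdevperf.sock",  # the socket bdevperf was given via -r
     "perform_tests"],
    check=True,
    cwd="spdk")  # abbreviated; the log uses the absolute workspace path
```

In the results table that follows, the MiB/s column is simply IOPS times the 4 KiB I/O size: 3420.90 × 4096 / 2^20 ≈ 13.36 MiB/s.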
00:15:08.825 00:15:08.825 Latency(us) 00:15:08.825 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:08.825 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:08.825 Verification LBA range: start 0x0 length 0x2000 00:15:08.825 TLSTESTn1 : 10.03 3420.90 13.36 0.00 0.00 37336.26 9223.59 98643.82 00:15:08.825 =================================================================================================================== 00:15:08.825 Total : 3420.90 13.36 0.00 0.00 37336.26 9223.59 98643.82 00:15:08.825 0 00:15:08.825 12:43:07 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:08.825 12:43:07 -- target/tls.sh@45 -- # killprocess 1184113 00:15:08.825 12:43:07 -- common/autotest_common.sh@936 -- # '[' -z 1184113 ']' 00:15:08.825 12:43:07 -- common/autotest_common.sh@940 -- # kill -0 1184113 00:15:08.825 12:43:07 -- common/autotest_common.sh@941 -- # uname 00:15:09.084 12:43:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:09.084 12:43:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1184113 00:15:09.084 12:43:07 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:15:09.084 12:43:07 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:15:09.084 12:43:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1184113' 00:15:09.084 killing process with pid 1184113 00:15:09.084 12:43:07 -- common/autotest_common.sh@955 -- # kill 1184113 00:15:09.084 Received shutdown signal, test time was about 10.000000 seconds 00:15:09.084 00:15:09.084 Latency(us) 00:15:09.084 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:09.084 =================================================================================================================== 00:15:09.084 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:09.084 [2024-04-16 12:43:07.923715] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:15:09.084 12:43:07 -- common/autotest_common.sh@960 -- # wait 1184113 00:15:09.342 12:43:08 -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.jAm0VxVmIz 00:15:09.342 12:43:08 -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.jAm0VxVmIz 00:15:09.342 12:43:08 -- common/autotest_common.sh@638 -- # local es=0 00:15:09.342 12:43:08 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.jAm0VxVmIz 00:15:09.342 12:43:08 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:15:09.342 12:43:08 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:09.342 12:43:08 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:15:09.342 12:43:08 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:09.342 12:43:08 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.jAm0VxVmIz 00:15:09.342 12:43:08 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:09.342 12:43:08 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:09.342 12:43:08 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:15:09.342 12:43:08 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.jAm0VxVmIz' 00:15:09.342 12:43:08 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:09.342 12:43:08 -- target/tls.sh@28 -- # 
bdevperf_pid=1185431 00:15:09.342 12:43:08 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:09.342 12:43:08 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:09.342 12:43:08 -- target/tls.sh@31 -- # waitforlisten 1185431 /var/tmp/bdevperf.sock 00:15:09.342 12:43:08 -- common/autotest_common.sh@817 -- # '[' -z 1185431 ']' 00:15:09.342 12:43:08 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:09.342 12:43:08 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:09.342 12:43:08 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:09.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:09.342 12:43:08 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:09.342 12:43:08 -- common/autotest_common.sh@10 -- # set +x 00:15:09.342 [2024-04-16 12:43:08.240044] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 00:15:09.342 [2024-04-16 12:43:08.240133] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1185431 ] 00:15:09.342 EAL: No free 2048 kB hugepages reported on node 1 00:15:09.342 [2024-04-16 12:43:08.307908] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:09.342 [2024-04-16 12:43:08.408420] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:09.601 12:43:08 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:09.601 12:43:08 -- common/autotest_common.sh@850 -- # return 0 00:15:09.601 12:43:08 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.jAm0VxVmIz 00:15:09.860 [2024-04-16 12:43:08.754292] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:09.860 [2024-04-16 12:43:08.754367] bdev_nvme.c:6054:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:15:09.860 [2024-04-16 12:43:08.754382] bdev_nvme.c:6163:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.jAm0VxVmIz 00:15:09.860 request: 00:15:09.860 { 00:15:09.860 "name": "TLSTEST", 00:15:09.860 "trtype": "tcp", 00:15:09.860 "traddr": "10.0.0.2", 00:15:09.860 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:09.860 "adrfam": "ipv4", 00:15:09.860 "trsvcid": "4420", 00:15:09.860 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:09.860 "psk": "/tmp/tmp.jAm0VxVmIz", 00:15:09.860 "method": "bdev_nvme_attach_controller", 00:15:09.860 "req_id": 1 00:15:09.860 } 00:15:09.860 Got JSON-RPC error response 00:15:09.860 response: 00:15:09.860 { 00:15:09.860 "code": -1, 00:15:09.860 "message": "Operation not permitted" 00:15:09.860 } 00:15:09.860 12:43:08 -- target/tls.sh@36 -- # killprocess 1185431 00:15:09.860 12:43:08 -- common/autotest_common.sh@936 -- # '[' -z 1185431 ']' 00:15:09.860 12:43:08 -- common/autotest_common.sh@940 -- # kill -0 1185431 00:15:09.860 12:43:08 -- common/autotest_common.sh@941 -- # uname 00:15:09.860 12:43:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:09.860 
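The chmod 0666 test above fails by design: before parsing a key, the host side stats the PSK file and rejects any group/other access bits, which is why the error reads "Incorrect permissions for PSK file" and the RPC surfaces as -1 / "Operation not permitted". A sketch of that gate, assuming the owner-only policy implied by the 0600/0666 pair used in this log:

```python
import os
import stat

def load_psk(path: str) -> bytes:
    st = os.stat(path)
    # Reject group/other access bits, mirroring the 0600-only policy the
    # log demonstrates (0666 -> "Incorrect permissions for PSK file").
    if st.st_mode & (stat.S_IRWXG | stat.S_IRWXO):
        raise PermissionError(f"Incorrect permissions for PSK file {path}")
    with open(path, "rb") as f:
        return f.read()
```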
12:43:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1185431 00:15:09.860 12:43:08 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:15:09.860 12:43:08 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:15:09.860 12:43:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1185431' 00:15:09.860 killing process with pid 1185431 00:15:09.860 12:43:08 -- common/autotest_common.sh@955 -- # kill 1185431 00:15:09.860 Received shutdown signal, test time was about 10.000000 seconds 00:15:09.860 00:15:09.860 Latency(us) 00:15:09.860 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:09.860 =================================================================================================================== 00:15:09.860 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:09.860 12:43:08 -- common/autotest_common.sh@960 -- # wait 1185431 00:15:10.119 12:43:09 -- target/tls.sh@37 -- # return 1 00:15:10.119 12:43:09 -- common/autotest_common.sh@641 -- # es=1 00:15:10.119 12:43:09 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:15:10.119 12:43:09 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:15:10.119 12:43:09 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:15:10.119 12:43:09 -- target/tls.sh@174 -- # killprocess 1183849 00:15:10.119 12:43:09 -- common/autotest_common.sh@936 -- # '[' -z 1183849 ']' 00:15:10.119 12:43:09 -- common/autotest_common.sh@940 -- # kill -0 1183849 00:15:10.119 12:43:09 -- common/autotest_common.sh@941 -- # uname 00:15:10.119 12:43:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:10.119 12:43:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1183849 00:15:10.119 12:43:09 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:10.119 12:43:09 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:10.119 12:43:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1183849' 00:15:10.119 killing process with pid 1183849 00:15:10.119 12:43:09 -- common/autotest_common.sh@955 -- # kill 1183849 00:15:10.119 [2024-04-16 12:43:09.066464] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:15:10.119 12:43:09 -- common/autotest_common.sh@960 -- # wait 1183849 00:15:10.378 12:43:09 -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:15:10.378 12:43:09 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:10.378 12:43:09 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:10.378 12:43:09 -- common/autotest_common.sh@10 -- # set +x 00:15:10.378 12:43:09 -- nvmf/common.sh@470 -- # nvmfpid=1185577 00:15:10.378 12:43:09 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:10.378 12:43:09 -- nvmf/common.sh@471 -- # waitforlisten 1185577 00:15:10.378 12:43:09 -- common/autotest_common.sh@817 -- # '[' -z 1185577 ']' 00:15:10.378 12:43:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:10.378 12:43:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:10.378 12:43:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:10.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
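With a fresh target up (pid 1185577), the script now reruns setup_nvmf_tgt expecting it to fail, since the key file is still mode 0666 at this point. setup_nvmf_tgt itself (target/tls.sh@49-58, traced piecemeal throughout this log) is six rpc.py calls; a condensed replay of them, with the script path abbreviated (the log uses the absolute workspace path):

```python
import subprocess

RPC = "scripts/rpc.py"  # abbreviated; the log invokes it by absolute path

def setup_nvmf_tgt(key_path: str) -> None:
    """Replay of the setup_nvmf_tgt steps this log executes via rpc.py."""
    steps = [
        [RPC, "nvmf_create_transport", "-t", "tcp", "-o"],
        [RPC, "nvmf_create_subsystem", "nqn.2016-06.io.spdk:cnode1",
         "-s", "SPDK00000000000001", "-m", "10"],
        # -k marks the listener as TLS-capable ("TLS support is considered
        # experimental" in the notices above)
        [RPC, "nvmf_subsystem_add_listener", "nqn.2016-06.io.spdk:cnode1",
         "-t", "tcp", "-a", "10.0.0.2", "-s", "4420", "-k"],
        [RPC, "bdev_malloc_create", "32", "4096", "-b", "malloc0"],
        [RPC, "nvmf_subsystem_add_ns", "nqn.2016-06.io.spdk:cnode1",
         "malloc0", "-n", "1"],
        [RPC, "nvmf_subsystem_add_host", "nqn.2016-06.io.spdk:cnode1",
         "nqn.2016-06.io.spdk:host1", "--psk", key_path],
    ]
    for cmd in steps:
        subprocess.run(cmd, check=True)  # any failing step aborts the setup
```

Only the final nvmf_subsystem_add_host step trips over the permissions (tcp_load_psk: Incorrect permissions for PSK file), returning -32603 Internal error, which is exactly the failure the NOT wrapper is waiting for.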
00:15:10.378 12:43:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:10.378 12:43:09 -- common/autotest_common.sh@10 -- # set +x 00:15:10.378 [2024-04-16 12:43:09.421183] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 00:15:10.378 [2024-04-16 12:43:09.421277] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:10.637 EAL: No free 2048 kB hugepages reported on node 1 00:15:10.637 [2024-04-16 12:43:09.501029] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:10.637 [2024-04-16 12:43:09.617430] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:10.637 [2024-04-16 12:43:09.617489] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:10.637 [2024-04-16 12:43:09.617505] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:10.637 [2024-04-16 12:43:09.617520] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:10.637 [2024-04-16 12:43:09.617532] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:10.637 [2024-04-16 12:43:09.617571] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:11.577 12:43:10 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:11.577 12:43:10 -- common/autotest_common.sh@850 -- # return 0 00:15:11.577 12:43:10 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:11.577 12:43:10 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:11.577 12:43:10 -- common/autotest_common.sh@10 -- # set +x 00:15:11.577 12:43:10 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:11.577 12:43:10 -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.jAm0VxVmIz 00:15:11.577 12:43:10 -- common/autotest_common.sh@638 -- # local es=0 00:15:11.577 12:43:10 -- common/autotest_common.sh@640 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.jAm0VxVmIz 00:15:11.577 12:43:10 -- common/autotest_common.sh@626 -- # local arg=setup_nvmf_tgt 00:15:11.577 12:43:10 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:11.577 12:43:10 -- common/autotest_common.sh@630 -- # type -t setup_nvmf_tgt 00:15:11.577 12:43:10 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:11.577 12:43:10 -- common/autotest_common.sh@641 -- # setup_nvmf_tgt /tmp/tmp.jAm0VxVmIz 00:15:11.577 12:43:10 -- target/tls.sh@49 -- # local key=/tmp/tmp.jAm0VxVmIz 00:15:11.577 12:43:10 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:11.577 [2024-04-16 12:43:10.574380] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:11.577 12:43:10 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:11.835 12:43:10 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:15:12.093 [2024-04-16 12:43:11.071706] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:12.093 [2024-04-16 12:43:11.071962] tcp.c: 
964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:12.093 12:43:11 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:12.351 malloc0 00:15:12.351 12:43:11 -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:12.609 12:43:11 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.jAm0VxVmIz 00:15:12.867 [2024-04-16 12:43:11.801583] tcp.c:3562:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:15:12.867 [2024-04-16 12:43:11.801640] tcp.c:3648:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:15:12.867 [2024-04-16 12:43:11.801678] subsystem.c: 967:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:15:12.867 request: 00:15:12.867 { 00:15:12.867 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:12.867 "host": "nqn.2016-06.io.spdk:host1", 00:15:12.867 "psk": "/tmp/tmp.jAm0VxVmIz", 00:15:12.867 "method": "nvmf_subsystem_add_host", 00:15:12.867 "req_id": 1 00:15:12.867 } 00:15:12.867 Got JSON-RPC error response 00:15:12.867 response: 00:15:12.867 { 00:15:12.867 "code": -32603, 00:15:12.867 "message": "Internal error" 00:15:12.867 } 00:15:12.867 12:43:11 -- common/autotest_common.sh@641 -- # es=1 00:15:12.867 12:43:11 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:15:12.867 12:43:11 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:15:12.867 12:43:11 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:15:12.867 12:43:11 -- target/tls.sh@180 -- # killprocess 1185577 00:15:12.867 12:43:11 -- common/autotest_common.sh@936 -- # '[' -z 1185577 ']' 00:15:12.867 12:43:11 -- common/autotest_common.sh@940 -- # kill -0 1185577 00:15:12.867 12:43:11 -- common/autotest_common.sh@941 -- # uname 00:15:12.867 12:43:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:12.867 12:43:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1185577 00:15:12.867 12:43:11 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:12.867 12:43:11 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:12.867 12:43:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1185577' 00:15:12.867 killing process with pid 1185577 00:15:12.867 12:43:11 -- common/autotest_common.sh@955 -- # kill 1185577 00:15:12.867 12:43:11 -- common/autotest_common.sh@960 -- # wait 1185577 00:15:13.126 12:43:12 -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.jAm0VxVmIz 00:15:13.126 12:43:12 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:15:13.126 12:43:12 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:13.126 12:43:12 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:13.126 12:43:12 -- common/autotest_common.sh@10 -- # set +x 00:15:13.126 12:43:12 -- nvmf/common.sh@470 -- # nvmfpid=1186003 00:15:13.126 12:43:12 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:13.126 12:43:12 -- nvmf/common.sh@471 -- # waitforlisten 1186003 00:15:13.126 12:43:12 -- common/autotest_common.sh@817 -- # '[' -z 1186003 ']' 00:15:13.126 12:43:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:13.126 12:43:12 -- 
common/autotest_common.sh@822 -- # local max_retries=100 00:15:13.126 12:43:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:13.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:13.126 12:43:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:13.126 12:43:12 -- common/autotest_common.sh@10 -- # set +x 00:15:13.126 [2024-04-16 12:43:12.182772] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 00:15:13.126 [2024-04-16 12:43:12.182862] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:13.384 EAL: No free 2048 kB hugepages reported on node 1 00:15:13.384 [2024-04-16 12:43:12.262342] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:13.384 [2024-04-16 12:43:12.373980] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:13.384 [2024-04-16 12:43:12.374064] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:13.384 [2024-04-16 12:43:12.374081] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:13.384 [2024-04-16 12:43:12.374096] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:13.384 [2024-04-16 12:43:12.374107] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:13.384 [2024-04-16 12:43:12.374152] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:14.318 12:43:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:14.318 12:43:13 -- common/autotest_common.sh@850 -- # return 0 00:15:14.318 12:43:13 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:14.318 12:43:13 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:14.318 12:43:13 -- common/autotest_common.sh@10 -- # set +x 00:15:14.318 12:43:13 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:14.318 12:43:13 -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.jAm0VxVmIz 00:15:14.318 12:43:13 -- target/tls.sh@49 -- # local key=/tmp/tmp.jAm0VxVmIz 00:15:14.318 12:43:13 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:14.318 [2024-04-16 12:43:13.358241] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:14.318 12:43:13 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:14.576 12:43:13 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:15:14.834 [2024-04-16 12:43:13.831455] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:14.834 [2024-04-16 12:43:13.831714] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:14.834 12:43:13 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:15.092 malloc0 00:15:15.092 12:43:14 -- target/tls.sh@56 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:15.382 12:43:14 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.jAm0VxVmIz 00:15:15.641 [2024-04-16 12:43:14.565195] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:15:15.641 12:43:14 -- target/tls.sh@188 -- # bdevperf_pid=1186289 00:15:15.641 12:43:14 -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:15.641 12:43:14 -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:15.641 12:43:14 -- target/tls.sh@191 -- # waitforlisten 1186289 /var/tmp/bdevperf.sock 00:15:15.641 12:43:14 -- common/autotest_common.sh@817 -- # '[' -z 1186289 ']' 00:15:15.641 12:43:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:15.641 12:43:14 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:15.641 12:43:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:15.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:15.641 12:43:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:15.641 12:43:14 -- common/autotest_common.sh@10 -- # set +x 00:15:15.641 [2024-04-16 12:43:14.625187] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 00:15:15.641 [2024-04-16 12:43:14.625274] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1186289 ] 00:15:15.641 EAL: No free 2048 kB hugepages reported on node 1 00:15:15.641 [2024-04-16 12:43:14.696378] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:15.899 [2024-04-16 12:43:14.802362] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:15.899 12:43:14 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:15.899 12:43:14 -- common/autotest_common.sh@850 -- # return 0 00:15:15.899 12:43:14 -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.jAm0VxVmIz 00:15:16.157 [2024-04-16 12:43:15.138150] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:16.157 [2024-04-16 12:43:15.138263] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:15:16.157 TLSTESTn1 00:15:16.157 12:43:15 -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:15:16.724 12:43:15 -- target/tls.sh@196 -- # tgtconf='{ 00:15:16.724 "subsystems": [ 00:15:16.724 { 00:15:16.724 "subsystem": "keyring", 00:15:16.724 "config": [] 00:15:16.724 }, 00:15:16.724 { 00:15:16.724 "subsystem": "iobuf", 00:15:16.724 "config": [ 00:15:16.724 { 00:15:16.724 "method": "iobuf_set_options", 00:15:16.724 "params": { 00:15:16.724 
"small_pool_count": 8192, 00:15:16.724 "large_pool_count": 1024, 00:15:16.724 "small_bufsize": 8192, 00:15:16.724 "large_bufsize": 135168 00:15:16.724 } 00:15:16.724 } 00:15:16.724 ] 00:15:16.724 }, 00:15:16.724 { 00:15:16.724 "subsystem": "sock", 00:15:16.724 "config": [ 00:15:16.724 { 00:15:16.724 "method": "sock_impl_set_options", 00:15:16.724 "params": { 00:15:16.724 "impl_name": "posix", 00:15:16.724 "recv_buf_size": 2097152, 00:15:16.724 "send_buf_size": 2097152, 00:15:16.724 "enable_recv_pipe": true, 00:15:16.724 "enable_quickack": false, 00:15:16.724 "enable_placement_id": 0, 00:15:16.724 "enable_zerocopy_send_server": true, 00:15:16.724 "enable_zerocopy_send_client": false, 00:15:16.724 "zerocopy_threshold": 0, 00:15:16.724 "tls_version": 0, 00:15:16.724 "enable_ktls": false 00:15:16.724 } 00:15:16.724 }, 00:15:16.724 { 00:15:16.724 "method": "sock_impl_set_options", 00:15:16.724 "params": { 00:15:16.724 "impl_name": "ssl", 00:15:16.724 "recv_buf_size": 4096, 00:15:16.724 "send_buf_size": 4096, 00:15:16.724 "enable_recv_pipe": true, 00:15:16.724 "enable_quickack": false, 00:15:16.724 "enable_placement_id": 0, 00:15:16.724 "enable_zerocopy_send_server": true, 00:15:16.724 "enable_zerocopy_send_client": false, 00:15:16.724 "zerocopy_threshold": 0, 00:15:16.724 "tls_version": 0, 00:15:16.724 "enable_ktls": false 00:15:16.724 } 00:15:16.724 } 00:15:16.724 ] 00:15:16.724 }, 00:15:16.724 { 00:15:16.724 "subsystem": "vmd", 00:15:16.724 "config": [] 00:15:16.724 }, 00:15:16.724 { 00:15:16.724 "subsystem": "accel", 00:15:16.724 "config": [ 00:15:16.724 { 00:15:16.724 "method": "accel_set_options", 00:15:16.724 "params": { 00:15:16.724 "small_cache_size": 128, 00:15:16.724 "large_cache_size": 16, 00:15:16.724 "task_count": 2048, 00:15:16.724 "sequence_count": 2048, 00:15:16.724 "buf_count": 2048 00:15:16.724 } 00:15:16.724 } 00:15:16.724 ] 00:15:16.724 }, 00:15:16.724 { 00:15:16.724 "subsystem": "bdev", 00:15:16.724 "config": [ 00:15:16.724 { 00:15:16.724 "method": "bdev_set_options", 00:15:16.724 "params": { 00:15:16.724 "bdev_io_pool_size": 65535, 00:15:16.724 "bdev_io_cache_size": 256, 00:15:16.725 "bdev_auto_examine": true, 00:15:16.725 "iobuf_small_cache_size": 128, 00:15:16.725 "iobuf_large_cache_size": 16 00:15:16.725 } 00:15:16.725 }, 00:15:16.725 { 00:15:16.725 "method": "bdev_raid_set_options", 00:15:16.725 "params": { 00:15:16.725 "process_window_size_kb": 1024 00:15:16.725 } 00:15:16.725 }, 00:15:16.725 { 00:15:16.725 "method": "bdev_iscsi_set_options", 00:15:16.725 "params": { 00:15:16.725 "timeout_sec": 30 00:15:16.725 } 00:15:16.725 }, 00:15:16.725 { 00:15:16.725 "method": "bdev_nvme_set_options", 00:15:16.725 "params": { 00:15:16.725 "action_on_timeout": "none", 00:15:16.725 "timeout_us": 0, 00:15:16.725 "timeout_admin_us": 0, 00:15:16.725 "keep_alive_timeout_ms": 10000, 00:15:16.725 "arbitration_burst": 0, 00:15:16.725 "low_priority_weight": 0, 00:15:16.725 "medium_priority_weight": 0, 00:15:16.725 "high_priority_weight": 0, 00:15:16.725 "nvme_adminq_poll_period_us": 10000, 00:15:16.725 "nvme_ioq_poll_period_us": 0, 00:15:16.725 "io_queue_requests": 0, 00:15:16.725 "delay_cmd_submit": true, 00:15:16.725 "transport_retry_count": 4, 00:15:16.725 "bdev_retry_count": 3, 00:15:16.725 "transport_ack_timeout": 0, 00:15:16.725 "ctrlr_loss_timeout_sec": 0, 00:15:16.725 "reconnect_delay_sec": 0, 00:15:16.725 "fast_io_fail_timeout_sec": 0, 00:15:16.725 "disable_auto_failback": false, 00:15:16.725 "generate_uuids": false, 00:15:16.725 "transport_tos": 0, 00:15:16.725 "nvme_error_stat": 
false, 00:15:16.725 "rdma_srq_size": 0, 00:15:16.725 "io_path_stat": false, 00:15:16.725 "allow_accel_sequence": false, 00:15:16.725 "rdma_max_cq_size": 0, 00:15:16.725 "rdma_cm_event_timeout_ms": 0, 00:15:16.725 "dhchap_digests": [ 00:15:16.725 "sha256", 00:15:16.725 "sha384", 00:15:16.725 "sha512" 00:15:16.725 ], 00:15:16.725 "dhchap_dhgroups": [ 00:15:16.725 "null", 00:15:16.725 "ffdhe2048", 00:15:16.725 "ffdhe3072", 00:15:16.725 "ffdhe4096", 00:15:16.725 "ffdhe6144", 00:15:16.725 "ffdhe8192" 00:15:16.725 ] 00:15:16.725 } 00:15:16.725 }, 00:15:16.725 { 00:15:16.725 "method": "bdev_nvme_set_hotplug", 00:15:16.725 "params": { 00:15:16.725 "period_us": 100000, 00:15:16.725 "enable": false 00:15:16.725 } 00:15:16.725 }, 00:15:16.725 { 00:15:16.725 "method": "bdev_malloc_create", 00:15:16.725 "params": { 00:15:16.725 "name": "malloc0", 00:15:16.725 "num_blocks": 8192, 00:15:16.725 "block_size": 4096, 00:15:16.725 "physical_block_size": 4096, 00:15:16.725 "uuid": "409c4a6a-eebd-45ef-813f-9b341c2a207d", 00:15:16.725 "optimal_io_boundary": 0 00:15:16.725 } 00:15:16.725 }, 00:15:16.725 { 00:15:16.725 "method": "bdev_wait_for_examine" 00:15:16.725 } 00:15:16.725 ] 00:15:16.725 }, 00:15:16.725 { 00:15:16.725 "subsystem": "nbd", 00:15:16.725 "config": [] 00:15:16.725 }, 00:15:16.725 { 00:15:16.725 "subsystem": "scheduler", 00:15:16.725 "config": [ 00:15:16.725 { 00:15:16.725 "method": "framework_set_scheduler", 00:15:16.725 "params": { 00:15:16.725 "name": "static" 00:15:16.725 } 00:15:16.725 } 00:15:16.725 ] 00:15:16.725 }, 00:15:16.725 { 00:15:16.725 "subsystem": "nvmf", 00:15:16.725 "config": [ 00:15:16.725 { 00:15:16.725 "method": "nvmf_set_config", 00:15:16.725 "params": { 00:15:16.725 "discovery_filter": "match_any", 00:15:16.725 "admin_cmd_passthru": { 00:15:16.725 "identify_ctrlr": false 00:15:16.725 } 00:15:16.725 } 00:15:16.725 }, 00:15:16.725 { 00:15:16.725 "method": "nvmf_set_max_subsystems", 00:15:16.725 "params": { 00:15:16.725 "max_subsystems": 1024 00:15:16.725 } 00:15:16.725 }, 00:15:16.725 { 00:15:16.725 "method": "nvmf_set_crdt", 00:15:16.725 "params": { 00:15:16.725 "crdt1": 0, 00:15:16.725 "crdt2": 0, 00:15:16.725 "crdt3": 0 00:15:16.725 } 00:15:16.725 }, 00:15:16.725 { 00:15:16.725 "method": "nvmf_create_transport", 00:15:16.725 "params": { 00:15:16.725 "trtype": "TCP", 00:15:16.725 "max_queue_depth": 128, 00:15:16.725 "max_io_qpairs_per_ctrlr": 127, 00:15:16.725 "in_capsule_data_size": 4096, 00:15:16.725 "max_io_size": 131072, 00:15:16.725 "io_unit_size": 131072, 00:15:16.725 "max_aq_depth": 128, 00:15:16.725 "num_shared_buffers": 511, 00:15:16.725 "buf_cache_size": 4294967295, 00:15:16.725 "dif_insert_or_strip": false, 00:15:16.725 "zcopy": false, 00:15:16.725 "c2h_success": false, 00:15:16.725 "sock_priority": 0, 00:15:16.725 "abort_timeout_sec": 1, 00:15:16.725 "ack_timeout": 0 00:15:16.725 } 00:15:16.725 }, 00:15:16.725 { 00:15:16.725 "method": "nvmf_create_subsystem", 00:15:16.725 "params": { 00:15:16.725 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:16.725 "allow_any_host": false, 00:15:16.725 "serial_number": "SPDK00000000000001", 00:15:16.725 "model_number": "SPDK bdev Controller", 00:15:16.725 "max_namespaces": 10, 00:15:16.725 "min_cntlid": 1, 00:15:16.725 "max_cntlid": 65519, 00:15:16.725 "ana_reporting": false 00:15:16.725 } 00:15:16.725 }, 00:15:16.725 { 00:15:16.725 "method": "nvmf_subsystem_add_host", 00:15:16.725 "params": { 00:15:16.725 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:16.725 "host": "nqn.2016-06.io.spdk:host1", 00:15:16.725 "psk": 
"/tmp/tmp.jAm0VxVmIz" 00:15:16.725 } 00:15:16.725 }, 00:15:16.725 { 00:15:16.725 "method": "nvmf_subsystem_add_ns", 00:15:16.725 "params": { 00:15:16.725 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:16.725 "namespace": { 00:15:16.725 "nsid": 1, 00:15:16.725 "bdev_name": "malloc0", 00:15:16.725 "nguid": "409C4A6AEEBD45EF813F9B341C2A207D", 00:15:16.725 "uuid": "409c4a6a-eebd-45ef-813f-9b341c2a207d", 00:15:16.725 "no_auto_visible": false 00:15:16.725 } 00:15:16.725 } 00:15:16.725 }, 00:15:16.725 { 00:15:16.725 "method": "nvmf_subsystem_add_listener", 00:15:16.725 "params": { 00:15:16.725 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:16.725 "listen_address": { 00:15:16.725 "trtype": "TCP", 00:15:16.725 "adrfam": "IPv4", 00:15:16.725 "traddr": "10.0.0.2", 00:15:16.725 "trsvcid": "4420" 00:15:16.725 }, 00:15:16.725 "secure_channel": true 00:15:16.725 } 00:15:16.725 } 00:15:16.725 ] 00:15:16.725 } 00:15:16.725 ] 00:15:16.725 }' 00:15:16.725 12:43:15 -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:15:16.984 12:43:15 -- target/tls.sh@197 -- # bdevperfconf='{ 00:15:16.984 "subsystems": [ 00:15:16.984 { 00:15:16.984 "subsystem": "keyring", 00:15:16.984 "config": [] 00:15:16.984 }, 00:15:16.984 { 00:15:16.984 "subsystem": "iobuf", 00:15:16.984 "config": [ 00:15:16.984 { 00:15:16.984 "method": "iobuf_set_options", 00:15:16.984 "params": { 00:15:16.984 "small_pool_count": 8192, 00:15:16.984 "large_pool_count": 1024, 00:15:16.984 "small_bufsize": 8192, 00:15:16.984 "large_bufsize": 135168 00:15:16.984 } 00:15:16.984 } 00:15:16.984 ] 00:15:16.984 }, 00:15:16.984 { 00:15:16.984 "subsystem": "sock", 00:15:16.984 "config": [ 00:15:16.984 { 00:15:16.984 "method": "sock_impl_set_options", 00:15:16.984 "params": { 00:15:16.984 "impl_name": "posix", 00:15:16.984 "recv_buf_size": 2097152, 00:15:16.984 "send_buf_size": 2097152, 00:15:16.984 "enable_recv_pipe": true, 00:15:16.984 "enable_quickack": false, 00:15:16.984 "enable_placement_id": 0, 00:15:16.984 "enable_zerocopy_send_server": true, 00:15:16.984 "enable_zerocopy_send_client": false, 00:15:16.984 "zerocopy_threshold": 0, 00:15:16.984 "tls_version": 0, 00:15:16.984 "enable_ktls": false 00:15:16.984 } 00:15:16.984 }, 00:15:16.984 { 00:15:16.984 "method": "sock_impl_set_options", 00:15:16.984 "params": { 00:15:16.984 "impl_name": "ssl", 00:15:16.984 "recv_buf_size": 4096, 00:15:16.984 "send_buf_size": 4096, 00:15:16.984 "enable_recv_pipe": true, 00:15:16.984 "enable_quickack": false, 00:15:16.984 "enable_placement_id": 0, 00:15:16.984 "enable_zerocopy_send_server": true, 00:15:16.984 "enable_zerocopy_send_client": false, 00:15:16.984 "zerocopy_threshold": 0, 00:15:16.984 "tls_version": 0, 00:15:16.984 "enable_ktls": false 00:15:16.984 } 00:15:16.984 } 00:15:16.984 ] 00:15:16.984 }, 00:15:16.984 { 00:15:16.984 "subsystem": "vmd", 00:15:16.984 "config": [] 00:15:16.984 }, 00:15:16.984 { 00:15:16.984 "subsystem": "accel", 00:15:16.984 "config": [ 00:15:16.984 { 00:15:16.984 "method": "accel_set_options", 00:15:16.984 "params": { 00:15:16.984 "small_cache_size": 128, 00:15:16.984 "large_cache_size": 16, 00:15:16.984 "task_count": 2048, 00:15:16.984 "sequence_count": 2048, 00:15:16.984 "buf_count": 2048 00:15:16.984 } 00:15:16.984 } 00:15:16.984 ] 00:15:16.984 }, 00:15:16.984 { 00:15:16.984 "subsystem": "bdev", 00:15:16.984 "config": [ 00:15:16.984 { 00:15:16.984 "method": "bdev_set_options", 00:15:16.984 "params": { 00:15:16.984 "bdev_io_pool_size": 65535, 00:15:16.984 
"bdev_io_cache_size": 256, 00:15:16.984 "bdev_auto_examine": true, 00:15:16.984 "iobuf_small_cache_size": 128, 00:15:16.984 "iobuf_large_cache_size": 16 00:15:16.984 } 00:15:16.984 }, 00:15:16.984 { 00:15:16.984 "method": "bdev_raid_set_options", 00:15:16.984 "params": { 00:15:16.984 "process_window_size_kb": 1024 00:15:16.984 } 00:15:16.984 }, 00:15:16.984 { 00:15:16.984 "method": "bdev_iscsi_set_options", 00:15:16.984 "params": { 00:15:16.984 "timeout_sec": 30 00:15:16.984 } 00:15:16.984 }, 00:15:16.984 { 00:15:16.984 "method": "bdev_nvme_set_options", 00:15:16.984 "params": { 00:15:16.984 "action_on_timeout": "none", 00:15:16.984 "timeout_us": 0, 00:15:16.984 "timeout_admin_us": 0, 00:15:16.984 "keep_alive_timeout_ms": 10000, 00:15:16.984 "arbitration_burst": 0, 00:15:16.984 "low_priority_weight": 0, 00:15:16.984 "medium_priority_weight": 0, 00:15:16.984 "high_priority_weight": 0, 00:15:16.984 "nvme_adminq_poll_period_us": 10000, 00:15:16.984 "nvme_ioq_poll_period_us": 0, 00:15:16.984 "io_queue_requests": 512, 00:15:16.984 "delay_cmd_submit": true, 00:15:16.984 "transport_retry_count": 4, 00:15:16.984 "bdev_retry_count": 3, 00:15:16.984 "transport_ack_timeout": 0, 00:15:16.984 "ctrlr_loss_timeout_sec": 0, 00:15:16.984 "reconnect_delay_sec": 0, 00:15:16.984 "fast_io_fail_timeout_sec": 0, 00:15:16.984 "disable_auto_failback": false, 00:15:16.984 "generate_uuids": false, 00:15:16.984 "transport_tos": 0, 00:15:16.984 "nvme_error_stat": false, 00:15:16.984 "rdma_srq_size": 0, 00:15:16.984 "io_path_stat": false, 00:15:16.984 "allow_accel_sequence": false, 00:15:16.984 "rdma_max_cq_size": 0, 00:15:16.984 "rdma_cm_event_timeout_ms": 0, 00:15:16.984 "dhchap_digests": [ 00:15:16.984 "sha256", 00:15:16.984 "sha384", 00:15:16.984 "sha512" 00:15:16.984 ], 00:15:16.984 "dhchap_dhgroups": [ 00:15:16.984 "null", 00:15:16.984 "ffdhe2048", 00:15:16.984 "ffdhe3072", 00:15:16.984 "ffdhe4096", 00:15:16.984 "ffdhe6144", 00:15:16.984 "ffdhe8192" 00:15:16.984 ] 00:15:16.984 } 00:15:16.984 }, 00:15:16.984 { 00:15:16.984 "method": "bdev_nvme_attach_controller", 00:15:16.984 "params": { 00:15:16.984 "name": "TLSTEST", 00:15:16.984 "trtype": "TCP", 00:15:16.984 "adrfam": "IPv4", 00:15:16.984 "traddr": "10.0.0.2", 00:15:16.984 "trsvcid": "4420", 00:15:16.984 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:16.984 "prchk_reftag": false, 00:15:16.984 "prchk_guard": false, 00:15:16.985 "ctrlr_loss_timeout_sec": 0, 00:15:16.985 "reconnect_delay_sec": 0, 00:15:16.985 "fast_io_fail_timeout_sec": 0, 00:15:16.985 "psk": "/tmp/tmp.jAm0VxVmIz", 00:15:16.985 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:16.985 "hdgst": false, 00:15:16.985 "ddgst": false 00:15:16.985 } 00:15:16.985 }, 00:15:16.985 { 00:15:16.985 "method": "bdev_nvme_set_hotplug", 00:15:16.985 "params": { 00:15:16.985 "period_us": 100000, 00:15:16.985 "enable": false 00:15:16.985 } 00:15:16.985 }, 00:15:16.985 { 00:15:16.985 "method": "bdev_wait_for_examine" 00:15:16.985 } 00:15:16.985 ] 00:15:16.985 }, 00:15:16.985 { 00:15:16.985 "subsystem": "nbd", 00:15:16.985 "config": [] 00:15:16.985 } 00:15:16.985 ] 00:15:16.985 }' 00:15:16.985 12:43:15 -- target/tls.sh@199 -- # killprocess 1186289 00:15:16.985 12:43:15 -- common/autotest_common.sh@936 -- # '[' -z 1186289 ']' 00:15:16.985 12:43:15 -- common/autotest_common.sh@940 -- # kill -0 1186289 00:15:16.985 12:43:15 -- common/autotest_common.sh@941 -- # uname 00:15:16.985 12:43:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:16.985 12:43:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o 
comm= 1186289 00:15:16.985 12:43:15 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:15:16.985 12:43:15 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:15:16.985 12:43:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1186289' 00:15:16.985 killing process with pid 1186289 00:15:16.985 12:43:15 -- common/autotest_common.sh@955 -- # kill 1186289 00:15:16.985 Received shutdown signal, test time was about 10.000000 seconds 00:15:16.985 00:15:16.985 Latency(us) 00:15:16.985 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:16.985 =================================================================================================================== 00:15:16.985 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:16.985 [2024-04-16 12:43:15.873533] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:15:16.985 12:43:15 -- common/autotest_common.sh@960 -- # wait 1186289 00:15:17.257 12:43:16 -- target/tls.sh@200 -- # killprocess 1186003 00:15:17.257 12:43:16 -- common/autotest_common.sh@936 -- # '[' -z 1186003 ']' 00:15:17.257 12:43:16 -- common/autotest_common.sh@940 -- # kill -0 1186003 00:15:17.257 12:43:16 -- common/autotest_common.sh@941 -- # uname 00:15:17.257 12:43:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:17.257 12:43:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1186003 00:15:17.257 12:43:16 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:17.257 12:43:16 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:17.257 12:43:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1186003' 00:15:17.257 killing process with pid 1186003 00:15:17.257 12:43:16 -- common/autotest_common.sh@955 -- # kill 1186003 00:15:17.257 [2024-04-16 12:43:16.168936] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:15:17.257 12:43:16 -- common/autotest_common.sh@960 -- # wait 1186003 00:15:17.516 12:43:16 -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:15:17.516 12:43:16 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:17.516 12:43:16 -- target/tls.sh@203 -- # echo '{ 00:15:17.516 "subsystems": [ 00:15:17.516 { 00:15:17.516 "subsystem": "keyring", 00:15:17.516 "config": [] 00:15:17.516 }, 00:15:17.516 { 00:15:17.516 "subsystem": "iobuf", 00:15:17.516 "config": [ 00:15:17.516 { 00:15:17.516 "method": "iobuf_set_options", 00:15:17.516 "params": { 00:15:17.516 "small_pool_count": 8192, 00:15:17.516 "large_pool_count": 1024, 00:15:17.516 "small_bufsize": 8192, 00:15:17.516 "large_bufsize": 135168 00:15:17.516 } 00:15:17.516 } 00:15:17.516 ] 00:15:17.516 }, 00:15:17.516 { 00:15:17.516 "subsystem": "sock", 00:15:17.516 "config": [ 00:15:17.516 { 00:15:17.516 "method": "sock_impl_set_options", 00:15:17.516 "params": { 00:15:17.516 "impl_name": "posix", 00:15:17.516 "recv_buf_size": 2097152, 00:15:17.516 "send_buf_size": 2097152, 00:15:17.516 "enable_recv_pipe": true, 00:15:17.516 "enable_quickack": false, 00:15:17.516 "enable_placement_id": 0, 00:15:17.516 "enable_zerocopy_send_server": true, 00:15:17.516 "enable_zerocopy_send_client": false, 00:15:17.516 "zerocopy_threshold": 0, 00:15:17.516 "tls_version": 0, 00:15:17.516 "enable_ktls": false 00:15:17.516 } 00:15:17.516 }, 00:15:17.516 { 00:15:17.516 "method": 
"sock_impl_set_options", 00:15:17.516 "params": { 00:15:17.516 "impl_name": "ssl", 00:15:17.516 "recv_buf_size": 4096, 00:15:17.516 "send_buf_size": 4096, 00:15:17.516 "enable_recv_pipe": true, 00:15:17.516 "enable_quickack": false, 00:15:17.516 "enable_placement_id": 0, 00:15:17.516 "enable_zerocopy_send_server": true, 00:15:17.516 "enable_zerocopy_send_client": false, 00:15:17.516 "zerocopy_threshold": 0, 00:15:17.516 "tls_version": 0, 00:15:17.516 "enable_ktls": false 00:15:17.516 } 00:15:17.516 } 00:15:17.516 ] 00:15:17.516 }, 00:15:17.516 { 00:15:17.516 "subsystem": "vmd", 00:15:17.516 "config": [] 00:15:17.516 }, 00:15:17.516 { 00:15:17.516 "subsystem": "accel", 00:15:17.516 "config": [ 00:15:17.516 { 00:15:17.516 "method": "accel_set_options", 00:15:17.516 "params": { 00:15:17.516 "small_cache_size": 128, 00:15:17.516 "large_cache_size": 16, 00:15:17.516 "task_count": 2048, 00:15:17.516 "sequence_count": 2048, 00:15:17.516 "buf_count": 2048 00:15:17.516 } 00:15:17.516 } 00:15:17.516 ] 00:15:17.516 }, 00:15:17.516 { 00:15:17.516 "subsystem": "bdev", 00:15:17.516 "config": [ 00:15:17.516 { 00:15:17.516 "method": "bdev_set_options", 00:15:17.516 "params": { 00:15:17.516 "bdev_io_pool_size": 65535, 00:15:17.516 "bdev_io_cache_size": 256, 00:15:17.516 "bdev_auto_examine": true, 00:15:17.516 "iobuf_small_cache_size": 128, 00:15:17.516 "iobuf_large_cache_size": 16 00:15:17.516 } 00:15:17.516 }, 00:15:17.516 { 00:15:17.516 "method": "bdev_raid_set_options", 00:15:17.516 "params": { 00:15:17.516 "process_window_size_kb": 1024 00:15:17.516 } 00:15:17.516 }, 00:15:17.516 { 00:15:17.516 "method": "bdev_iscsi_set_options", 00:15:17.516 "params": { 00:15:17.516 "timeout_sec": 30 00:15:17.516 } 00:15:17.516 }, 00:15:17.516 { 00:15:17.516 "method": "bdev_nvme_set_options", 00:15:17.516 "params": { 00:15:17.516 "action_on_timeout": "none", 00:15:17.516 "timeout_us": 0, 00:15:17.516 "timeout_admin_us": 0, 00:15:17.516 "keep_alive_timeout_ms": 10000, 00:15:17.516 "arbitration_burst": 0, 00:15:17.516 "low_priority_weight": 0, 00:15:17.516 "medium_priority_weight": 0, 00:15:17.516 "high_priority_weight": 0, 00:15:17.516 "nvme_adminq_poll_period_us": 10000, 00:15:17.516 "nvme_ioq_poll_period_us": 0, 00:15:17.516 "io_queue_requests": 0, 00:15:17.516 "delay_cmd_submit": true, 00:15:17.516 "transport_retry_count": 4, 00:15:17.516 "bdev_retry_count": 3, 00:15:17.516 "transport_ack_timeout": 0, 00:15:17.516 "ctrlr_loss_timeout_sec": 0, 00:15:17.516 "reconnect_delay_sec": 0, 00:15:17.516 "fast_io_fail_timeout_sec": 0, 00:15:17.517 "disable_auto_failback": false, 00:15:17.517 "generate_uuids": false, 00:15:17.517 "transport_tos": 0, 00:15:17.517 "nvme_error_stat": false, 00:15:17.517 "rdma_srq_size": 0, 00:15:17.517 "io_path_stat": false, 00:15:17.517 "allow_accel_sequence": false, 00:15:17.517 "rdma_max_cq_size": 0, 00:15:17.517 "rdma_cm_event_timeout_ms": 0, 00:15:17.517 "dhchap_digests": [ 00:15:17.517 "sha256", 00:15:17.517 "sha384", 00:15:17.517 "sha512" 00:15:17.517 ], 00:15:17.517 "dhchap_dhgroups": [ 00:15:17.517 "null", 00:15:17.517 "ffdhe2048", 00:15:17.517 "ffdhe3072", 00:15:17.517 "ffdhe4096", 00:15:17.517 "ffdhe6144", 00:15:17.517 "ffdhe8192" 00:15:17.517 ] 00:15:17.517 } 00:15:17.517 }, 00:15:17.517 { 00:15:17.517 "method": "bdev_nvme_set_hotplug", 00:15:17.517 "params": { 00:15:17.517 "period_us": 100000, 00:15:17.517 "enable": false 00:15:17.517 } 00:15:17.517 }, 00:15:17.517 { 00:15:17.517 "method": "bdev_malloc_create", 00:15:17.517 "params": { 00:15:17.517 "name": "malloc0", 00:15:17.517 
"num_blocks": 8192, 00:15:17.517 "block_size": 4096, 00:15:17.517 "physical_block_size": 4096, 00:15:17.517 "uuid": "409c4a6a-eebd-45ef-813f-9b341c2a207d", 00:15:17.517 "optimal_io_boundary": 0 00:15:17.517 } 00:15:17.517 }, 00:15:17.517 { 00:15:17.517 "method": "bdev_wait_for_examine" 00:15:17.517 } 00:15:17.517 ] 00:15:17.517 }, 00:15:17.517 { 00:15:17.517 "subsystem": "nbd", 00:15:17.517 "config": [] 00:15:17.517 }, 00:15:17.517 { 00:15:17.517 "subsystem": "scheduler", 00:15:17.517 "config": [ 00:15:17.517 { 00:15:17.517 "method": "framework_set_scheduler", 00:15:17.517 "params": { 00:15:17.517 "name": "static" 00:15:17.517 } 00:15:17.517 } 00:15:17.517 ] 00:15:17.517 }, 00:15:17.517 { 00:15:17.517 "subsystem": "nvmf", 00:15:17.517 "config": [ 00:15:17.517 { 00:15:17.517 "method": "nvmf_set_config", 00:15:17.517 "params": { 00:15:17.517 "discovery_filter": "match_any", 00:15:17.517 "admin_cmd_passthru": { 00:15:17.517 "identify_ctrlr": false 00:15:17.517 } 00:15:17.517 } 00:15:17.517 }, 00:15:17.517 { 00:15:17.517 "method": "nvmf_set_max_subsystems", 00:15:17.517 "params": { 00:15:17.517 "max_subsystems": 1024 00:15:17.517 } 00:15:17.517 }, 00:15:17.517 { 00:15:17.517 "method": "nvmf_set_crdt", 00:15:17.517 "params": { 00:15:17.517 "crdt1": 0, 00:15:17.517 "crdt2": 0, 00:15:17.517 "crdt3": 0 00:15:17.517 } 00:15:17.517 }, 00:15:17.517 { 00:15:17.517 "method": "nvmf_create_transport", 00:15:17.517 "params": { 00:15:17.517 "trtype": "TCP", 00:15:17.517 "max_queue_depth": 128, 00:15:17.517 "max_io_qpairs_per_ctrlr": 127, 00:15:17.517 "in_capsule_data_size": 4096, 00:15:17.517 "max_io_size": 131072, 00:15:17.517 "io_unit_size": 131072, 00:15:17.517 "max_aq_depth": 128, 00:15:17.517 "num_shared_buffers": 511, 00:15:17.517 "buf_cache_size": 4294967295, 00:15:17.517 "dif_insert_or_strip": false, 00:15:17.517 "zcopy": false, 00:15:17.517 "c2h_success": false, 00:15:17.517 "sock_priority": 0, 00:15:17.517 "abort_timeout_sec": 1, 00:15:17.517 "ack_timeout": 0 00:15:17.517 } 00:15:17.517 }, 00:15:17.517 { 00:15:17.517 "method": "nvmf_create_subsystem", 00:15:17.517 "params": { 00:15:17.517 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:17.517 "allow_any_host": false, 00:15:17.517 "serial_number": "SPDK00000000000001", 00:15:17.517 "model_number": "SPDK bdev Controller", 00:15:17.517 "max_namespaces": 10, 00:15:17.517 "min_cntlid": 1, 00:15:17.517 "max_cntlid": 65519, 00:15:17.517 "ana_reporting": false 00:15:17.517 } 00:15:17.517 }, 00:15:17.517 { 00:15:17.517 "method": "nvmf_subsystem_add_host", 00:15:17.517 "params": { 00:15:17.517 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:17.517 "host": "nqn.2016-06.io.spdk:host1", 00:15:17.517 "psk": "/tmp/tmp.jAm0VxVmIz" 00:15:17.517 } 00:15:17.517 }, 00:15:17.517 { 00:15:17.517 "method": "nvmf_subsystem_add_ns", 00:15:17.517 "params": { 00:15:17.517 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:17.517 "namespace": { 00:15:17.517 "nsid": 1, 00:15:17.517 "bdev_name": "malloc0", 00:15:17.517 "nguid": "409C4A6AEEBD45EF813F9B341C2A207D", 00:15:17.517 "uuid": "409c4a6a-eebd-45ef-813f-9b341c2a207d", 00:15:17.517 "no_auto_visible": false 00:15:17.517 } 00:15:17.517 } 00:15:17.517 }, 00:15:17.517 { 00:15:17.517 "method": "nvmf_subsystem_add_listener", 00:15:17.517 "params": { 00:15:17.517 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:17.517 "listen_address": { 00:15:17.517 "trtype": "TCP", 00:15:17.517 "adrfam": "IPv4", 00:15:17.517 "traddr": "10.0.0.2", 00:15:17.517 "trsvcid": "4420" 00:15:17.517 }, 00:15:17.517 "secure_channel": true 00:15:17.517 } 00:15:17.517 } 
00:15:17.517 ] 00:15:17.517 } 00:15:17.517 ] 00:15:17.517 }' 00:15:17.517 12:43:16 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:17.517 12:43:16 -- common/autotest_common.sh@10 -- # set +x 00:15:17.517 12:43:16 -- nvmf/common.sh@470 -- # nvmfpid=1186457 00:15:17.517 12:43:16 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:15:17.517 12:43:16 -- nvmf/common.sh@471 -- # waitforlisten 1186457 00:15:17.517 12:43:16 -- common/autotest_common.sh@817 -- # '[' -z 1186457 ']' 00:15:17.517 12:43:16 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:17.517 12:43:16 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:17.517 12:43:16 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:17.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:17.517 12:43:16 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:17.517 12:43:16 -- common/autotest_common.sh@10 -- # set +x 00:15:17.517 [2024-04-16 12:43:16.505973] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 00:15:17.517 [2024-04-16 12:43:16.506068] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:17.517 EAL: No free 2048 kB hugepages reported on node 1 00:15:17.517 [2024-04-16 12:43:16.582864] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:17.777 [2024-04-16 12:43:16.688145] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:17.777 [2024-04-16 12:43:16.688196] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:17.777 [2024-04-16 12:43:16.688221] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:17.777 [2024-04-16 12:43:16.688234] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:17.777 [2024-04-16 12:43:16.688244] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
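The target above comes up preconfigured: tls.sh composes the entire JSON document shown and hands it to nvmf_tgt through process substitution, which is why the config argument is the ephemeral /dev/fd/62. A minimal sketch of the same pattern, abridged to the TLS-relevant call (a complete config also needs the transport, subsystem, namespace, and listener entries visible in the dump above; the key path matches this run's /tmp/tmp.jAm0VxVmIz):

# Start nvmf_tgt preloaded from an inline JSON config via process substitution.
# Abridged sketch; PSK_FILE stands in for the tempfile created earlier in the test.
PSK_FILE=/tmp/tmp.jAm0VxVmIz
./build/bin/nvmf_tgt -m 0x2 -c <(cat <<EOF
{
  "subsystems": [
    { "subsystem": "nvmf",
      "config": [
        { "method": "nvmf_subsystem_add_host",
          "params": { "nqn": "nqn.2016-06.io.spdk:cnode1",
                      "host": "nqn.2016-06.io.spdk:host1",
                      "psk": "$PSK_FILE" } }
      ] }
  ]
}
EOF
)

Passing a raw key path in "psk" is the older interface this test still exercises; it is what triggers the nvmf_tcp_psk_path deprecation warning logged just below.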
00:15:17.777 [2024-04-16 12:43:16.688327] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:18.035 [2024-04-16 12:43:16.920992] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:18.035 [2024-04-16 12:43:16.936955] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:15:18.035 [2024-04-16 12:43:16.953012] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:18.035 [2024-04-16 12:43:16.963805] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:18.601 12:43:17 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:18.601 12:43:17 -- common/autotest_common.sh@850 -- # return 0 00:15:18.601 12:43:17 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:18.601 12:43:17 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:18.601 12:43:17 -- common/autotest_common.sh@10 -- # set +x 00:15:18.601 12:43:17 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:18.601 12:43:17 -- target/tls.sh@207 -- # bdevperf_pid=1186604 00:15:18.601 12:43:17 -- target/tls.sh@208 -- # waitforlisten 1186604 /var/tmp/bdevperf.sock 00:15:18.601 12:43:17 -- common/autotest_common.sh@817 -- # '[' -z 1186604 ']' 00:15:18.601 12:43:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:18.601 12:43:17 -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:15:18.601 12:43:17 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:18.601 12:43:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:18.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:15:18.601 12:43:17 -- target/tls.sh@204 -- # echo '{ 00:15:18.601 "subsystems": [ 00:15:18.601 { 00:15:18.601 "subsystem": "keyring", 00:15:18.601 "config": [] 00:15:18.601 }, 00:15:18.601 { 00:15:18.601 "subsystem": "iobuf", 00:15:18.601 "config": [ 00:15:18.601 { 00:15:18.601 "method": "iobuf_set_options", 00:15:18.601 "params": { 00:15:18.601 "small_pool_count": 8192, 00:15:18.601 "large_pool_count": 1024, 00:15:18.601 "small_bufsize": 8192, 00:15:18.601 "large_bufsize": 135168 00:15:18.601 } 00:15:18.601 } 00:15:18.601 ] 00:15:18.601 }, 00:15:18.601 { 00:15:18.601 "subsystem": "sock", 00:15:18.601 "config": [ 00:15:18.601 { 00:15:18.601 "method": "sock_impl_set_options", 00:15:18.601 "params": { 00:15:18.601 "impl_name": "posix", 00:15:18.601 "recv_buf_size": 2097152, 00:15:18.601 "send_buf_size": 2097152, 00:15:18.601 "enable_recv_pipe": true, 00:15:18.601 "enable_quickack": false, 00:15:18.601 "enable_placement_id": 0, 00:15:18.601 "enable_zerocopy_send_server": true, 00:15:18.601 "enable_zerocopy_send_client": false, 00:15:18.601 "zerocopy_threshold": 0, 00:15:18.601 "tls_version": 0, 00:15:18.601 "enable_ktls": false 00:15:18.601 } 00:15:18.601 }, 00:15:18.601 { 00:15:18.601 "method": "sock_impl_set_options", 00:15:18.601 "params": { 00:15:18.601 "impl_name": "ssl", 00:15:18.601 "recv_buf_size": 4096, 00:15:18.601 "send_buf_size": 4096, 00:15:18.601 "enable_recv_pipe": true, 00:15:18.601 "enable_quickack": false, 00:15:18.601 "enable_placement_id": 0, 00:15:18.601 "enable_zerocopy_send_server": true, 00:15:18.601 "enable_zerocopy_send_client": false, 00:15:18.601 "zerocopy_threshold": 0, 00:15:18.601 "tls_version": 0, 00:15:18.601 "enable_ktls": false 00:15:18.601 } 00:15:18.601 } 00:15:18.601 ] 00:15:18.601 }, 00:15:18.601 { 00:15:18.601 "subsystem": "vmd", 00:15:18.601 "config": [] 00:15:18.601 }, 00:15:18.601 { 00:15:18.601 "subsystem": "accel", 00:15:18.601 "config": [ 00:15:18.602 { 00:15:18.602 "method": "accel_set_options", 00:15:18.602 "params": { 00:15:18.602 "small_cache_size": 128, 00:15:18.602 "large_cache_size": 16, 00:15:18.602 "task_count": 2048, 00:15:18.602 "sequence_count": 2048, 00:15:18.602 "buf_count": 2048 00:15:18.602 } 00:15:18.602 } 00:15:18.602 ] 00:15:18.602 }, 00:15:18.602 { 00:15:18.602 "subsystem": "bdev", 00:15:18.602 "config": [ 00:15:18.602 { 00:15:18.602 "method": "bdev_set_options", 00:15:18.602 "params": { 00:15:18.602 "bdev_io_pool_size": 65535, 00:15:18.602 "bdev_io_cache_size": 256, 00:15:18.602 "bdev_auto_examine": true, 00:15:18.602 "iobuf_small_cache_size": 128, 00:15:18.602 "iobuf_large_cache_size": 16 00:15:18.602 } 00:15:18.602 }, 00:15:18.602 { 00:15:18.602 "method": "bdev_raid_set_options", 00:15:18.602 "params": { 00:15:18.602 "process_window_size_kb": 1024 00:15:18.602 } 00:15:18.602 }, 00:15:18.602 { 00:15:18.602 "method": "bdev_iscsi_set_options", 00:15:18.602 "params": { 00:15:18.602 "timeout_sec": 30 00:15:18.602 } 00:15:18.602 }, 00:15:18.602 { 00:15:18.602 "method": "bdev_nvme_set_options", 00:15:18.602 "params": { 00:15:18.602 "action_on_timeout": "none", 00:15:18.602 "timeout_us": 0, 00:15:18.602 "timeout_admin_us": 0, 00:15:18.602 "keep_alive_timeout_ms": 10000, 00:15:18.602 "arbitration_burst": 0, 00:15:18.602 "low_priority_weight": 0, 00:15:18.602 "medium_priority_weight": 0, 00:15:18.602 "high_priority_weight": 0, 00:15:18.602 "nvme_adminq_poll_period_us": 10000, 00:15:18.602 "nvme_ioq_poll_period_us": 0, 00:15:18.602 "io_queue_requests": 512, 00:15:18.602 "delay_cmd_submit": true, 00:15:18.602 "transport_retry_count": 
4, 00:15:18.602 "bdev_retry_count": 3, 00:15:18.602 "transport_ack_timeout": 0, 00:15:18.602 "ctrlr_loss_timeout_sec": 0, 00:15:18.602 "reconnect_delay_sec": 0, 00:15:18.602 "fast_io_fail_timeout_sec": 0, 00:15:18.602 "disable_auto_failback": false, 00:15:18.602 "generate_uuids": false, 00:15:18.602 "transport_tos": 0, 00:15:18.602 "nvme_error_stat": false, 00:15:18.602 "rdma_srq_size": 0, 00:15:18.602 "io_path_stat": false, 00:15:18.602 "allow_accel_sequence": false, 00:15:18.602 "rdma_max_cq_size": 0, 00:15:18.602 "rdma_cm_event_timeout_ms": 0, 00:15:18.602 "dhchap_digests": [ 00:15:18.602 "sha256", 00:15:18.602 "sha384", 00:15:18.602 "sha512" 00:15:18.602 ], 00:15:18.602 "dhchap_dhgroups": [ 00:15:18.602 "null", 00:15:18.602 "ffdhe2048", 00:15:18.602 "ffdhe3072", 00:15:18.602 "ffdhe4096", 00:15:18.602 "ffdhe6144", 00:15:18.602 "ffdhe8192" 00:15:18.602 ] 00:15:18.602 } 00:15:18.602 }, 00:15:18.602 { 00:15:18.602 "method": "bdev_nvme_attach_controller", 00:15:18.602 "params": { 00:15:18.602 "name": "TLSTEST", 00:15:18.602 "trtype": "TCP", 00:15:18.602 "adrfam": "IPv4", 00:15:18.602 "traddr": "10.0.0.2", 00:15:18.602 "trsvcid": "4420", 00:15:18.602 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:18.602 "prchk_reftag": false, 00:15:18.602 "prchk_guard": false, 00:15:18.602 "ctrlr_loss_timeout_sec": 0, 00:15:18.602 "reconnect_delay_sec": 0, 00:15:18.602 "fast_io_fail_timeout_sec": 0, 00:15:18.602 "psk": "/tmp/tmp.jAm0VxVmIz", 00:15:18.602 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:18.602 "hdgst": false, 00:15:18.602 "ddgst": false 00:15:18.602 } 00:15:18.602 }, 00:15:18.602 { 00:15:18.602 "method": "bdev_nvme_set_hotplug", 00:15:18.602 "params": { 00:15:18.602 "period_us": 100000, 00:15:18.602 "enable": false 00:15:18.602 } 00:15:18.602 }, 00:15:18.602 { 00:15:18.602 "method": "bdev_wait_for_examine" 00:15:18.602 } 00:15:18.602 ] 00:15:18.602 }, 00:15:18.602 { 00:15:18.602 "subsystem": "nbd", 00:15:18.602 "config": [] 00:15:18.602 } 00:15:18.602 ] 00:15:18.602 }' 00:15:18.602 12:43:17 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:18.602 12:43:17 -- common/autotest_common.sh@10 -- # set +x 00:15:18.602 [2024-04-16 12:43:17.525806] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 00:15:18.602 [2024-04-16 12:43:17.525902] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1186604 ] 00:15:18.602 EAL: No free 2048 kB hugepages reported on node 1 00:15:18.602 [2024-04-16 12:43:17.594225] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:18.862 [2024-04-16 12:43:17.703462] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:18.862 [2024-04-16 12:43:17.868718] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:18.862 [2024-04-16 12:43:17.868884] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:15:19.804 12:43:18 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:19.804 12:43:18 -- common/autotest_common.sh@850 -- # return 0 00:15:19.804 12:43:18 -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:15:19.804 Running I/O for 10 seconds... 
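The initiator side mirrors this: bdevperf starts idle (-z) with its own JSON config, whose bdev_nvme_attach_controller entry carries the same PSK path, and bdevperf.py then kicks off the verify workload over the RPC socket. In outline, with BPERF_CFG standing in for the JSON document echoed above:

# bdevperf waits idle on its RPC socket until perform_tests arrives.
./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
    -q 128 -o 4096 -w verify -t 10 -c <(echo "$BPERF_CFG")
./examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests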
00:15:29.773 00:15:29.773 Latency(us) 00:15:29.773 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:29.773 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:29.773 Verification LBA range: start 0x0 length 0x2000 00:15:29.773 TLSTESTn1 : 10.04 3270.39 12.77 0.00 0.00 39049.31 5582.70 80002.47 00:15:29.773 =================================================================================================================== 00:15:29.773 Total : 3270.39 12.77 0.00 0.00 39049.31 5582.70 80002.47 00:15:29.773 0 00:15:29.773 12:43:28 -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:29.773 12:43:28 -- target/tls.sh@214 -- # killprocess 1186604 00:15:29.773 12:43:28 -- common/autotest_common.sh@936 -- # '[' -z 1186604 ']' 00:15:29.773 12:43:28 -- common/autotest_common.sh@940 -- # kill -0 1186604 00:15:29.773 12:43:28 -- common/autotest_common.sh@941 -- # uname 00:15:29.773 12:43:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:29.773 12:43:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1186604 00:15:29.773 12:43:28 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:15:29.773 12:43:28 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:15:29.773 12:43:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1186604' 00:15:29.773 killing process with pid 1186604 00:15:29.773 12:43:28 -- common/autotest_common.sh@955 -- # kill 1186604 00:15:29.773 Received shutdown signal, test time was about 10.000000 seconds 00:15:29.773 00:15:29.773 Latency(us) 00:15:29.773 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:29.773 =================================================================================================================== 00:15:29.773 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:29.773 [2024-04-16 12:43:28.750051] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:15:29.773 12:43:28 -- common/autotest_common.sh@960 -- # wait 1186604 00:15:30.030 12:43:29 -- target/tls.sh@215 -- # killprocess 1186457 00:15:30.030 12:43:29 -- common/autotest_common.sh@936 -- # '[' -z 1186457 ']' 00:15:30.030 12:43:29 -- common/autotest_common.sh@940 -- # kill -0 1186457 00:15:30.030 12:43:29 -- common/autotest_common.sh@941 -- # uname 00:15:30.030 12:43:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:30.030 12:43:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1186457 00:15:30.030 12:43:29 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:30.030 12:43:29 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:30.030 12:43:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1186457' 00:15:30.030 killing process with pid 1186457 00:15:30.030 12:43:29 -- common/autotest_common.sh@955 -- # kill 1186457 00:15:30.031 [2024-04-16 12:43:29.044504] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:15:30.031 12:43:29 -- common/autotest_common.sh@960 -- # wait 1186457 00:15:30.288 12:43:29 -- target/tls.sh@218 -- # nvmfappstart 00:15:30.288 12:43:29 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:30.288 12:43:29 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:30.288 12:43:29 -- common/autotest_common.sh@10 -- # set +x 00:15:30.288 12:43:29 
-- nvmf/common.sh@470 -- # nvmfpid=1188058 00:15:30.288 12:43:29 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:15:30.288 12:43:29 -- nvmf/common.sh@471 -- # waitforlisten 1188058 00:15:30.288 12:43:29 -- common/autotest_common.sh@817 -- # '[' -z 1188058 ']' 00:15:30.288 12:43:29 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:30.288 12:43:29 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:30.288 12:43:29 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:30.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:30.288 12:43:29 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:30.288 12:43:29 -- common/autotest_common.sh@10 -- # set +x 00:15:30.547 [2024-04-16 12:43:29.391466] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 00:15:30.547 [2024-04-16 12:43:29.391543] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:30.547 EAL: No free 2048 kB hugepages reported on node 1 00:15:30.547 [2024-04-16 12:43:29.469177] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:30.547 [2024-04-16 12:43:29.582141] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:30.547 [2024-04-16 12:43:29.582214] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:30.547 [2024-04-16 12:43:29.582230] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:30.547 [2024-04-16 12:43:29.582244] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:30.547 [2024-04-16 12:43:29.582255] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:30.547 [2024-04-16 12:43:29.582289] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:31.481 12:43:30 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:31.481 12:43:30 -- common/autotest_common.sh@850 -- # return 0 00:15:31.481 12:43:30 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:31.481 12:43:30 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:31.481 12:43:30 -- common/autotest_common.sh@10 -- # set +x 00:15:31.481 12:43:30 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:31.481 12:43:30 -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.jAm0VxVmIz 00:15:31.481 12:43:30 -- target/tls.sh@49 -- # local key=/tmp/tmp.jAm0VxVmIz 00:15:31.481 12:43:30 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:31.739 [2024-04-16 12:43:30.578908] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:31.739 12:43:30 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:31.996 12:43:30 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:15:32.254 [2024-04-16 12:43:31.068284] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:32.254 [2024-04-16 12:43:31.068529] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:32.254 12:43:31 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:32.254 malloc0 00:15:32.512 12:43:31 -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:32.512 12:43:31 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.jAm0VxVmIz 00:15:32.770 [2024-04-16 12:43:31.789683] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:15:32.770 12:43:31 -- target/tls.sh@222 -- # bdevperf_pid=1188345 00:15:32.770 12:43:31 -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:15:32.770 12:43:31 -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:32.770 12:43:31 -- target/tls.sh@225 -- # waitforlisten 1188345 /var/tmp/bdevperf.sock 00:15:32.770 12:43:31 -- common/autotest_common.sh@817 -- # '[' -z 1188345 ']' 00:15:32.770 12:43:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:32.770 12:43:31 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:32.770 12:43:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:32.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
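Stripped of the xtrace noise, the setup_nvmf_tgt helper above reduces to six RPCs (copied from the trace, with scripts/rpc.py abbreviated to rpc.py); -k on the listener enables TLS, and --psk here still takes the key file directly, which is what fires the PSK-path deprecation warning:

rpc.py nvmf_create_transport -t tcp -o
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
rpc.py bdev_malloc_create 32 4096 -b malloc0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.jAm0VxVmIz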
00:15:32.770 12:43:31 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:32.770 12:43:31 -- common/autotest_common.sh@10 -- # set +x 00:15:33.028 [2024-04-16 12:43:31.851821] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 00:15:33.028 [2024-04-16 12:43:31.851906] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1188345 ] 00:15:33.028 EAL: No free 2048 kB hugepages reported on node 1 00:15:33.028 [2024-04-16 12:43:31.924268] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:33.028 [2024-04-16 12:43:32.037406] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:33.962 12:43:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:33.962 12:43:32 -- common/autotest_common.sh@850 -- # return 0 00:15:33.962 12:43:32 -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.jAm0VxVmIz 00:15:33.962 12:43:33 -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:15:34.220 [2024-04-16 12:43:33.231806] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:34.481 nvme0n1 00:15:34.482 12:43:33 -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:34.482 Running I/O for 1 seconds... 
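This attach uses the keyring interface instead of a raw path: the key file is registered once under a name, and --psk then references the name. Condensed from the trace (socket and key file as above); note that, unlike the earlier attach, no nvme_ctrlr_psk deprecation fires here, only the experimental-TLS notice:

rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.jAm0VxVmIz
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1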
00:15:35.417 00:15:35.417 Latency(us) 00:15:35.417 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:35.417 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:35.417 Verification LBA range: start 0x0 length 0x2000 00:15:35.417 nvme0n1 : 1.03 3077.86 12.02 0.00 0.00 41040.37 8932.31 52040.44 00:15:35.417 =================================================================================================================== 00:15:35.417 Total : 3077.86 12.02 0.00 0.00 41040.37 8932.31 52040.44 00:15:35.417 0 00:15:35.417 12:43:34 -- target/tls.sh@234 -- # killprocess 1188345 00:15:35.417 12:43:34 -- common/autotest_common.sh@936 -- # '[' -z 1188345 ']' 00:15:35.417 12:43:34 -- common/autotest_common.sh@940 -- # kill -0 1188345 00:15:35.417 12:43:34 -- common/autotest_common.sh@941 -- # uname 00:15:35.417 12:43:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:35.417 12:43:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1188345 00:15:35.675 12:43:34 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:35.675 12:43:34 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:35.675 12:43:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1188345' 00:15:35.675 killing process with pid 1188345 00:15:35.675 12:43:34 -- common/autotest_common.sh@955 -- # kill 1188345 00:15:35.675 Received shutdown signal, test time was about 1.000000 seconds 00:15:35.675 00:15:35.675 Latency(us) 00:15:35.675 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:35.675 =================================================================================================================== 00:15:35.675 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:35.675 12:43:34 -- common/autotest_common.sh@960 -- # wait 1188345 00:15:35.933 12:43:34 -- target/tls.sh@235 -- # killprocess 1188058 00:15:35.933 12:43:34 -- common/autotest_common.sh@936 -- # '[' -z 1188058 ']' 00:15:35.933 12:43:34 -- common/autotest_common.sh@940 -- # kill -0 1188058 00:15:35.933 12:43:34 -- common/autotest_common.sh@941 -- # uname 00:15:35.933 12:43:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:35.933 12:43:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1188058 00:15:35.933 12:43:34 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:35.933 12:43:34 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:35.933 12:43:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1188058' 00:15:35.933 killing process with pid 1188058 00:15:35.933 12:43:34 -- common/autotest_common.sh@955 -- # kill 1188058 00:15:35.933 [2024-04-16 12:43:34.803002] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:15:35.933 12:43:34 -- common/autotest_common.sh@960 -- # wait 1188058 00:15:36.191 12:43:35 -- target/tls.sh@238 -- # nvmfappstart 00:15:36.191 12:43:35 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:36.191 12:43:35 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:36.191 12:43:35 -- common/autotest_common.sh@10 -- # set +x 00:15:36.191 12:43:35 -- nvmf/common.sh@470 -- # nvmfpid=1188760 00:15:36.191 12:43:35 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:15:36.191 12:43:35 -- nvmf/common.sh@471 -- # waitforlisten 1188760 
00:15:36.191 12:43:35 -- common/autotest_common.sh@817 -- # '[' -z 1188760 ']' 00:15:36.191 12:43:35 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:36.191 12:43:35 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:36.191 12:43:35 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:36.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:36.191 12:43:35 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:36.191 12:43:35 -- common/autotest_common.sh@10 -- # set +x 00:15:36.191 [2024-04-16 12:43:35.155105] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 00:15:36.191 [2024-04-16 12:43:35.155195] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:36.191 EAL: No free 2048 kB hugepages reported on node 1 00:15:36.191 [2024-04-16 12:43:35.233717] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:36.449 [2024-04-16 12:43:35.344323] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:36.449 [2024-04-16 12:43:35.344404] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:36.449 [2024-04-16 12:43:35.344431] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:36.449 [2024-04-16 12:43:35.344446] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:36.449 [2024-04-16 12:43:35.344458] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:36.449 [2024-04-16 12:43:35.344491] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:37.383 12:43:36 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:37.383 12:43:36 -- common/autotest_common.sh@850 -- # return 0 00:15:37.383 12:43:36 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:37.383 12:43:36 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:37.383 12:43:36 -- common/autotest_common.sh@10 -- # set +x 00:15:37.383 12:43:36 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:37.383 12:43:36 -- target/tls.sh@239 -- # rpc_cmd 00:15:37.383 12:43:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:37.383 12:43:36 -- common/autotest_common.sh@10 -- # set +x 00:15:37.383 [2024-04-16 12:43:36.128018] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:37.383 malloc0 00:15:37.383 [2024-04-16 12:43:36.160658] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:37.383 [2024-04-16 12:43:36.160928] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:37.383 12:43:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:37.383 12:43:36 -- target/tls.sh@252 -- # bdevperf_pid=1188912 00:15:37.383 12:43:36 -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:15:37.383 12:43:36 -- target/tls.sh@254 -- # waitforlisten 1188912 /var/tmp/bdevperf.sock 00:15:37.383 12:43:36 -- common/autotest_common.sh@817 -- # '[' -z 1188912 ']' 00:15:37.383 12:43:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:37.383 12:43:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:37.383 12:43:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:37.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:37.383 12:43:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:37.383 12:43:36 -- common/autotest_common.sh@10 -- # set +x 00:15:37.383 [2024-04-16 12:43:36.230324] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 
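Here the target itself is also configured with a named key: the rpc_cmd batch registers key0 and wires it into nvmf_subsystem_add_host, as the save_config dump further down confirms (a keyring_file_add_key entry plus "psk": "key0"). Reconstructed as one-shot rpc.py calls, with the subsystem parameters read back from that dump, the batch amounts to roughly:

rpc.py keyring_file_add_key key0 /tmp/tmp.jAm0VxVmIz
rpc.py nvmf_create_transport -t tcp
rpc.py bdev_malloc_create 32 4096 -b malloc0
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -m 32
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0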
00:15:37.383 [2024-04-16 12:43:36.230387] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1188912 ] 00:15:37.383 EAL: No free 2048 kB hugepages reported on node 1 00:15:37.383 [2024-04-16 12:43:36.305316] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:37.383 [2024-04-16 12:43:36.418980] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:37.642 12:43:36 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:37.642 12:43:36 -- common/autotest_common.sh@850 -- # return 0 00:15:37.642 12:43:36 -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.jAm0VxVmIz 00:15:37.900 12:43:36 -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:15:38.158 [2024-04-16 12:43:37.103227] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:38.158 nvme0n1 00:15:38.158 12:43:37 -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:38.416 Running I/O for 1 seconds... 00:15:39.349 00:15:39.349 Latency(us) 00:15:39.349 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:39.349 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:39.349 Verification LBA range: start 0x0 length 0x2000 00:15:39.349 nvme0n1 : 1.03 3023.50 11.81 0.00 0.00 41729.04 11699.39 75730.49 00:15:39.349 =================================================================================================================== 00:15:39.349 Total : 3023.50 11.81 0.00 0.00 41729.04 11699.39 75730.49 00:15:39.349 0 00:15:39.350 12:43:38 -- target/tls.sh@263 -- # rpc_cmd save_config 00:15:39.350 12:43:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:39.350 12:43:38 -- common/autotest_common.sh@10 -- # set +x 00:15:39.607 12:43:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:39.607 12:43:38 -- target/tls.sh@263 -- # tgtcfg='{ 00:15:39.607 "subsystems": [ 00:15:39.607 { 00:15:39.607 "subsystem": "keyring", 00:15:39.607 "config": [ 00:15:39.607 { 00:15:39.607 "method": "keyring_file_add_key", 00:15:39.607 "params": { 00:15:39.607 "name": "key0", 00:15:39.607 "path": "/tmp/tmp.jAm0VxVmIz" 00:15:39.607 } 00:15:39.607 } 00:15:39.607 ] 00:15:39.607 }, 00:15:39.607 { 00:15:39.607 "subsystem": "iobuf", 00:15:39.607 "config": [ 00:15:39.607 { 00:15:39.607 "method": "iobuf_set_options", 00:15:39.607 "params": { 00:15:39.607 "small_pool_count": 8192, 00:15:39.607 "large_pool_count": 1024, 00:15:39.607 "small_bufsize": 8192, 00:15:39.607 "large_bufsize": 135168 00:15:39.607 } 00:15:39.607 } 00:15:39.607 ] 00:15:39.607 }, 00:15:39.607 { 00:15:39.607 "subsystem": "sock", 00:15:39.607 "config": [ 00:15:39.607 { 00:15:39.607 "method": "sock_impl_set_options", 00:15:39.607 "params": { 00:15:39.607 "impl_name": "posix", 00:15:39.607 "recv_buf_size": 2097152, 00:15:39.607 "send_buf_size": 2097152, 00:15:39.607 "enable_recv_pipe": true, 00:15:39.607 "enable_quickack": false, 00:15:39.607 "enable_placement_id": 0, 00:15:39.607 
"enable_zerocopy_send_server": true, 00:15:39.607 "enable_zerocopy_send_client": false, 00:15:39.607 "zerocopy_threshold": 0, 00:15:39.607 "tls_version": 0, 00:15:39.607 "enable_ktls": false 00:15:39.607 } 00:15:39.607 }, 00:15:39.607 { 00:15:39.607 "method": "sock_impl_set_options", 00:15:39.607 "params": { 00:15:39.607 "impl_name": "ssl", 00:15:39.607 "recv_buf_size": 4096, 00:15:39.607 "send_buf_size": 4096, 00:15:39.607 "enable_recv_pipe": true, 00:15:39.607 "enable_quickack": false, 00:15:39.607 "enable_placement_id": 0, 00:15:39.607 "enable_zerocopy_send_server": true, 00:15:39.607 "enable_zerocopy_send_client": false, 00:15:39.608 "zerocopy_threshold": 0, 00:15:39.608 "tls_version": 0, 00:15:39.608 "enable_ktls": false 00:15:39.608 } 00:15:39.608 } 00:15:39.608 ] 00:15:39.608 }, 00:15:39.608 { 00:15:39.608 "subsystem": "vmd", 00:15:39.608 "config": [] 00:15:39.608 }, 00:15:39.608 { 00:15:39.608 "subsystem": "accel", 00:15:39.608 "config": [ 00:15:39.608 { 00:15:39.608 "method": "accel_set_options", 00:15:39.608 "params": { 00:15:39.608 "small_cache_size": 128, 00:15:39.608 "large_cache_size": 16, 00:15:39.608 "task_count": 2048, 00:15:39.608 "sequence_count": 2048, 00:15:39.608 "buf_count": 2048 00:15:39.608 } 00:15:39.608 } 00:15:39.608 ] 00:15:39.608 }, 00:15:39.608 { 00:15:39.608 "subsystem": "bdev", 00:15:39.608 "config": [ 00:15:39.608 { 00:15:39.608 "method": "bdev_set_options", 00:15:39.608 "params": { 00:15:39.608 "bdev_io_pool_size": 65535, 00:15:39.608 "bdev_io_cache_size": 256, 00:15:39.608 "bdev_auto_examine": true, 00:15:39.608 "iobuf_small_cache_size": 128, 00:15:39.608 "iobuf_large_cache_size": 16 00:15:39.608 } 00:15:39.608 }, 00:15:39.608 { 00:15:39.608 "method": "bdev_raid_set_options", 00:15:39.608 "params": { 00:15:39.608 "process_window_size_kb": 1024 00:15:39.608 } 00:15:39.608 }, 00:15:39.608 { 00:15:39.608 "method": "bdev_iscsi_set_options", 00:15:39.608 "params": { 00:15:39.608 "timeout_sec": 30 00:15:39.608 } 00:15:39.608 }, 00:15:39.608 { 00:15:39.608 "method": "bdev_nvme_set_options", 00:15:39.608 "params": { 00:15:39.608 "action_on_timeout": "none", 00:15:39.608 "timeout_us": 0, 00:15:39.608 "timeout_admin_us": 0, 00:15:39.608 "keep_alive_timeout_ms": 10000, 00:15:39.608 "arbitration_burst": 0, 00:15:39.608 "low_priority_weight": 0, 00:15:39.608 "medium_priority_weight": 0, 00:15:39.608 "high_priority_weight": 0, 00:15:39.608 "nvme_adminq_poll_period_us": 10000, 00:15:39.608 "nvme_ioq_poll_period_us": 0, 00:15:39.608 "io_queue_requests": 0, 00:15:39.608 "delay_cmd_submit": true, 00:15:39.608 "transport_retry_count": 4, 00:15:39.608 "bdev_retry_count": 3, 00:15:39.608 "transport_ack_timeout": 0, 00:15:39.608 "ctrlr_loss_timeout_sec": 0, 00:15:39.608 "reconnect_delay_sec": 0, 00:15:39.608 "fast_io_fail_timeout_sec": 0, 00:15:39.608 "disable_auto_failback": false, 00:15:39.608 "generate_uuids": false, 00:15:39.608 "transport_tos": 0, 00:15:39.608 "nvme_error_stat": false, 00:15:39.608 "rdma_srq_size": 0, 00:15:39.608 "io_path_stat": false, 00:15:39.608 "allow_accel_sequence": false, 00:15:39.608 "rdma_max_cq_size": 0, 00:15:39.608 "rdma_cm_event_timeout_ms": 0, 00:15:39.608 "dhchap_digests": [ 00:15:39.608 "sha256", 00:15:39.608 "sha384", 00:15:39.608 "sha512" 00:15:39.608 ], 00:15:39.608 "dhchap_dhgroups": [ 00:15:39.608 "null", 00:15:39.608 "ffdhe2048", 00:15:39.608 "ffdhe3072", 00:15:39.608 "ffdhe4096", 00:15:39.608 "ffdhe6144", 00:15:39.608 "ffdhe8192" 00:15:39.608 ] 00:15:39.608 } 00:15:39.608 }, 00:15:39.608 { 00:15:39.608 "method": 
"bdev_nvme_set_hotplug", 00:15:39.608 "params": { 00:15:39.608 "period_us": 100000, 00:15:39.608 "enable": false 00:15:39.608 } 00:15:39.608 }, 00:15:39.608 { 00:15:39.608 "method": "bdev_malloc_create", 00:15:39.608 "params": { 00:15:39.608 "name": "malloc0", 00:15:39.608 "num_blocks": 8192, 00:15:39.608 "block_size": 4096, 00:15:39.608 "physical_block_size": 4096, 00:15:39.608 "uuid": "74e9de11-687a-411b-bf73-cff8cc09036f", 00:15:39.608 "optimal_io_boundary": 0 00:15:39.608 } 00:15:39.608 }, 00:15:39.608 { 00:15:39.608 "method": "bdev_wait_for_examine" 00:15:39.608 } 00:15:39.608 ] 00:15:39.608 }, 00:15:39.608 { 00:15:39.608 "subsystem": "nbd", 00:15:39.608 "config": [] 00:15:39.608 }, 00:15:39.608 { 00:15:39.608 "subsystem": "scheduler", 00:15:39.608 "config": [ 00:15:39.608 { 00:15:39.608 "method": "framework_set_scheduler", 00:15:39.608 "params": { 00:15:39.608 "name": "static" 00:15:39.608 } 00:15:39.608 } 00:15:39.608 ] 00:15:39.608 }, 00:15:39.608 { 00:15:39.608 "subsystem": "nvmf", 00:15:39.608 "config": [ 00:15:39.608 { 00:15:39.608 "method": "nvmf_set_config", 00:15:39.608 "params": { 00:15:39.608 "discovery_filter": "match_any", 00:15:39.608 "admin_cmd_passthru": { 00:15:39.608 "identify_ctrlr": false 00:15:39.608 } 00:15:39.608 } 00:15:39.608 }, 00:15:39.608 { 00:15:39.608 "method": "nvmf_set_max_subsystems", 00:15:39.608 "params": { 00:15:39.608 "max_subsystems": 1024 00:15:39.608 } 00:15:39.608 }, 00:15:39.608 { 00:15:39.608 "method": "nvmf_set_crdt", 00:15:39.608 "params": { 00:15:39.608 "crdt1": 0, 00:15:39.608 "crdt2": 0, 00:15:39.608 "crdt3": 0 00:15:39.608 } 00:15:39.608 }, 00:15:39.608 { 00:15:39.608 "method": "nvmf_create_transport", 00:15:39.608 "params": { 00:15:39.608 "trtype": "TCP", 00:15:39.608 "max_queue_depth": 128, 00:15:39.608 "max_io_qpairs_per_ctrlr": 127, 00:15:39.608 "in_capsule_data_size": 4096, 00:15:39.608 "max_io_size": 131072, 00:15:39.608 "io_unit_size": 131072, 00:15:39.608 "max_aq_depth": 128, 00:15:39.608 "num_shared_buffers": 511, 00:15:39.608 "buf_cache_size": 4294967295, 00:15:39.608 "dif_insert_or_strip": false, 00:15:39.608 "zcopy": false, 00:15:39.608 "c2h_success": false, 00:15:39.608 "sock_priority": 0, 00:15:39.608 "abort_timeout_sec": 1, 00:15:39.608 "ack_timeout": 0 00:15:39.608 } 00:15:39.608 }, 00:15:39.608 { 00:15:39.608 "method": "nvmf_create_subsystem", 00:15:39.608 "params": { 00:15:39.608 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:39.608 "allow_any_host": false, 00:15:39.608 "serial_number": "00000000000000000000", 00:15:39.608 "model_number": "SPDK bdev Controller", 00:15:39.608 "max_namespaces": 32, 00:15:39.608 "min_cntlid": 1, 00:15:39.608 "max_cntlid": 65519, 00:15:39.608 "ana_reporting": false 00:15:39.608 } 00:15:39.608 }, 00:15:39.608 { 00:15:39.608 "method": "nvmf_subsystem_add_host", 00:15:39.608 "params": { 00:15:39.608 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:39.608 "host": "nqn.2016-06.io.spdk:host1", 00:15:39.608 "psk": "key0" 00:15:39.608 } 00:15:39.608 }, 00:15:39.608 { 00:15:39.608 "method": "nvmf_subsystem_add_ns", 00:15:39.608 "params": { 00:15:39.608 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:39.608 "namespace": { 00:15:39.608 "nsid": 1, 00:15:39.608 "bdev_name": "malloc0", 00:15:39.608 "nguid": "74E9DE11687A411BBF73CFF8CC09036F", 00:15:39.608 "uuid": "74e9de11-687a-411b-bf73-cff8cc09036f", 00:15:39.608 "no_auto_visible": false 00:15:39.608 } 00:15:39.608 } 00:15:39.608 }, 00:15:39.608 { 00:15:39.608 "method": "nvmf_subsystem_add_listener", 00:15:39.608 "params": { 00:15:39.608 "nqn": 
"nqn.2016-06.io.spdk:cnode1", 00:15:39.608 "listen_address": { 00:15:39.608 "trtype": "TCP", 00:15:39.608 "adrfam": "IPv4", 00:15:39.608 "traddr": "10.0.0.2", 00:15:39.608 "trsvcid": "4420" 00:15:39.608 }, 00:15:39.608 "secure_channel": true 00:15:39.608 } 00:15:39.608 } 00:15:39.608 ] 00:15:39.608 } 00:15:39.608 ] 00:15:39.608 }' 00:15:39.608 12:43:38 -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:15:39.867 12:43:38 -- target/tls.sh@264 -- # bperfcfg='{ 00:15:39.867 "subsystems": [ 00:15:39.867 { 00:15:39.867 "subsystem": "keyring", 00:15:39.867 "config": [ 00:15:39.867 { 00:15:39.867 "method": "keyring_file_add_key", 00:15:39.867 "params": { 00:15:39.867 "name": "key0", 00:15:39.867 "path": "/tmp/tmp.jAm0VxVmIz" 00:15:39.867 } 00:15:39.867 } 00:15:39.867 ] 00:15:39.867 }, 00:15:39.867 { 00:15:39.867 "subsystem": "iobuf", 00:15:39.867 "config": [ 00:15:39.867 { 00:15:39.867 "method": "iobuf_set_options", 00:15:39.867 "params": { 00:15:39.867 "small_pool_count": 8192, 00:15:39.867 "large_pool_count": 1024, 00:15:39.867 "small_bufsize": 8192, 00:15:39.867 "large_bufsize": 135168 00:15:39.867 } 00:15:39.867 } 00:15:39.867 ] 00:15:39.867 }, 00:15:39.867 { 00:15:39.867 "subsystem": "sock", 00:15:39.867 "config": [ 00:15:39.867 { 00:15:39.867 "method": "sock_impl_set_options", 00:15:39.867 "params": { 00:15:39.867 "impl_name": "posix", 00:15:39.867 "recv_buf_size": 2097152, 00:15:39.867 "send_buf_size": 2097152, 00:15:39.867 "enable_recv_pipe": true, 00:15:39.867 "enable_quickack": false, 00:15:39.867 "enable_placement_id": 0, 00:15:39.867 "enable_zerocopy_send_server": true, 00:15:39.867 "enable_zerocopy_send_client": false, 00:15:39.867 "zerocopy_threshold": 0, 00:15:39.867 "tls_version": 0, 00:15:39.867 "enable_ktls": false 00:15:39.867 } 00:15:39.867 }, 00:15:39.867 { 00:15:39.867 "method": "sock_impl_set_options", 00:15:39.867 "params": { 00:15:39.867 "impl_name": "ssl", 00:15:39.867 "recv_buf_size": 4096, 00:15:39.867 "send_buf_size": 4096, 00:15:39.867 "enable_recv_pipe": true, 00:15:39.867 "enable_quickack": false, 00:15:39.867 "enable_placement_id": 0, 00:15:39.867 "enable_zerocopy_send_server": true, 00:15:39.867 "enable_zerocopy_send_client": false, 00:15:39.867 "zerocopy_threshold": 0, 00:15:39.867 "tls_version": 0, 00:15:39.867 "enable_ktls": false 00:15:39.867 } 00:15:39.867 } 00:15:39.867 ] 00:15:39.867 }, 00:15:39.867 { 00:15:39.867 "subsystem": "vmd", 00:15:39.867 "config": [] 00:15:39.867 }, 00:15:39.867 { 00:15:39.867 "subsystem": "accel", 00:15:39.867 "config": [ 00:15:39.867 { 00:15:39.867 "method": "accel_set_options", 00:15:39.867 "params": { 00:15:39.867 "small_cache_size": 128, 00:15:39.867 "large_cache_size": 16, 00:15:39.867 "task_count": 2048, 00:15:39.867 "sequence_count": 2048, 00:15:39.867 "buf_count": 2048 00:15:39.867 } 00:15:39.867 } 00:15:39.867 ] 00:15:39.867 }, 00:15:39.867 { 00:15:39.867 "subsystem": "bdev", 00:15:39.867 "config": [ 00:15:39.867 { 00:15:39.867 "method": "bdev_set_options", 00:15:39.867 "params": { 00:15:39.867 "bdev_io_pool_size": 65535, 00:15:39.867 "bdev_io_cache_size": 256, 00:15:39.867 "bdev_auto_examine": true, 00:15:39.867 "iobuf_small_cache_size": 128, 00:15:39.867 "iobuf_large_cache_size": 16 00:15:39.867 } 00:15:39.867 }, 00:15:39.867 { 00:15:39.867 "method": "bdev_raid_set_options", 00:15:39.867 "params": { 00:15:39.867 "process_window_size_kb": 1024 00:15:39.867 } 00:15:39.867 }, 00:15:39.867 { 00:15:39.867 "method": "bdev_iscsi_set_options", 
00:15:39.867 "params": { 00:15:39.867 "timeout_sec": 30 00:15:39.867 } 00:15:39.867 }, 00:15:39.867 { 00:15:39.867 "method": "bdev_nvme_set_options", 00:15:39.867 "params": { 00:15:39.867 "action_on_timeout": "none", 00:15:39.867 "timeout_us": 0, 00:15:39.867 "timeout_admin_us": 0, 00:15:39.867 "keep_alive_timeout_ms": 10000, 00:15:39.867 "arbitration_burst": 0, 00:15:39.867 "low_priority_weight": 0, 00:15:39.867 "medium_priority_weight": 0, 00:15:39.867 "high_priority_weight": 0, 00:15:39.867 "nvme_adminq_poll_period_us": 10000, 00:15:39.867 "nvme_ioq_poll_period_us": 0, 00:15:39.867 "io_queue_requests": 512, 00:15:39.867 "delay_cmd_submit": true, 00:15:39.867 "transport_retry_count": 4, 00:15:39.867 "bdev_retry_count": 3, 00:15:39.867 "transport_ack_timeout": 0, 00:15:39.867 "ctrlr_loss_timeout_sec": 0, 00:15:39.867 "reconnect_delay_sec": 0, 00:15:39.867 "fast_io_fail_timeout_sec": 0, 00:15:39.867 "disable_auto_failback": false, 00:15:39.868 "generate_uuids": false, 00:15:39.868 "transport_tos": 0, 00:15:39.868 "nvme_error_stat": false, 00:15:39.868 "rdma_srq_size": 0, 00:15:39.868 "io_path_stat": false, 00:15:39.868 "allow_accel_sequence": false, 00:15:39.868 "rdma_max_cq_size": 0, 00:15:39.868 "rdma_cm_event_timeout_ms": 0, 00:15:39.868 "dhchap_digests": [ 00:15:39.868 "sha256", 00:15:39.868 "sha384", 00:15:39.868 "sha512" 00:15:39.868 ], 00:15:39.868 "dhchap_dhgroups": [ 00:15:39.868 "null", 00:15:39.868 "ffdhe2048", 00:15:39.868 "ffdhe3072", 00:15:39.868 "ffdhe4096", 00:15:39.868 "ffdhe6144", 00:15:39.868 "ffdhe8192" 00:15:39.868 ] 00:15:39.868 } 00:15:39.868 }, 00:15:39.868 { 00:15:39.868 "method": "bdev_nvme_attach_controller", 00:15:39.868 "params": { 00:15:39.868 "name": "nvme0", 00:15:39.868 "trtype": "TCP", 00:15:39.868 "adrfam": "IPv4", 00:15:39.868 "traddr": "10.0.0.2", 00:15:39.868 "trsvcid": "4420", 00:15:39.868 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:39.868 "prchk_reftag": false, 00:15:39.868 "prchk_guard": false, 00:15:39.868 "ctrlr_loss_timeout_sec": 0, 00:15:39.868 "reconnect_delay_sec": 0, 00:15:39.868 "fast_io_fail_timeout_sec": 0, 00:15:39.868 "psk": "key0", 00:15:39.868 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:39.868 "hdgst": false, 00:15:39.868 "ddgst": false 00:15:39.868 } 00:15:39.868 }, 00:15:39.868 { 00:15:39.868 "method": "bdev_nvme_set_hotplug", 00:15:39.868 "params": { 00:15:39.868 "period_us": 100000, 00:15:39.868 "enable": false 00:15:39.868 } 00:15:39.868 }, 00:15:39.868 { 00:15:39.868 "method": "bdev_enable_histogram", 00:15:39.868 "params": { 00:15:39.868 "name": "nvme0n1", 00:15:39.868 "enable": true 00:15:39.868 } 00:15:39.868 }, 00:15:39.868 { 00:15:39.868 "method": "bdev_wait_for_examine" 00:15:39.868 } 00:15:39.868 ] 00:15:39.868 }, 00:15:39.868 { 00:15:39.868 "subsystem": "nbd", 00:15:39.868 "config": [] 00:15:39.868 } 00:15:39.868 ] 00:15:39.868 }' 00:15:39.868 12:43:38 -- target/tls.sh@266 -- # killprocess 1188912 00:15:39.868 12:43:38 -- common/autotest_common.sh@936 -- # '[' -z 1188912 ']' 00:15:39.868 12:43:38 -- common/autotest_common.sh@940 -- # kill -0 1188912 00:15:39.868 12:43:38 -- common/autotest_common.sh@941 -- # uname 00:15:39.868 12:43:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:39.868 12:43:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1188912 00:15:39.868 12:43:38 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:39.868 12:43:38 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:39.868 12:43:38 -- common/autotest_common.sh@954 -- # echo 
'killing process with pid 1188912' 00:15:39.868 killing process with pid 1188912 00:15:39.868 12:43:38 -- common/autotest_common.sh@955 -- # kill 1188912 00:15:39.868 Received shutdown signal, test time was about 1.000000 seconds 00:15:39.868 00:15:39.868 Latency(us) 00:15:39.868 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:39.868 =================================================================================================================== 00:15:39.868 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:39.868 12:43:38 -- common/autotest_common.sh@960 -- # wait 1188912 00:15:40.126 12:43:39 -- target/tls.sh@267 -- # killprocess 1188760 00:15:40.126 12:43:39 -- common/autotest_common.sh@936 -- # '[' -z 1188760 ']' 00:15:40.126 12:43:39 -- common/autotest_common.sh@940 -- # kill -0 1188760 00:15:40.126 12:43:39 -- common/autotest_common.sh@941 -- # uname 00:15:40.126 12:43:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:40.126 12:43:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1188760 00:15:40.126 12:43:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:40.126 12:43:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:40.126 12:43:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1188760' 00:15:40.126 killing process with pid 1188760 00:15:40.126 12:43:39 -- common/autotest_common.sh@955 -- # kill 1188760 00:15:40.126 12:43:39 -- common/autotest_common.sh@960 -- # wait 1188760 00:15:40.385 12:43:39 -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:15:40.385 12:43:39 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:40.385 12:43:39 -- target/tls.sh@269 -- # echo '{ 00:15:40.385 "subsystems": [ 00:15:40.385 { 00:15:40.385 "subsystem": "keyring", 00:15:40.385 "config": [ 00:15:40.385 { 00:15:40.385 "method": "keyring_file_add_key", 00:15:40.385 "params": { 00:15:40.385 "name": "key0", 00:15:40.385 "path": "/tmp/tmp.jAm0VxVmIz" 00:15:40.385 } 00:15:40.385 } 00:15:40.385 ] 00:15:40.385 }, 00:15:40.385 { 00:15:40.385 "subsystem": "iobuf", 00:15:40.385 "config": [ 00:15:40.385 { 00:15:40.385 "method": "iobuf_set_options", 00:15:40.385 "params": { 00:15:40.385 "small_pool_count": 8192, 00:15:40.385 "large_pool_count": 1024, 00:15:40.385 "small_bufsize": 8192, 00:15:40.385 "large_bufsize": 135168 00:15:40.385 } 00:15:40.385 } 00:15:40.385 ] 00:15:40.385 }, 00:15:40.385 { 00:15:40.385 "subsystem": "sock", 00:15:40.385 "config": [ 00:15:40.385 { 00:15:40.385 "method": "sock_impl_set_options", 00:15:40.385 "params": { 00:15:40.385 "impl_name": "posix", 00:15:40.385 "recv_buf_size": 2097152, 00:15:40.385 "send_buf_size": 2097152, 00:15:40.385 "enable_recv_pipe": true, 00:15:40.385 "enable_quickack": false, 00:15:40.385 "enable_placement_id": 0, 00:15:40.385 "enable_zerocopy_send_server": true, 00:15:40.385 "enable_zerocopy_send_client": false, 00:15:40.385 "zerocopy_threshold": 0, 00:15:40.385 "tls_version": 0, 00:15:40.385 "enable_ktls": false 00:15:40.385 } 00:15:40.385 }, 00:15:40.385 { 00:15:40.385 "method": "sock_impl_set_options", 00:15:40.385 "params": { 00:15:40.385 "impl_name": "ssl", 00:15:40.385 "recv_buf_size": 4096, 00:15:40.385 "send_buf_size": 4096, 00:15:40.386 "enable_recv_pipe": true, 00:15:40.386 "enable_quickack": false, 00:15:40.386 "enable_placement_id": 0, 00:15:40.386 "enable_zerocopy_send_server": true, 00:15:40.386 "enable_zerocopy_send_client": false, 00:15:40.386 "zerocopy_threshold": 0, 00:15:40.386 "tls_version": 0, 
00:15:40.386 "enable_ktls": false 00:15:40.386 } 00:15:40.386 } 00:15:40.386 ] 00:15:40.386 }, 00:15:40.386 { 00:15:40.386 "subsystem": "vmd", 00:15:40.386 "config": [] 00:15:40.386 }, 00:15:40.386 { 00:15:40.386 "subsystem": "accel", 00:15:40.386 "config": [ 00:15:40.386 { 00:15:40.386 "method": "accel_set_options", 00:15:40.386 "params": { 00:15:40.386 "small_cache_size": 128, 00:15:40.386 "large_cache_size": 16, 00:15:40.386 "task_count": 2048, 00:15:40.386 "sequence_count": 2048, 00:15:40.386 "buf_count": 2048 00:15:40.386 } 00:15:40.386 } 00:15:40.386 ] 00:15:40.386 }, 00:15:40.386 { 00:15:40.386 "subsystem": "bdev", 00:15:40.386 "config": [ 00:15:40.386 { 00:15:40.386 "method": "bdev_set_options", 00:15:40.386 "params": { 00:15:40.386 "bdev_io_pool_size": 65535, 00:15:40.386 "bdev_io_cache_size": 256, 00:15:40.386 "bdev_auto_examine": true, 00:15:40.386 "iobuf_small_cache_size": 128, 00:15:40.386 "iobuf_large_cache_size": 16 00:15:40.386 } 00:15:40.386 }, 00:15:40.386 { 00:15:40.386 "method": "bdev_raid_set_options", 00:15:40.386 "params": { 00:15:40.386 "process_window_size_kb": 1024 00:15:40.386 } 00:15:40.386 }, 00:15:40.386 { 00:15:40.386 "method": "bdev_iscsi_set_options", 00:15:40.386 "params": { 00:15:40.386 "timeout_sec": 30 00:15:40.386 } 00:15:40.386 }, 00:15:40.386 { 00:15:40.386 "method": "bdev_nvme_set_options", 00:15:40.386 "params": { 00:15:40.386 "action_on_timeout": "none", 00:15:40.386 "timeout_us": 0, 00:15:40.386 "timeout_admin_us": 0, 00:15:40.386 "keep_alive_timeout_ms": 10000, 00:15:40.386 "arbitration_burst": 0, 00:15:40.386 "low_priority_weight": 0, 00:15:40.386 "medium_priority_weight": 0, 00:15:40.386 "high_priority_weight": 0, 00:15:40.386 "nvme_adminq_poll_period_us": 10000, 00:15:40.386 "nvme_ioq_poll_period_us": 0, 00:15:40.386 "io_queue_requests": 0, 00:15:40.386 "delay_cmd_submit": true, 00:15:40.386 "transport_retry_count": 4, 00:15:40.386 "bdev_retry_count": 3, 00:15:40.386 "transport_ack_timeout": 0, 00:15:40.386 "ctrlr_loss_timeout_sec": 0, 00:15:40.386 "reconnect_delay_sec": 0, 00:15:40.386 "fast_io_fail_timeout_sec": 0, 00:15:40.386 "disable_auto_failback": false, 00:15:40.386 "generate_uuids": false, 00:15:40.386 "transport_tos": 0, 00:15:40.386 "nvme_error_stat": false, 00:15:40.386 "rdma_srq_size": 0, 00:15:40.386 "io_path_stat": false, 00:15:40.386 "allow_accel_sequence": false, 00:15:40.386 "rdma_max_cq_size": 0, 00:15:40.386 "rdma_cm_event_timeout_ms": 0, 00:15:40.386 "dhchap_digests": [ 00:15:40.386 "sha256", 00:15:40.386 "sha384", 00:15:40.386 "sha512" 00:15:40.386 ], 00:15:40.386 "dhchap_dhgroups": [ 00:15:40.386 "null", 00:15:40.386 "ffdhe2048", 00:15:40.386 "ffdhe3072", 00:15:40.386 "ffdhe4096", 00:15:40.386 "ffdhe6144", 00:15:40.386 "ffdhe8192" 00:15:40.386 ] 00:15:40.386 } 00:15:40.386 }, 00:15:40.386 { 00:15:40.386 "method": "bdev_nvme_set_hotplug", 00:15:40.386 "params": { 00:15:40.386 "period_us": 100000, 00:15:40.386 "enable": false 00:15:40.386 } 00:15:40.386 }, 00:15:40.386 { 00:15:40.386 "method": "bdev_malloc_create", 00:15:40.386 "params": { 00:15:40.386 "name": "malloc0", 00:15:40.386 "num_blocks": 8192, 00:15:40.386 "block_size": 4096, 00:15:40.386 "physical_block_size": 4096, 00:15:40.386 "uuid": "74e9de11-687a-411b-bf73-cff8cc09036f", 00:15:40.386 "optimal_io_boundary": 0 00:15:40.386 } 00:15:40.386 }, 00:15:40.386 { 00:15:40.386 "method": "bdev_wait_for_examine" 00:15:40.386 } 00:15:40.386 ] 00:15:40.386 }, 00:15:40.386 { 00:15:40.386 "subsystem": "nbd", 00:15:40.386 "config": [] 00:15:40.386 }, 00:15:40.386 { 
00:15:40.386 "subsystem": "scheduler", 00:15:40.386 "config": [ 00:15:40.386 { 00:15:40.386 "method": "framework_set_scheduler", 00:15:40.386 "params": { 00:15:40.386 "name": "static" 00:15:40.386 } 00:15:40.386 } 00:15:40.386 ] 00:15:40.386 }, 00:15:40.386 { 00:15:40.386 "subsystem": "nvmf", 00:15:40.386 "config": [ 00:15:40.386 { 00:15:40.386 "method": "nvmf_set_config", 00:15:40.386 "params": { 00:15:40.386 "discovery_filter": "match_any", 00:15:40.386 "admin_cmd_passthru": { 00:15:40.386 "identify_ctrlr": false 00:15:40.386 } 00:15:40.386 } 00:15:40.386 }, 00:15:40.386 { 00:15:40.386 "method": "nvmf_set_max_subsystems", 00:15:40.386 "params": { 00:15:40.386 "max_subsystems": 1024 00:15:40.386 } 00:15:40.386 }, 00:15:40.386 { 00:15:40.386 "method": "nvmf_set_crdt", 00:15:40.386 "params": { 00:15:40.386 "crdt1": 0, 00:15:40.386 "crdt2": 0, 00:15:40.386 "crdt3": 0 00:15:40.386 } 00:15:40.386 }, 00:15:40.386 { 00:15:40.386 "method": "nvmf_create_transport", 00:15:40.386 "params": { 00:15:40.386 "trtype": "TCP", 00:15:40.386 "max_queue_depth": 128, 00:15:40.386 "max_io_qpairs_per_ctrlr": 127, 00:15:40.386 "in_capsule_data_size": 4096, 00:15:40.386 "max_io_size": 131072, 00:15:40.386 "io_unit_size": 131072, 00:15:40.386 "max_aq_depth": 128, 00:15:40.386 "num_shared_buffers": 511, 00:15:40.386 "buf_cache_size": 4294967295, 00:15:40.386 "dif_insert_or_strip": false, 00:15:40.386 "zcopy": false, 00:15:40.386 "c2h_success": false, 00:15:40.386 "sock_priority": 0, 00:15:40.386 "abort_timeout_sec": 1, 00:15:40.386 "ack_timeout": 0 00:15:40.386 } 00:15:40.386 }, 00:15:40.386 { 00:15:40.386 "method": "nvmf_create_subsystem", 00:15:40.386 "params": { 00:15:40.386 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:40.386 "allow_any_host": false, 00:15:40.386 "serial_number": "00000000000000000000", 00:15:40.386 "model_number": "SPDK bdev Controller", 00:15:40.386 "max_namespaces": 32, 00:15:40.386 "min_cntlid": 1, 00:15:40.386 "max_cntlid": 65519, 00:15:40.386 "ana_reporting": false 00:15:40.386 } 00:15:40.386 }, 00:15:40.386 { 00:15:40.386 "method": "nvmf_subsystem_add_host", 00:15:40.386 "params": { 00:15:40.386 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:40.386 "host": "nqn.2016-06.io.spdk:host1", 00:15:40.386 "psk": "key0" 00:15:40.386 } 00:15:40.386 }, 00:15:40.386 { 00:15:40.386 "method": "nvmf_subsystem_add_ns", 00:15:40.386 "params": { 00:15:40.386 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:40.386 "namespace": { 00:15:40.386 "nsid": 1, 00:15:40.386 "bdev_name": "malloc0", 00:15:40.386 "nguid": "74E9DE11687A411BBF73CFF8CC09036F", 00:15:40.386 "uuid": "74e9de11-687a-411b-bf73-cff8cc09036f", 00:15:40.386 "no_auto_visible": false 00:15:40.386 } 00:15:40.386 } 00:15:40.386 }, 00:15:40.386 { 00:15:40.386 "method": "nvmf_subsystem_add_listener", 00:15:40.386 "params": { 00:15:40.386 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:40.386 "listen_address": { 00:15:40.386 "trtype": "TCP", 00:15:40.386 "adrfam": "IPv4", 00:15:40.386 "traddr": "10.0.0.2", 00:15:40.386 "trsvcid": "4420" 00:15:40.386 }, 00:15:40.386 "secure_channel": true 00:15:40.386 } 00:15:40.386 } 00:15:40.386 ] 00:15:40.386 } 00:15:40.386 ] 00:15:40.386 }' 00:15:40.386 12:43:39 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:40.386 12:43:39 -- common/autotest_common.sh@10 -- # set +x 00:15:40.386 12:43:39 -- nvmf/common.sh@470 -- # nvmfpid=1189318 00:15:40.386 12:43:39 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:15:40.386 
12:43:39 -- nvmf/common.sh@471 -- # waitforlisten 1189318 00:15:40.386 12:43:39 -- common/autotest_common.sh@817 -- # '[' -z 1189318 ']' 00:15:40.386 12:43:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:40.386 12:43:39 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:40.386 12:43:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:40.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:40.386 12:43:39 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:40.386 12:43:39 -- common/autotest_common.sh@10 -- # set +x 00:15:40.386 [2024-04-16 12:43:39.412705] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 00:15:40.386 [2024-04-16 12:43:39.412800] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:40.386 EAL: No free 2048 kB hugepages reported on node 1 00:15:40.644 [2024-04-16 12:43:39.493416] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:40.644 [2024-04-16 12:43:39.606424] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:40.644 [2024-04-16 12:43:39.606504] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:40.644 [2024-04-16 12:43:39.606521] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:40.644 [2024-04-16 12:43:39.606535] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:40.644 [2024-04-16 12:43:39.606546] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:40.644 [2024-04-16 12:43:39.606667] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:40.903 [2024-04-16 12:43:39.846439] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:40.903 [2024-04-16 12:43:39.878456] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:40.903 [2024-04-16 12:43:39.889807] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:41.470 12:43:40 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:41.470 12:43:40 -- common/autotest_common.sh@850 -- # return 0 00:15:41.470 12:43:40 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:41.470 12:43:40 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:41.470 12:43:40 -- common/autotest_common.sh@10 -- # set +x 00:15:41.470 12:43:40 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:41.470 12:43:40 -- target/tls.sh@272 -- # bdevperf_pid=1189461 00:15:41.470 12:43:40 -- target/tls.sh@273 -- # waitforlisten 1189461 /var/tmp/bdevperf.sock 00:15:41.470 12:43:40 -- common/autotest_common.sh@817 -- # '[' -z 1189461 ']' 00:15:41.470 12:43:40 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:41.470 12:43:40 -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:15:41.470 12:43:40 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:41.470 12:43:40 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:41.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
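(The waitforlisten helper used here for both the target and bdevperf simply polls the app's UNIX-domain RPC socket until it answers. A rough stand-in for the autotest_common.sh logic — simplified, not the exact implementation:

    # rough equivalent of waitforlisten: poll the RPC socket until live
    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
        for _ in $(seq 1 100); do
            kill -0 "$pid" 2>/dev/null || return 1      # app exited early
            ./scripts/rpc.py -s "$rpc_addr" rpc_get_methods \
                >/dev/null 2>&1 && return 0             # socket is answering
            sleep 0.1
        done
        return 1
    }
)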
00:15:41.470 12:43:40 -- target/tls.sh@270 -- # echo '{ 00:15:41.470 "subsystems": [ 00:15:41.470 { 00:15:41.470 "subsystem": "keyring", 00:15:41.470 "config": [ 00:15:41.470 { 00:15:41.470 "method": "keyring_file_add_key", 00:15:41.470 "params": { 00:15:41.470 "name": "key0", 00:15:41.470 "path": "/tmp/tmp.jAm0VxVmIz" 00:15:41.470 } 00:15:41.470 } 00:15:41.470 ] 00:15:41.470 }, 00:15:41.470 { 00:15:41.470 "subsystem": "iobuf", 00:15:41.470 "config": [ 00:15:41.470 { 00:15:41.470 "method": "iobuf_set_options", 00:15:41.470 "params": { 00:15:41.470 "small_pool_count": 8192, 00:15:41.470 "large_pool_count": 1024, 00:15:41.470 "small_bufsize": 8192, 00:15:41.470 "large_bufsize": 135168 00:15:41.470 } 00:15:41.470 } 00:15:41.470 ] 00:15:41.470 }, 00:15:41.470 { 00:15:41.470 "subsystem": "sock", 00:15:41.470 "config": [ 00:15:41.470 { 00:15:41.470 "method": "sock_impl_set_options", 00:15:41.470 "params": { 00:15:41.470 "impl_name": "posix", 00:15:41.470 "recv_buf_size": 2097152, 00:15:41.470 "send_buf_size": 2097152, 00:15:41.470 "enable_recv_pipe": true, 00:15:41.470 "enable_quickack": false, 00:15:41.470 "enable_placement_id": 0, 00:15:41.470 "enable_zerocopy_send_server": true, 00:15:41.470 "enable_zerocopy_send_client": false, 00:15:41.470 "zerocopy_threshold": 0, 00:15:41.470 "tls_version": 0, 00:15:41.470 "enable_ktls": false 00:15:41.470 } 00:15:41.470 }, 00:15:41.470 { 00:15:41.470 "method": "sock_impl_set_options", 00:15:41.470 "params": { 00:15:41.470 "impl_name": "ssl", 00:15:41.470 "recv_buf_size": 4096, 00:15:41.470 "send_buf_size": 4096, 00:15:41.470 "enable_recv_pipe": true, 00:15:41.470 "enable_quickack": false, 00:15:41.470 "enable_placement_id": 0, 00:15:41.470 "enable_zerocopy_send_server": true, 00:15:41.470 "enable_zerocopy_send_client": false, 00:15:41.470 "zerocopy_threshold": 0, 00:15:41.470 "tls_version": 0, 00:15:41.470 "enable_ktls": false 00:15:41.470 } 00:15:41.470 } 00:15:41.470 ] 00:15:41.471 }, 00:15:41.471 { 00:15:41.471 "subsystem": "vmd", 00:15:41.471 "config": [] 00:15:41.471 }, 00:15:41.471 { 00:15:41.471 "subsystem": "accel", 00:15:41.471 "config": [ 00:15:41.471 { 00:15:41.471 "method": "accel_set_options", 00:15:41.471 "params": { 00:15:41.471 "small_cache_size": 128, 00:15:41.471 "large_cache_size": 16, 00:15:41.471 "task_count": 2048, 00:15:41.471 "sequence_count": 2048, 00:15:41.471 "buf_count": 2048 00:15:41.471 } 00:15:41.471 } 00:15:41.471 ] 00:15:41.471 }, 00:15:41.471 { 00:15:41.471 "subsystem": "bdev", 00:15:41.471 "config": [ 00:15:41.471 { 00:15:41.471 "method": "bdev_set_options", 00:15:41.471 "params": { 00:15:41.471 "bdev_io_pool_size": 65535, 00:15:41.471 "bdev_io_cache_size": 256, 00:15:41.471 "bdev_auto_examine": true, 00:15:41.471 "iobuf_small_cache_size": 128, 00:15:41.471 "iobuf_large_cache_size": 16 00:15:41.471 } 00:15:41.471 }, 00:15:41.471 { 00:15:41.471 "method": "bdev_raid_set_options", 00:15:41.471 "params": { 00:15:41.471 "process_window_size_kb": 1024 00:15:41.471 } 00:15:41.471 }, 00:15:41.471 { 00:15:41.471 "method": "bdev_iscsi_set_options", 00:15:41.471 "params": { 00:15:41.471 "timeout_sec": 30 00:15:41.471 } 00:15:41.471 }, 00:15:41.471 { 00:15:41.471 "method": "bdev_nvme_set_options", 00:15:41.471 "params": { 00:15:41.471 "action_on_timeout": "none", 00:15:41.471 "timeout_us": 0, 00:15:41.471 "timeout_admin_us": 0, 00:15:41.471 "keep_alive_timeout_ms": 10000, 00:15:41.471 "arbitration_burst": 0, 00:15:41.471 "low_priority_weight": 0, 00:15:41.471 "medium_priority_weight": 0, 00:15:41.471 "high_priority_weight": 0, 
00:15:41.471 "nvme_adminq_poll_period_us": 10000, 00:15:41.471 "nvme_ioq_poll_period_us": 0, 00:15:41.471 "io_queue_requests": 512, 00:15:41.471 "delay_cmd_submit": true, 00:15:41.471 "transport_retry_count": 4, 00:15:41.471 "bdev_retry_count": 3, 00:15:41.471 "transport_ack_timeout": 0, 00:15:41.471 "ctrlr_loss_timeout_sec": 0, 00:15:41.471 "reconnect_delay_sec": 0, 00:15:41.471 "fast_io_fail_timeout_sec": 0, 00:15:41.471 "disable_auto_failback": false, 00:15:41.471 "generate_uuids": false, 00:15:41.471 "transport_tos": 0, 00:15:41.471 "nvme_error_stat": false, 00:15:41.471 "rdma_srq_size": 0, 00:15:41.471 "io_path_stat": false, 00:15:41.471 "allow_accel_sequence": false, 00:15:41.471 "rdma_max_cq_size": 0, 00:15:41.471 "rdma_cm_event_timeout_ms": 0, 00:15:41.471 "dhchap_digests": [ 00:15:41.471 "sha256", 00:15:41.471 "sha384", 00:15:41.471 "sha512" 00:15:41.471 ], 00:15:41.471 "dhchap_dhgroups": [ 00:15:41.471 "null", 00:15:41.471 "ffdhe2048", 00:15:41.471 "ffdhe3072", 00:15:41.471 "ffdhe4096", 00:15:41.471 "ffdhe6144", 00:15:41.471 "ffdhe8192" 00:15:41.471 ] 00:15:41.471 } 00:15:41.471 }, 00:15:41.471 { 00:15:41.471 "method": "bdev_nvme_attach_controller", 00:15:41.471 "params": { 00:15:41.471 "name": "nvme0", 00:15:41.471 "trtype": "TCP", 00:15:41.471 "adrfam": "IPv4", 00:15:41.471 "traddr": "10.0.0.2", 00:15:41.471 "trsvcid": "4420", 00:15:41.471 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:41.471 "prchk_reftag": false, 00:15:41.471 "prchk_guard": false, 00:15:41.471 "ctrlr_loss_timeout_sec": 0, 00:15:41.471 "reconnect_delay_sec": 0, 00:15:41.471 "fast_io_fail_timeout_sec": 0, 00:15:41.471 "psk": "key0", 00:15:41.471 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:41.471 "hdgst": false, 00:15:41.471 "ddgst": false 00:15:41.471 } 00:15:41.471 }, 00:15:41.471 { 00:15:41.471 "method": "bdev_nvme_set_hotplug", 00:15:41.471 "params": { 00:15:41.471 "period_us": 100000, 00:15:41.471 "enable": false 00:15:41.471 } 00:15:41.471 }, 00:15:41.471 { 00:15:41.471 "method": "bdev_enable_histogram", 00:15:41.471 "params": { 00:15:41.471 "name": "nvme0n1", 00:15:41.471 "enable": true 00:15:41.471 } 00:15:41.471 }, 00:15:41.471 { 00:15:41.471 "method": "bdev_wait_for_examine" 00:15:41.471 } 00:15:41.471 ] 00:15:41.471 }, 00:15:41.471 { 00:15:41.471 "subsystem": "nbd", 00:15:41.471 "config": [] 00:15:41.471 } 00:15:41.471 ] 00:15:41.471 }' 00:15:41.471 12:43:40 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:41.471 12:43:40 -- common/autotest_common.sh@10 -- # set +x 00:15:41.471 [2024-04-16 12:43:40.448937] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 
00:15:41.471 [2024-04-16 12:43:40.449024] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1189461 ] 00:15:41.471 EAL: No free 2048 kB hugepages reported on node 1 00:15:41.471 [2024-04-16 12:43:40.520667] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:41.731 [2024-04-16 12:43:40.637972] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:41.989 [2024-04-16 12:43:40.819248] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:42.556 12:43:41 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:42.556 12:43:41 -- common/autotest_common.sh@850 -- # return 0 00:15:42.556 12:43:41 -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:42.556 12:43:41 -- target/tls.sh@275 -- # jq -r '.[].name' 00:15:42.849 12:43:41 -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:42.849 12:43:41 -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:42.849 Running I/O for 1 seconds... 00:15:43.786 00:15:43.786 Latency(us) 00:15:43.786 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:43.786 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:43.786 Verification LBA range: start 0x0 length 0x2000 00:15:43.786 nvme0n1 : 1.03 2748.27 10.74 0.00 0.00 45915.99 11359.57 94371.84 00:15:43.786 =================================================================================================================== 00:15:43.786 Total : 2748.27 10.74 0.00 0.00 45915.99 11359.57 94371.84 00:15:43.786 0 00:15:43.786 12:43:42 -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:15:43.786 12:43:42 -- target/tls.sh@279 -- # cleanup 00:15:43.786 12:43:42 -- target/tls.sh@15 -- # process_shm --id 0 00:15:43.786 12:43:42 -- common/autotest_common.sh@794 -- # type=--id 00:15:43.786 12:43:42 -- common/autotest_common.sh@795 -- # id=0 00:15:43.786 12:43:42 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:15:43.786 12:43:42 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:43.786 12:43:42 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:15:43.786 12:43:42 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:15:43.786 12:43:42 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:15:43.786 12:43:42 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:43.786 nvmf_trace.0 00:15:44.044 12:43:42 -- common/autotest_common.sh@809 -- # return 0 00:15:44.044 12:43:42 -- target/tls.sh@16 -- # killprocess 1189461 00:15:44.044 12:43:42 -- common/autotest_common.sh@936 -- # '[' -z 1189461 ']' 00:15:44.044 12:43:42 -- common/autotest_common.sh@940 -- # kill -0 1189461 00:15:44.044 12:43:42 -- common/autotest_common.sh@941 -- # uname 00:15:44.044 12:43:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:44.044 12:43:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1189461 00:15:44.044 12:43:42 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:44.044 12:43:42 -- common/autotest_common.sh@946 -- 
# '[' reactor_1 = sudo ']' 00:15:44.044 12:43:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1189461' 00:15:44.044 killing process with pid 1189461 00:15:44.045 12:43:42 -- common/autotest_common.sh@955 -- # kill 1189461 00:15:44.045 Received shutdown signal, test time was about 1.000000 seconds 00:15:44.045 00:15:44.045 Latency(us) 00:15:44.045 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:44.045 =================================================================================================================== 00:15:44.045 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:44.045 12:43:42 -- common/autotest_common.sh@960 -- # wait 1189461 00:15:44.303 12:43:43 -- target/tls.sh@17 -- # nvmftestfini 00:15:44.303 12:43:43 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:44.303 12:43:43 -- nvmf/common.sh@117 -- # sync 00:15:44.303 12:43:43 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:44.303 12:43:43 -- nvmf/common.sh@120 -- # set +e 00:15:44.303 12:43:43 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:44.303 12:43:43 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:44.303 rmmod nvme_tcp 00:15:44.303 rmmod nvme_fabrics 00:15:44.303 rmmod nvme_keyring 00:15:44.303 12:43:43 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:44.303 12:43:43 -- nvmf/common.sh@124 -- # set -e 00:15:44.303 12:43:43 -- nvmf/common.sh@125 -- # return 0 00:15:44.303 12:43:43 -- nvmf/common.sh@478 -- # '[' -n 1189318 ']' 00:15:44.303 12:43:43 -- nvmf/common.sh@479 -- # killprocess 1189318 00:15:44.303 12:43:43 -- common/autotest_common.sh@936 -- # '[' -z 1189318 ']' 00:15:44.303 12:43:43 -- common/autotest_common.sh@940 -- # kill -0 1189318 00:15:44.303 12:43:43 -- common/autotest_common.sh@941 -- # uname 00:15:44.303 12:43:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:44.303 12:43:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1189318 00:15:44.303 12:43:43 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:44.303 12:43:43 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:44.303 12:43:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1189318' 00:15:44.303 killing process with pid 1189318 00:15:44.303 12:43:43 -- common/autotest_common.sh@955 -- # kill 1189318 00:15:44.303 12:43:43 -- common/autotest_common.sh@960 -- # wait 1189318 00:15:44.562 12:43:43 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:44.562 12:43:43 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:44.562 12:43:43 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:15:44.562 12:43:43 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:44.562 12:43:43 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:44.562 12:43:43 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:44.562 12:43:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:44.562 12:43:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:47.094 12:43:45 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:47.094 12:43:45 -- target/tls.sh@18 -- # rm -f /tmp/tmp.T8CamQrp4N /tmp/tmp.X0Tnjdad5p /tmp/tmp.jAm0VxVmIz 00:15:47.094 00:15:47.094 real 1m23.424s 00:15:47.094 user 2m8.215s 00:15:47.094 sys 0m32.114s 00:15:47.094 12:43:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:47.094 12:43:45 -- common/autotest_common.sh@10 -- # set +x 00:15:47.094 ************************************ 00:15:47.094 END TEST nvmf_tls 00:15:47.094 
************************************ 00:15:47.094 12:43:45 -- nvmf/nvmf.sh@61 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:15:47.094 12:43:45 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:47.094 12:43:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:47.094 12:43:45 -- common/autotest_common.sh@10 -- # set +x 00:15:47.094 ************************************ 00:15:47.094 START TEST nvmf_fips 00:15:47.094 ************************************ 00:15:47.094 12:43:45 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:15:47.094 * Looking for test storage... 00:15:47.095 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:15:47.095 12:43:45 -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:47.095 12:43:45 -- nvmf/common.sh@7 -- # uname -s 00:15:47.095 12:43:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:47.095 12:43:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:47.095 12:43:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:47.095 12:43:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:47.095 12:43:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:47.095 12:43:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:47.095 12:43:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:47.095 12:43:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:47.095 12:43:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:47.095 12:43:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:47.095 12:43:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:15:47.095 12:43:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:15:47.095 12:43:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:47.095 12:43:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:47.095 12:43:45 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:47.095 12:43:45 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:47.095 12:43:45 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:47.095 12:43:45 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:47.095 12:43:45 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:47.095 12:43:45 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:47.095 12:43:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:47.095 12:43:45 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:47.095 12:43:45 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:47.095 12:43:45 -- paths/export.sh@5 -- # export PATH 00:15:47.095 12:43:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:47.095 12:43:45 -- nvmf/common.sh@47 -- # : 0 00:15:47.095 12:43:45 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:47.095 12:43:45 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:47.095 12:43:45 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:47.095 12:43:45 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:47.095 12:43:45 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:47.095 12:43:45 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:47.095 12:43:45 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:47.095 12:43:45 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:47.095 12:43:45 -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:47.095 12:43:45 -- fips/fips.sh@89 -- # check_openssl_version 00:15:47.095 12:43:45 -- fips/fips.sh@83 -- # local target=3.0.0 00:15:47.095 12:43:45 -- fips/fips.sh@85 -- # openssl version 00:15:47.095 12:43:45 -- fips/fips.sh@85 -- # awk '{print $2}' 00:15:47.095 12:43:45 -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:15:47.095 12:43:45 -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:15:47.095 12:43:45 -- scripts/common.sh@330 -- # local ver1 ver1_l 00:15:47.095 12:43:45 -- scripts/common.sh@331 -- # local ver2 ver2_l 00:15:47.095 12:43:45 -- scripts/common.sh@333 -- # IFS=.-: 00:15:47.095 12:43:45 -- scripts/common.sh@333 -- # read -ra ver1 00:15:47.095 12:43:45 -- scripts/common.sh@334 -- # IFS=.-: 00:15:47.095 12:43:45 -- scripts/common.sh@334 -- # read -ra ver2 00:15:47.095 12:43:45 -- scripts/common.sh@335 -- # local 'op=>=' 00:15:47.095 12:43:45 -- scripts/common.sh@337 -- # ver1_l=3 00:15:47.095 12:43:45 -- scripts/common.sh@338 -- # ver2_l=3 00:15:47.095 12:43:45 -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 
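(The ge / cmp_versions walk traced around this point compares the running OpenSSL version, 3.0.9, against the 3.0.0 floor one dotted field at a time. For reference, the same predicate can be written compactly with GNU sort -V — an alternative formulation, not a transcript of scripts/common.sh:

    # version_ge A B -> success when A >= B (dotted-decimal compare)
    version_ge() {
        [ "$(printf '%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
    }
    version_ge "$(openssl version | awk '{print $2}')" 3.0.0 \
        && echo "OpenSSL new enough for the FIPS checks"
)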
00:15:47.095 12:43:45 -- scripts/common.sh@341 -- # case "$op" in 00:15:47.095 12:43:45 -- scripts/common.sh@345 -- # : 1 00:15:47.095 12:43:45 -- scripts/common.sh@361 -- # (( v = 0 )) 00:15:47.095 12:43:45 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:47.095 12:43:45 -- scripts/common.sh@362 -- # decimal 3 00:15:47.095 12:43:45 -- scripts/common.sh@350 -- # local d=3 00:15:47.095 12:43:45 -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:15:47.095 12:43:45 -- scripts/common.sh@352 -- # echo 3 00:15:47.095 12:43:45 -- scripts/common.sh@362 -- # ver1[v]=3 00:15:47.095 12:43:45 -- scripts/common.sh@363 -- # decimal 3 00:15:47.095 12:43:45 -- scripts/common.sh@350 -- # local d=3 00:15:47.095 12:43:45 -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:15:47.095 12:43:45 -- scripts/common.sh@352 -- # echo 3 00:15:47.095 12:43:45 -- scripts/common.sh@363 -- # ver2[v]=3 00:15:47.095 12:43:45 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:15:47.095 12:43:45 -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:15:47.095 12:43:45 -- scripts/common.sh@361 -- # (( v++ )) 00:15:47.095 12:43:45 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:47.095 12:43:45 -- scripts/common.sh@362 -- # decimal 0 00:15:47.095 12:43:45 -- scripts/common.sh@350 -- # local d=0 00:15:47.095 12:43:45 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:15:47.095 12:43:45 -- scripts/common.sh@352 -- # echo 0 00:15:47.095 12:43:45 -- scripts/common.sh@362 -- # ver1[v]=0 00:15:47.095 12:43:45 -- scripts/common.sh@363 -- # decimal 0 00:15:47.095 12:43:45 -- scripts/common.sh@350 -- # local d=0 00:15:47.095 12:43:45 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:15:47.095 12:43:45 -- scripts/common.sh@352 -- # echo 0 00:15:47.095 12:43:45 -- scripts/common.sh@363 -- # ver2[v]=0 00:15:47.095 12:43:45 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:15:47.095 12:43:45 -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:15:47.095 12:43:45 -- scripts/common.sh@361 -- # (( v++ )) 00:15:47.095 12:43:45 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:47.095 12:43:45 -- scripts/common.sh@362 -- # decimal 9 00:15:47.095 12:43:45 -- scripts/common.sh@350 -- # local d=9 00:15:47.095 12:43:45 -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:15:47.095 12:43:45 -- scripts/common.sh@352 -- # echo 9 00:15:47.095 12:43:45 -- scripts/common.sh@362 -- # ver1[v]=9 00:15:47.095 12:43:45 -- scripts/common.sh@363 -- # decimal 0 00:15:47.095 12:43:45 -- scripts/common.sh@350 -- # local d=0 00:15:47.095 12:43:45 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:15:47.095 12:43:45 -- scripts/common.sh@352 -- # echo 0 00:15:47.095 12:43:45 -- scripts/common.sh@363 -- # ver2[v]=0 00:15:47.095 12:43:45 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:15:47.095 12:43:45 -- scripts/common.sh@364 -- # return 0 00:15:47.095 12:43:45 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:15:47.095 12:43:45 -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:15:47.095 12:43:45 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:15:47.095 12:43:45 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:15:47.095 12:43:45 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:15:47.095 12:43:45 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:15:47.095 12:43:45 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:15:47.095 12:43:45 -- fips/fips.sh@113 -- # build_openssl_config 00:15:47.095 12:43:45 -- fips/fips.sh@37 -- # cat 00:15:47.095 12:43:45 -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:15:47.095 12:43:45 -- fips/fips.sh@58 -- # cat - 00:15:47.095 12:43:45 -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:15:47.095 12:43:45 -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:15:47.095 12:43:45 -- fips/fips.sh@116 -- # mapfile -t providers 00:15:47.095 12:43:45 -- fips/fips.sh@116 -- # openssl list -providers 00:15:47.095 12:43:45 -- fips/fips.sh@116 -- # grep name 00:15:47.095 12:43:45 -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:15:47.095 12:43:45 -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:15:47.095 12:43:45 -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:15:47.095 12:43:45 -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:15:47.095 12:43:45 -- fips/fips.sh@127 -- # : 00:15:47.096 12:43:45 -- common/autotest_common.sh@638 -- # local es=0 00:15:47.096 12:43:45 -- common/autotest_common.sh@640 -- # valid_exec_arg openssl md5 /dev/fd/62 00:15:47.096 12:43:45 -- common/autotest_common.sh@626 -- # local arg=openssl 00:15:47.096 12:43:45 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:47.096 12:43:45 -- common/autotest_common.sh@630 -- # type -t openssl 00:15:47.096 12:43:45 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:47.096 12:43:45 -- common/autotest_common.sh@632 -- # type -P openssl 00:15:47.096 12:43:45 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:47.096 12:43:45 -- common/autotest_common.sh@632 -- # arg=/usr/bin/openssl 00:15:47.096 12:43:45 -- common/autotest_common.sh@632 -- # [[ -x /usr/bin/openssl ]] 00:15:47.096 12:43:45 -- common/autotest_common.sh@641 -- # openssl md5 /dev/fd/62 00:15:47.096 Error setting digest 00:15:47.096 00025AF50A7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:15:47.096 00025AF50A7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:15:47.096 12:43:45 -- common/autotest_common.sh@641 -- # es=1 00:15:47.096 12:43:45 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:15:47.096 12:43:45 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:15:47.096 12:43:45 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:15:47.096 12:43:45 -- fips/fips.sh@130 -- # nvmftestinit 00:15:47.096 12:43:45 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:15:47.096 12:43:45 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:47.096 12:43:45 -- nvmf/common.sh@437 -- # prepare_net_devs 
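(The "Error setting digest" above is the desired outcome: with the FIPS provider pinned via OPENSSL_CONF, MD5 must be unavailable, so the test asserts that an openssl md5 invocation fails. Boiled down, assuming the generated spdk_fips.conf is already in place:

    # negative test: MD5 must be rejected once the FIPS provider is active
    export OPENSSL_CONF=spdk_fips.conf
    if echo test | openssl md5 /dev/stdin 2>/dev/null; then
        echo "FIPS mode not enforced: MD5 still usable" >&2
        exit 1
    fi
    echo "MD5 correctly blocked by the FIPS provider"
)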
00:15:47.096 12:43:45 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:15:47.096 12:43:45 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:15:47.096 12:43:45 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:47.096 12:43:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:47.096 12:43:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:47.096 12:43:45 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:15:47.096 12:43:45 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:15:47.096 12:43:45 -- nvmf/common.sh@285 -- # xtrace_disable 00:15:47.096 12:43:45 -- common/autotest_common.sh@10 -- # set +x 00:15:49.626 12:43:48 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:49.626 12:43:48 -- nvmf/common.sh@291 -- # pci_devs=() 00:15:49.626 12:43:48 -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:49.626 12:43:48 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:49.626 12:43:48 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:49.626 12:43:48 -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:49.626 12:43:48 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:49.626 12:43:48 -- nvmf/common.sh@295 -- # net_devs=() 00:15:49.626 12:43:48 -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:49.626 12:43:48 -- nvmf/common.sh@296 -- # e810=() 00:15:49.626 12:43:48 -- nvmf/common.sh@296 -- # local -ga e810 00:15:49.626 12:43:48 -- nvmf/common.sh@297 -- # x722=() 00:15:49.626 12:43:48 -- nvmf/common.sh@297 -- # local -ga x722 00:15:49.626 12:43:48 -- nvmf/common.sh@298 -- # mlx=() 00:15:49.626 12:43:48 -- nvmf/common.sh@298 -- # local -ga mlx 00:15:49.626 12:43:48 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:49.626 12:43:48 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:49.626 12:43:48 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:49.627 12:43:48 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:49.627 12:43:48 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:49.627 12:43:48 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:49.627 12:43:48 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:49.627 12:43:48 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:49.627 12:43:48 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:49.627 12:43:48 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:49.627 12:43:48 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:49.627 12:43:48 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:49.627 12:43:48 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:49.627 12:43:48 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:49.627 12:43:48 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:49.627 12:43:48 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:49.627 12:43:48 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:49.627 12:43:48 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:49.627 12:43:48 -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:15:49.627 Found 0000:82:00.0 (0x8086 - 0x159b) 00:15:49.627 12:43:48 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:49.627 12:43:48 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:49.627 12:43:48 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:49.627 12:43:48 -- nvmf/common.sh@351 
-- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:49.627 12:43:48 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:49.627 12:43:48 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:49.627 12:43:48 -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:15:49.627 Found 0000:82:00.1 (0x8086 - 0x159b) 00:15:49.627 12:43:48 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:49.627 12:43:48 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:49.627 12:43:48 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:49.627 12:43:48 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:49.627 12:43:48 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:49.627 12:43:48 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:49.627 12:43:48 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:49.627 12:43:48 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:49.627 12:43:48 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:49.627 12:43:48 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:49.627 12:43:48 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:49.627 12:43:48 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:49.627 12:43:48 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:15:49.627 Found net devices under 0000:82:00.0: cvl_0_0 00:15:49.627 12:43:48 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:49.627 12:43:48 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:49.627 12:43:48 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:49.627 12:43:48 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:49.627 12:43:48 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:49.627 12:43:48 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:15:49.627 Found net devices under 0000:82:00.1: cvl_0_1 00:15:49.627 12:43:48 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:49.627 12:43:48 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:15:49.627 12:43:48 -- nvmf/common.sh@403 -- # is_hw=yes 00:15:49.627 12:43:48 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:15:49.627 12:43:48 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:15:49.627 12:43:48 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:15:49.627 12:43:48 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:49.627 12:43:48 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:49.627 12:43:48 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:49.627 12:43:48 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:49.627 12:43:48 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:49.627 12:43:48 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:49.627 12:43:48 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:49.627 12:43:48 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:49.627 12:43:48 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:49.627 12:43:48 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:49.627 12:43:48 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:49.627 12:43:48 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:49.627 12:43:48 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:49.627 12:43:48 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:49.627 12:43:48 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:15:49.627 12:43:48 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:49.627 12:43:48 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:49.627 12:43:48 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:49.627 12:43:48 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:49.627 12:43:48 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:49.627 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:49.627 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.149 ms 00:15:49.627 00:15:49.627 --- 10.0.0.2 ping statistics --- 00:15:49.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:49.627 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:15:49.627 12:43:48 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:49.627 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:49.627 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.131 ms 00:15:49.627 00:15:49.627 --- 10.0.0.1 ping statistics --- 00:15:49.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:49.627 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:15:49.627 12:43:48 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:49.627 12:43:48 -- nvmf/common.sh@411 -- # return 0 00:15:49.627 12:43:48 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:15:49.627 12:43:48 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:49.627 12:43:48 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:15:49.627 12:43:48 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:15:49.627 12:43:48 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:49.627 12:43:48 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:15:49.627 12:43:48 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:15:49.627 12:43:48 -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:15:49.627 12:43:48 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:49.627 12:43:48 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:49.627 12:43:48 -- common/autotest_common.sh@10 -- # set +x 00:15:49.627 12:43:48 -- nvmf/common.sh@470 -- # nvmfpid=1192131 00:15:49.627 12:43:48 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:49.627 12:43:48 -- nvmf/common.sh@471 -- # waitforlisten 1192131 00:15:49.627 12:43:48 -- common/autotest_common.sh@817 -- # '[' -z 1192131 ']' 00:15:49.627 12:43:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:49.627 12:43:48 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:49.627 12:43:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:49.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:49.627 12:43:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:49.627 12:43:48 -- common/autotest_common.sh@10 -- # set +x 00:15:49.627 [2024-04-16 12:43:48.675162] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 
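(Summarizing the nvmf_tcp_init plumbing above: one port of the dual-port E810 (cvl_0_0) is moved into a private network namespace to act as the target, its sibling cvl_0_1 stays in the root namespace as the initiator, and the bidirectional pings prove 10.0.0.1 <-> 10.0.0.2 connectivity across the physical link. Stripped of the helpers, the sequence is roughly:

    # namespace split used by the rig (interface names from this host)
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator
)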
00:15:49.627 [2024-04-16 12:43:48.675233] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:49.885 EAL: No free 2048 kB hugepages reported on node 1 00:15:49.885 [2024-04-16 12:43:48.752170] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:49.885 [2024-04-16 12:43:48.856359] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:49.885 [2024-04-16 12:43:48.856414] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:49.885 [2024-04-16 12:43:48.856428] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:49.886 [2024-04-16 12:43:48.856439] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:49.886 [2024-04-16 12:43:48.856449] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:49.886 [2024-04-16 12:43:48.856475] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:50.819 12:43:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:50.819 12:43:49 -- common/autotest_common.sh@850 -- # return 0 00:15:50.819 12:43:49 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:50.819 12:43:49 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:50.819 12:43:49 -- common/autotest_common.sh@10 -- # set +x 00:15:50.819 12:43:49 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:50.819 12:43:49 -- fips/fips.sh@133 -- # trap cleanup EXIT 00:15:50.819 12:43:49 -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:15:50.819 12:43:49 -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:15:50.819 12:43:49 -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:15:50.819 12:43:49 -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:15:50.819 12:43:49 -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:15:50.819 12:43:49 -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:15:50.819 12:43:49 -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:51.078 [2024-04-16 12:43:49.893887] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:51.078 [2024-04-16 12:43:49.909882] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:51.078 [2024-04-16 12:43:49.910152] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:51.078 [2024-04-16 12:43:49.942423] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:15:51.078 malloc0 00:15:51.078 12:43:49 -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:51.078 12:43:49 -- fips/fips.sh@147 -- # bdevperf_pid=1192288 00:15:51.078 12:43:49 -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:51.078 12:43:49 -- 
fips/fips.sh@148 -- # waitforlisten 1192288 /var/tmp/bdevperf.sock 00:15:51.078 12:43:49 -- common/autotest_common.sh@817 -- # '[' -z 1192288 ']' 00:15:51.078 12:43:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:51.078 12:43:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:51.078 12:43:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:51.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:51.078 12:43:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:51.078 12:43:49 -- common/autotest_common.sh@10 -- # set +x 00:15:51.078 [2024-04-16 12:43:50.043272] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 00:15:51.078 [2024-04-16 12:43:50.043394] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1192288 ] 00:15:51.078 EAL: No free 2048 kB hugepages reported on node 1 00:15:51.078 [2024-04-16 12:43:50.128659] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:51.336 [2024-04-16 12:43:50.248055] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:52.269 12:43:50 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:52.270 12:43:50 -- common/autotest_common.sh@850 -- # return 0 00:15:52.270 12:43:50 -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:15:52.270 [2024-04-16 12:43:51.218246] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:52.270 [2024-04-16 12:43:51.218374] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:15:52.270 TLSTESTn1 00:15:52.270 12:43:51 -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:52.527 Running I/O for 10 seconds... 
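(The attach traced just above is the heart of the FIPS/TLS case: bdevperf dials the target's TLS listener with the same interchange-format PSK (NVMeTLSkey-1:01:...) that the target registered for host1, handed over as a private key file via the --psk path option that the warnings flag as deprecated for v24.09. Condensed from the rpc.py call in the trace:

    # initiator-side TLS attach; key value copied from the trace above
    echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > key.txt
    chmod 0600 key.txt                    # the test keeps the key file private
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk key.txt
)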
00:16:02.502 00:16:02.502 Latency(us) 00:16:02.502 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:02.502 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:02.502 Verification LBA range: start 0x0 length 0x2000 00:16:02.502 TLSTESTn1 : 10.05 2932.95 11.46 0.00 0.00 43553.12 5995.33 92430.03 00:16:02.502 =================================================================================================================== 00:16:02.502 Total : 2932.95 11.46 0.00 0.00 43553.12 5995.33 92430.03 00:16:02.502 0 00:16:02.502 12:44:01 -- fips/fips.sh@1 -- # cleanup 00:16:02.502 12:44:01 -- fips/fips.sh@15 -- # process_shm --id 0 00:16:02.502 12:44:01 -- common/autotest_common.sh@794 -- # type=--id 00:16:02.502 12:44:01 -- common/autotest_common.sh@795 -- # id=0 00:16:02.502 12:44:01 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:16:02.502 12:44:01 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:16:02.502 12:44:01 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:16:02.502 12:44:01 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:16:02.502 12:44:01 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:16:02.502 12:44:01 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:16:02.502 nvmf_trace.0 00:16:02.502 12:44:01 -- common/autotest_common.sh@809 -- # return 0 00:16:02.502 12:44:01 -- fips/fips.sh@16 -- # killprocess 1192288 00:16:02.502 12:44:01 -- common/autotest_common.sh@936 -- # '[' -z 1192288 ']' 00:16:02.502 12:44:01 -- common/autotest_common.sh@940 -- # kill -0 1192288 00:16:02.502 12:44:01 -- common/autotest_common.sh@941 -- # uname 00:16:02.502 12:44:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:02.502 12:44:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1192288 00:16:02.760 12:44:01 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:16:02.760 12:44:01 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:16:02.760 12:44:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1192288' 00:16:02.760 killing process with pid 1192288 00:16:02.760 12:44:01 -- common/autotest_common.sh@955 -- # kill 1192288 00:16:02.760 Received shutdown signal, test time was about 10.000000 seconds 00:16:02.760 00:16:02.760 Latency(us) 00:16:02.760 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:02.760 =================================================================================================================== 00:16:02.760 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:02.760 [2024-04-16 12:44:01.592591] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:16:02.760 12:44:01 -- common/autotest_common.sh@960 -- # wait 1192288 00:16:03.017 12:44:01 -- fips/fips.sh@17 -- # nvmftestfini 00:16:03.017 12:44:01 -- nvmf/common.sh@477 -- # nvmfcleanup 00:16:03.017 12:44:01 -- nvmf/common.sh@117 -- # sync 00:16:03.017 12:44:01 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:03.018 12:44:01 -- nvmf/common.sh@120 -- # set +e 00:16:03.018 12:44:01 -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:03.018 12:44:01 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:03.018 rmmod nvme_tcp 00:16:03.018 rmmod nvme_fabrics 00:16:03.018 rmmod nvme_keyring 
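(One cleanup detail worth calling out: before the apps are killed, process_shm archives the trace shared-memory segment so the run can be decoded offline, as the app's startup banner suggested. The capture amounts to the following, where output_dir stands in for the spdk/../output path seen in the trace:

    # archive the SPDK trace shm segment for offline analysis
    shm_file=$(find /dev/shm -name '*.0' -printf '%f\n')   # e.g. nvmf_trace.0
    tar -C /dev/shm/ -cvzf "$output_dir/${shm_file}_shm.tar.gz" "$shm_file"
)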
00:16:03.018 12:44:01 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:03.018 12:44:01 -- nvmf/common.sh@124 -- # set -e 00:16:03.018 12:44:01 -- nvmf/common.sh@125 -- # return 0 00:16:03.018 12:44:01 -- nvmf/common.sh@478 -- # '[' -n 1192131 ']' 00:16:03.018 12:44:01 -- nvmf/common.sh@479 -- # killprocess 1192131 00:16:03.018 12:44:01 -- common/autotest_common.sh@936 -- # '[' -z 1192131 ']' 00:16:03.018 12:44:01 -- common/autotest_common.sh@940 -- # kill -0 1192131 00:16:03.018 12:44:01 -- common/autotest_common.sh@941 -- # uname 00:16:03.018 12:44:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:03.018 12:44:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1192131 00:16:03.018 12:44:01 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:03.018 12:44:01 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:03.018 12:44:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1192131' 00:16:03.018 killing process with pid 1192131 00:16:03.018 12:44:01 -- common/autotest_common.sh@955 -- # kill 1192131 00:16:03.018 [2024-04-16 12:44:01.947498] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:16:03.018 12:44:01 -- common/autotest_common.sh@960 -- # wait 1192131 00:16:03.275 12:44:02 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:03.275 12:44:02 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:16:03.275 12:44:02 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:16:03.275 12:44:02 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:03.275 12:44:02 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:03.275 12:44:02 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:03.275 12:44:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:03.275 12:44:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:05.805 12:44:04 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:05.805 12:44:04 -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:16:05.805 00:16:05.805 real 0m18.537s 00:16:05.805 user 0m21.925s 00:16:05.805 sys 0m8.186s 00:16:05.805 12:44:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:05.805 12:44:04 -- common/autotest_common.sh@10 -- # set +x 00:16:05.805 ************************************ 00:16:05.805 END TEST nvmf_fips 00:16:05.805 ************************************ 00:16:05.805 12:44:04 -- nvmf/nvmf.sh@64 -- # '[' 0 -eq 1 ']' 00:16:05.805 12:44:04 -- nvmf/nvmf.sh@70 -- # [[ phy == phy ]] 00:16:05.805 12:44:04 -- nvmf/nvmf.sh@71 -- # '[' tcp = tcp ']' 00:16:05.805 12:44:04 -- nvmf/nvmf.sh@72 -- # gather_supported_nvmf_pci_devs 00:16:05.805 12:44:04 -- nvmf/common.sh@285 -- # xtrace_disable 00:16:05.805 12:44:04 -- common/autotest_common.sh@10 -- # set +x 00:16:07.704 12:44:06 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:07.704 12:44:06 -- nvmf/common.sh@291 -- # pci_devs=() 00:16:07.704 12:44:06 -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:07.704 12:44:06 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:07.704 12:44:06 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:07.704 12:44:06 -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:07.704 12:44:06 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:07.704 12:44:06 -- nvmf/common.sh@295 -- # net_devs=() 00:16:07.704 12:44:06 -- nvmf/common.sh@295 -- # local -ga net_devs 
00:16:07.704 12:44:06 -- nvmf/common.sh@296 -- # e810=() 00:16:07.704 12:44:06 -- nvmf/common.sh@296 -- # local -ga e810 00:16:07.704 12:44:06 -- nvmf/common.sh@297 -- # x722=() 00:16:07.704 12:44:06 -- nvmf/common.sh@297 -- # local -ga x722 00:16:07.704 12:44:06 -- nvmf/common.sh@298 -- # mlx=() 00:16:07.704 12:44:06 -- nvmf/common.sh@298 -- # local -ga mlx 00:16:07.704 12:44:06 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:07.704 12:44:06 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:07.704 12:44:06 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:07.704 12:44:06 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:07.704 12:44:06 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:07.704 12:44:06 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:07.704 12:44:06 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:07.704 12:44:06 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:07.704 12:44:06 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:07.704 12:44:06 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:07.704 12:44:06 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:07.704 12:44:06 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:07.705 12:44:06 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:07.705 12:44:06 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:07.705 12:44:06 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:07.705 12:44:06 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:07.705 12:44:06 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:07.705 12:44:06 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:07.705 12:44:06 -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:16:07.705 Found 0000:82:00.0 (0x8086 - 0x159b) 00:16:07.705 12:44:06 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:07.705 12:44:06 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:07.705 12:44:06 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:07.705 12:44:06 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:07.705 12:44:06 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:07.705 12:44:06 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:07.705 12:44:06 -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:16:07.705 Found 0000:82:00.1 (0x8086 - 0x159b) 00:16:07.705 12:44:06 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:07.705 12:44:06 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:07.705 12:44:06 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:07.705 12:44:06 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:07.705 12:44:06 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:07.705 12:44:06 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:07.705 12:44:06 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:07.705 12:44:06 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:07.705 12:44:06 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:07.705 12:44:06 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:07.705 12:44:06 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:07.705 12:44:06 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:07.705 12:44:06 -- nvmf/common.sh@389 -- # echo 'Found net devices 
under 0000:82:00.0: cvl_0_0' 00:16:07.705 Found net devices under 0000:82:00.0: cvl_0_0 00:16:07.705 12:44:06 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:07.705 12:44:06 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:07.705 12:44:06 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:07.705 12:44:06 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:07.705 12:44:06 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:07.705 12:44:06 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:16:07.705 Found net devices under 0000:82:00.1: cvl_0_1 00:16:07.705 12:44:06 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:07.705 12:44:06 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:16:07.705 12:44:06 -- nvmf/nvmf.sh@73 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:07.705 12:44:06 -- nvmf/nvmf.sh@74 -- # (( 2 > 0 )) 00:16:07.705 12:44:06 -- nvmf/nvmf.sh@75 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:16:07.705 12:44:06 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:07.705 12:44:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:07.705 12:44:06 -- common/autotest_common.sh@10 -- # set +x 00:16:07.963 ************************************ 00:16:07.963 START TEST nvmf_perf_adq 00:16:07.963 ************************************ 00:16:07.963 12:44:06 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:16:07.963 * Looking for test storage... 00:16:07.963 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:07.963 12:44:06 -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:07.963 12:44:06 -- nvmf/common.sh@7 -- # uname -s 00:16:07.963 12:44:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:07.963 12:44:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:07.963 12:44:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:07.963 12:44:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:07.963 12:44:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:07.963 12:44:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:07.963 12:44:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:07.963 12:44:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:07.963 12:44:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:07.963 12:44:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:07.963 12:44:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:16:07.963 12:44:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:16:07.963 12:44:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:07.963 12:44:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:07.963 12:44:06 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:07.963 12:44:06 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:07.963 12:44:06 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:07.963 12:44:06 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:07.963 12:44:06 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:07.963 12:44:06 -- 
scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:07.963 12:44:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:07.963 12:44:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:07.963 12:44:06 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:07.963 12:44:06 -- paths/export.sh@5 -- # export PATH 00:16:07.963 12:44:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:07.963 12:44:06 -- nvmf/common.sh@47 -- # : 0 00:16:07.963 12:44:06 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:07.963 12:44:06 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:07.963 12:44:06 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:07.963 12:44:06 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:07.963 12:44:06 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:07.963 12:44:06 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:07.963 12:44:06 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:07.963 12:44:06 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:07.963 12:44:06 -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:16:07.963 12:44:06 -- nvmf/common.sh@285 -- # xtrace_disable 00:16:07.963 12:44:06 -- common/autotest_common.sh@10 -- # set +x 00:16:10.493 12:44:09 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:10.493 12:44:09 -- nvmf/common.sh@291 -- # pci_devs=() 00:16:10.493 12:44:09 -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:10.493 12:44:09 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:10.493 
12:44:09 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:10.493 12:44:09 -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:10.493 12:44:09 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:10.493 12:44:09 -- nvmf/common.sh@295 -- # net_devs=() 00:16:10.493 12:44:09 -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:10.493 12:44:09 -- nvmf/common.sh@296 -- # e810=() 00:16:10.493 12:44:09 -- nvmf/common.sh@296 -- # local -ga e810 00:16:10.493 12:44:09 -- nvmf/common.sh@297 -- # x722=() 00:16:10.493 12:44:09 -- nvmf/common.sh@297 -- # local -ga x722 00:16:10.493 12:44:09 -- nvmf/common.sh@298 -- # mlx=() 00:16:10.493 12:44:09 -- nvmf/common.sh@298 -- # local -ga mlx 00:16:10.493 12:44:09 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:10.493 12:44:09 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:10.493 12:44:09 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:10.493 12:44:09 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:10.493 12:44:09 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:10.493 12:44:09 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:10.493 12:44:09 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:10.493 12:44:09 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:10.493 12:44:09 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:10.493 12:44:09 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:10.493 12:44:09 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:10.493 12:44:09 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:10.493 12:44:09 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:10.493 12:44:09 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:10.493 12:44:09 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:10.493 12:44:09 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:10.493 12:44:09 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:10.493 12:44:09 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:10.493 12:44:09 -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:16:10.493 Found 0000:82:00.0 (0x8086 - 0x159b) 00:16:10.493 12:44:09 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:10.493 12:44:09 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:10.493 12:44:09 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:10.493 12:44:09 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:10.493 12:44:09 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:10.493 12:44:09 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:10.493 12:44:09 -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:16:10.493 Found 0000:82:00.1 (0x8086 - 0x159b) 00:16:10.493 12:44:09 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:10.493 12:44:09 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:10.493 12:44:09 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:10.493 12:44:09 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:10.493 12:44:09 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:10.493 12:44:09 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:10.493 12:44:09 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:10.493 12:44:09 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:10.493 12:44:09 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
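The pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) idiom visible in the trace maps a PCI function to its kernel network interfaces via sysfs. As a standalone helper (pci_to_netdevs is a hypothetical name; the glob and the ##*/ trim are the same operations nvmf/common.sh performs):

pci_to_netdevs() {
    local pci=$1
    local devs=("/sys/bus/pci/devices/$pci/net/"*)   # one directory per bound netdev
    [[ -e ${devs[0]} ]] || return 1                  # no netdev, e.g. driver not loaded
    devs=("${devs[@]##*/}")                          # strip the sysfs path, keep names
    echo "Found net devices under $pci: ${devs[*]}"
}
pci_to_netdevs 0000:82:00.0   # -> Found net devices under 0000:82:00.0: cvl_0_0

This is why the perf_adq test reloads the ice driver before gathering devices again: without the driver bound, the net/ directory is empty and the interface lookup fails.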
00:16:10.493 12:44:09 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:10.493 12:44:09 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:10.493 12:44:09 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:10.493 12:44:09 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:16:10.493 Found net devices under 0000:82:00.0: cvl_0_0 00:16:10.493 12:44:09 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:10.493 12:44:09 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:10.493 12:44:09 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:10.493 12:44:09 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:10.493 12:44:09 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:10.493 12:44:09 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:16:10.493 Found net devices under 0000:82:00.1: cvl_0_1 00:16:10.493 12:44:09 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:10.493 12:44:09 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:16:10.493 12:44:09 -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:10.493 12:44:09 -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:16:10.493 12:44:09 -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:16:10.493 12:44:09 -- target/perf_adq.sh@59 -- # adq_reload_driver 00:16:10.493 12:44:09 -- target/perf_adq.sh@52 -- # rmmod ice 00:16:11.104 12:44:10 -- target/perf_adq.sh@53 -- # modprobe ice 00:16:13.006 12:44:12 -- target/perf_adq.sh@54 -- # sleep 5 00:16:18.282 12:44:17 -- target/perf_adq.sh@67 -- # nvmftestinit 00:16:18.282 12:44:17 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:16:18.282 12:44:17 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:18.283 12:44:17 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:18.283 12:44:17 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:18.283 12:44:17 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:18.283 12:44:17 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:18.283 12:44:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:18.283 12:44:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:18.283 12:44:17 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:16:18.283 12:44:17 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:16:18.283 12:44:17 -- nvmf/common.sh@285 -- # xtrace_disable 00:16:18.283 12:44:17 -- common/autotest_common.sh@10 -- # set +x 00:16:18.283 12:44:17 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:18.283 12:44:17 -- nvmf/common.sh@291 -- # pci_devs=() 00:16:18.283 12:44:17 -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:18.283 12:44:17 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:18.283 12:44:17 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:18.283 12:44:17 -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:18.283 12:44:17 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:18.283 12:44:17 -- nvmf/common.sh@295 -- # net_devs=() 00:16:18.283 12:44:17 -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:18.283 12:44:17 -- nvmf/common.sh@296 -- # e810=() 00:16:18.283 12:44:17 -- nvmf/common.sh@296 -- # local -ga e810 00:16:18.283 12:44:17 -- nvmf/common.sh@297 -- # x722=() 00:16:18.283 12:44:17 -- nvmf/common.sh@297 -- # local -ga x722 00:16:18.283 12:44:17 -- nvmf/common.sh@298 -- # mlx=() 00:16:18.283 12:44:17 -- 
nvmf/common.sh@298 -- # local -ga mlx 00:16:18.283 12:44:17 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:18.283 12:44:17 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:18.283 12:44:17 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:18.283 12:44:17 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:18.283 12:44:17 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:18.283 12:44:17 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:18.283 12:44:17 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:18.283 12:44:17 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:18.283 12:44:17 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:18.283 12:44:17 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:18.283 12:44:17 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:18.283 12:44:17 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:18.283 12:44:17 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:18.283 12:44:17 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:18.283 12:44:17 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:18.283 12:44:17 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:18.283 12:44:17 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:18.283 12:44:17 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:18.283 12:44:17 -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:16:18.283 Found 0000:82:00.0 (0x8086 - 0x159b) 00:16:18.283 12:44:17 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:18.283 12:44:17 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:18.283 12:44:17 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:18.283 12:44:17 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:18.283 12:44:17 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:18.283 12:44:17 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:18.283 12:44:17 -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:16:18.283 Found 0000:82:00.1 (0x8086 - 0x159b) 00:16:18.283 12:44:17 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:18.283 12:44:17 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:18.283 12:44:17 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:18.283 12:44:17 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:18.283 12:44:17 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:18.283 12:44:17 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:18.283 12:44:17 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:18.283 12:44:17 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:18.283 12:44:17 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:18.283 12:44:17 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:18.283 12:44:17 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:18.283 12:44:17 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:18.283 12:44:17 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:16:18.283 Found net devices under 0000:82:00.0: cvl_0_0 00:16:18.283 12:44:17 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:18.283 12:44:17 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:18.283 12:44:17 -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:18.283 12:44:17 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:18.283 12:44:17 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:18.283 12:44:17 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:16:18.283 Found net devices under 0000:82:00.1: cvl_0_1 00:16:18.283 12:44:17 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:18.283 12:44:17 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:16:18.283 12:44:17 -- nvmf/common.sh@403 -- # is_hw=yes 00:16:18.283 12:44:17 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:16:18.283 12:44:17 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:16:18.283 12:44:17 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:16:18.283 12:44:17 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:18.283 12:44:17 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:18.283 12:44:17 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:18.283 12:44:17 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:18.283 12:44:17 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:18.283 12:44:17 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:18.283 12:44:17 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:18.283 12:44:17 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:18.283 12:44:17 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:18.283 12:44:17 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:18.283 12:44:17 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:18.283 12:44:17 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:18.283 12:44:17 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:18.283 12:44:17 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:18.283 12:44:17 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:18.283 12:44:17 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:18.283 12:44:17 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:18.283 12:44:17 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:18.283 12:44:17 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:18.283 12:44:17 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:18.283 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:18.283 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.179 ms 00:16:18.283 00:16:18.283 --- 10.0.0.2 ping statistics --- 00:16:18.283 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:18.283 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:16:18.283 12:44:17 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:18.283 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:18.283 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.146 ms 00:16:18.283 00:16:18.283 --- 10.0.0.1 ping statistics --- 00:16:18.283 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:18.283 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:16:18.283 12:44:17 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:18.283 12:44:17 -- nvmf/common.sh@411 -- # return 0 00:16:18.283 12:44:17 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:16:18.283 12:44:17 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:18.283 12:44:17 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:16:18.283 12:44:17 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:16:18.283 12:44:17 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:18.283 12:44:17 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:16:18.283 12:44:17 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:16:18.283 12:44:17 -- target/perf_adq.sh@68 -- # nvmfappstart -m 0xF --wait-for-rpc 00:16:18.283 12:44:17 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:18.283 12:44:17 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:18.283 12:44:17 -- common/autotest_common.sh@10 -- # set +x 00:16:18.283 12:44:17 -- nvmf/common.sh@470 -- # nvmfpid=1198879 00:16:18.283 12:44:17 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:16:18.283 12:44:17 -- nvmf/common.sh@471 -- # waitforlisten 1198879 00:16:18.283 12:44:17 -- common/autotest_common.sh@817 -- # '[' -z 1198879 ']' 00:16:18.283 12:44:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:18.283 12:44:17 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:18.283 12:44:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:18.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:18.283 12:44:17 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:18.283 12:44:17 -- common/autotest_common.sh@10 -- # set +x 00:16:18.283 [2024-04-16 12:44:17.282189] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 00:16:18.283 [2024-04-16 12:44:17.282292] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:18.283 EAL: No free 2048 kB hugepages reported on node 1 00:16:18.542 [2024-04-16 12:44:17.359173] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:18.542 [2024-04-16 12:44:17.468805] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:18.542 [2024-04-16 12:44:17.468883] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:18.542 [2024-04-16 12:44:17.468905] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:18.542 [2024-04-16 12:44:17.468937] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:18.542 [2024-04-16 12:44:17.468951] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
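Above, nvmf_tgt is launched inside the cvl_0_0_ns_spdk namespace with --wait-for-rpc, and waitforlisten blocks until its RPC socket answers. A minimal stand-in for that wait, assuming SPDK's scripts/rpc.py and the harmless spdk_get_version RPC (the unix socket at /var/tmp/spdk.sock is reachable from the root namespace even though the target runs in a netns; the retry budget mirrors max_retries=100 in the trace):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
for ((i = 0; i < 100; i++)); do
    "$rpc" -s /var/tmp/spdk.sock spdk_get_version &>/dev/null && break
    sleep 0.5
done

Only once this succeeds does the test proceed to the sock_impl_set_options and framework_start_init calls traced below.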
00:16:18.542 [2024-04-16 12:44:17.469042] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:18.542 [2024-04-16 12:44:17.469109] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:18.542 [2024-04-16 12:44:17.469176] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:18.542 [2024-04-16 12:44:17.469181] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:18.542 12:44:17 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:18.542 12:44:17 -- common/autotest_common.sh@850 -- # return 0 00:16:18.542 12:44:17 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:16:18.542 12:44:17 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:18.542 12:44:17 -- common/autotest_common.sh@10 -- # set +x 00:16:18.542 12:44:17 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:18.542 12:44:17 -- target/perf_adq.sh@69 -- # adq_configure_nvmf_target 0 00:16:18.542 12:44:17 -- target/perf_adq.sh@42 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:16:18.542 12:44:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:18.542 12:44:17 -- common/autotest_common.sh@10 -- # set +x 00:16:18.542 12:44:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:18.542 12:44:17 -- target/perf_adq.sh@43 -- # rpc_cmd framework_start_init 00:16:18.542 12:44:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:18.542 12:44:17 -- common/autotest_common.sh@10 -- # set +x 00:16:18.800 12:44:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:18.800 12:44:17 -- target/perf_adq.sh@44 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:16:18.800 12:44:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:18.800 12:44:17 -- common/autotest_common.sh@10 -- # set +x 00:16:18.800 [2024-04-16 12:44:17.641478] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:18.800 12:44:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:18.800 12:44:17 -- target/perf_adq.sh@45 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:18.800 12:44:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:18.800 12:44:17 -- common/autotest_common.sh@10 -- # set +x 00:16:18.800 Malloc1 00:16:18.800 12:44:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:18.800 12:44:17 -- target/perf_adq.sh@46 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:18.800 12:44:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:18.800 12:44:17 -- common/autotest_common.sh@10 -- # set +x 00:16:18.800 12:44:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:18.800 12:44:17 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:18.800 12:44:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:18.800 12:44:17 -- common/autotest_common.sh@10 -- # set +x 00:16:18.800 12:44:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:18.800 12:44:17 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:18.800 12:44:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:18.800 12:44:17 -- common/autotest_common.sh@10 -- # set +x 00:16:18.800 [2024-04-16 12:44:17.694883] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:18.800 12:44:17 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:18.800 12:44:17 -- target/perf_adq.sh@73 -- # perfpid=1198913 00:16:18.800 12:44:17 -- target/perf_adq.sh@74 -- # sleep 2 00:16:18.800 12:44:17 -- target/perf_adq.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:16:18.800 EAL: No free 2048 kB hugepages reported on node 1 00:16:20.701 12:44:19 -- target/perf_adq.sh@76 -- # rpc_cmd nvmf_get_stats 00:16:20.701 12:44:19 -- target/perf_adq.sh@76 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:16:20.701 12:44:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:20.701 12:44:19 -- target/perf_adq.sh@76 -- # wc -l 00:16:20.701 12:44:19 -- common/autotest_common.sh@10 -- # set +x 00:16:20.701 12:44:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:20.701 12:44:19 -- target/perf_adq.sh@76 -- # count=4 00:16:20.701 12:44:19 -- target/perf_adq.sh@77 -- # [[ 4 -ne 4 ]] 00:16:20.701 12:44:19 -- target/perf_adq.sh@81 -- # wait 1198913 00:16:28.821 Initializing NVMe Controllers 00:16:28.821 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:28.821 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:16:28.821 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:16:28.821 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:16:28.821 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:16:28.821 Initialization complete. Launching workers. 00:16:28.821 ======================================================== 00:16:28.821 Latency(us) 00:16:28.821 Device Information : IOPS MiB/s Average min max 00:16:28.821 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10108.50 39.49 6330.72 2767.21 10404.25 00:16:28.821 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10344.30 40.41 6187.28 2218.63 8115.46 00:16:28.821 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10452.30 40.83 6143.56 2309.39 47517.14 00:16:28.821 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10253.80 40.05 6241.80 1854.86 8871.25 00:16:28.821 ======================================================== 00:16:28.821 Total : 41158.89 160.78 6224.99 1854.86 47517.14 00:16:28.821 00:16:28.821 12:44:27 -- target/perf_adq.sh@82 -- # nvmftestfini 00:16:28.821 12:44:27 -- nvmf/common.sh@477 -- # nvmfcleanup 00:16:28.821 12:44:27 -- nvmf/common.sh@117 -- # sync 00:16:28.821 12:44:27 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:28.821 12:44:27 -- nvmf/common.sh@120 -- # set +e 00:16:28.821 12:44:27 -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:28.821 12:44:27 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:28.821 rmmod nvme_tcp 00:16:28.821 rmmod nvme_fabrics 00:16:28.821 rmmod nvme_keyring 00:16:29.080 12:44:27 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:29.080 12:44:27 -- nvmf/common.sh@124 -- # set -e 00:16:29.080 12:44:27 -- nvmf/common.sh@125 -- # return 0 00:16:29.080 12:44:27 -- nvmf/common.sh@478 -- # '[' -n 1198879 ']' 00:16:29.080 12:44:27 -- nvmf/common.sh@479 -- # killprocess 1198879 00:16:29.080 12:44:27 -- common/autotest_common.sh@936 -- # '[' -z 1198879 ']' 00:16:29.080 12:44:27 -- common/autotest_common.sh@940 -- 
# kill -0 1198879 00:16:29.080 12:44:27 -- common/autotest_common.sh@941 -- # uname 00:16:29.080 12:44:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:29.080 12:44:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1198879 00:16:29.080 12:44:27 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:29.080 12:44:27 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:29.080 12:44:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1198879' 00:16:29.080 killing process with pid 1198879 00:16:29.080 12:44:27 -- common/autotest_common.sh@955 -- # kill 1198879 00:16:29.080 12:44:27 -- common/autotest_common.sh@960 -- # wait 1198879 00:16:29.339 12:44:28 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:29.339 12:44:28 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:16:29.339 12:44:28 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:16:29.339 12:44:28 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:29.339 12:44:28 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:29.339 12:44:28 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:29.339 12:44:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:29.339 12:44:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:31.241 12:44:30 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:31.241 12:44:30 -- target/perf_adq.sh@84 -- # adq_reload_driver 00:16:31.241 12:44:30 -- target/perf_adq.sh@52 -- # rmmod ice 00:16:32.175 12:44:30 -- target/perf_adq.sh@53 -- # modprobe ice 00:16:34.075 12:44:32 -- target/perf_adq.sh@54 -- # sleep 5 00:16:39.352 12:44:37 -- target/perf_adq.sh@87 -- # nvmftestinit 00:16:39.352 12:44:37 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:16:39.352 12:44:37 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:39.352 12:44:37 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:39.352 12:44:37 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:39.352 12:44:37 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:39.352 12:44:37 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:39.352 12:44:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:39.352 12:44:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:39.352 12:44:37 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:16:39.352 12:44:37 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:16:39.352 12:44:37 -- nvmf/common.sh@285 -- # xtrace_disable 00:16:39.353 12:44:37 -- common/autotest_common.sh@10 -- # set +x 00:16:39.353 12:44:37 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:39.353 12:44:37 -- nvmf/common.sh@291 -- # pci_devs=() 00:16:39.353 12:44:37 -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:39.353 12:44:37 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:39.353 12:44:37 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:39.353 12:44:37 -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:39.353 12:44:37 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:39.353 12:44:37 -- nvmf/common.sh@295 -- # net_devs=() 00:16:39.353 12:44:37 -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:39.353 12:44:37 -- nvmf/common.sh@296 -- # e810=() 00:16:39.353 12:44:37 -- nvmf/common.sh@296 -- # local -ga e810 00:16:39.353 12:44:37 -- nvmf/common.sh@297 -- # x722=() 00:16:39.353 12:44:37 -- nvmf/common.sh@297 -- # local -ga x722 00:16:39.353 12:44:37 -- nvmf/common.sh@298 -- # mlx=() 00:16:39.353 12:44:37 
-- nvmf/common.sh@298 -- # local -ga mlx 00:16:39.353 12:44:37 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:39.353 12:44:37 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:39.353 12:44:37 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:39.353 12:44:37 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:39.353 12:44:37 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:39.353 12:44:37 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:39.353 12:44:37 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:39.353 12:44:37 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:39.353 12:44:37 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:39.353 12:44:37 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:39.353 12:44:37 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:39.353 12:44:37 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:39.353 12:44:37 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:39.353 12:44:37 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:39.353 12:44:37 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:39.353 12:44:37 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:39.353 12:44:37 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:39.353 12:44:37 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:39.353 12:44:37 -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:16:39.353 Found 0000:82:00.0 (0x8086 - 0x159b) 00:16:39.353 12:44:37 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:39.353 12:44:37 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:39.353 12:44:37 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:39.353 12:44:37 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:39.353 12:44:37 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:39.353 12:44:37 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:39.353 12:44:37 -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:16:39.353 Found 0000:82:00.1 (0x8086 - 0x159b) 00:16:39.353 12:44:37 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:39.353 12:44:37 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:39.353 12:44:37 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:39.353 12:44:37 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:39.353 12:44:37 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:39.353 12:44:37 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:39.353 12:44:37 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:39.353 12:44:37 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:39.353 12:44:37 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:39.353 12:44:37 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:39.353 12:44:37 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:39.353 12:44:37 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:39.353 12:44:37 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:16:39.353 Found net devices under 0000:82:00.0: cvl_0_0 00:16:39.353 12:44:37 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:39.353 12:44:37 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:39.353 12:44:37 -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:39.353 12:44:37 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:39.353 12:44:37 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:39.353 12:44:37 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:16:39.353 Found net devices under 0000:82:00.1: cvl_0_1 00:16:39.353 12:44:37 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:39.353 12:44:37 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:16:39.353 12:44:37 -- nvmf/common.sh@403 -- # is_hw=yes 00:16:39.353 12:44:37 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:16:39.353 12:44:37 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:16:39.353 12:44:37 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:16:39.353 12:44:37 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:39.353 12:44:37 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:39.353 12:44:37 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:39.353 12:44:37 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:39.353 12:44:37 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:39.353 12:44:37 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:39.353 12:44:37 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:39.353 12:44:37 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:39.353 12:44:37 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:39.353 12:44:37 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:39.353 12:44:37 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:39.353 12:44:37 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:39.353 12:44:37 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:39.353 12:44:37 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:39.353 12:44:37 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:39.353 12:44:37 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:39.353 12:44:37 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:39.353 12:44:38 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:39.353 12:44:38 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:39.353 12:44:38 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:39.353 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:39.353 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.272 ms 00:16:39.353 00:16:39.353 --- 10.0.0.2 ping statistics --- 00:16:39.353 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:39.353 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:16:39.353 12:44:38 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:39.353 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:39.353 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.181 ms 00:16:39.353 00:16:39.353 --- 10.0.0.1 ping statistics --- 00:16:39.353 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:39.353 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:16:39.353 12:44:38 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:39.353 12:44:38 -- nvmf/common.sh@411 -- # return 0 00:16:39.353 12:44:38 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:16:39.353 12:44:38 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:39.353 12:44:38 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:16:39.353 12:44:38 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:16:39.353 12:44:38 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:39.353 12:44:38 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:16:39.353 12:44:38 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:16:39.353 12:44:38 -- target/perf_adq.sh@88 -- # adq_configure_driver 00:16:39.353 12:44:38 -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:16:39.353 12:44:38 -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:16:39.353 12:44:38 -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:16:39.353 net.core.busy_poll = 1 00:16:39.353 12:44:38 -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:16:39.353 net.core.busy_read = 1 00:16:39.353 12:44:38 -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:16:39.353 12:44:38 -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:16:39.353 12:44:38 -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:16:39.353 12:44:38 -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:16:39.353 12:44:38 -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:16:39.353 12:44:38 -- target/perf_adq.sh@89 -- # nvmfappstart -m 0xF --wait-for-rpc 00:16:39.353 12:44:38 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:39.353 12:44:38 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:39.353 12:44:38 -- common/autotest_common.sh@10 -- # set +x 00:16:39.353 12:44:38 -- nvmf/common.sh@470 -- # nvmfpid=1201586 00:16:39.353 12:44:38 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:16:39.353 12:44:38 -- nvmf/common.sh@471 -- # waitforlisten 1201586 00:16:39.353 12:44:38 -- common/autotest_common.sh@817 -- # '[' -z 1201586 ']' 00:16:39.353 12:44:38 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:39.353 12:44:38 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:39.353 12:44:38 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:39.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
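The adq_configure_driver steps just traced are the host-side half of ADQ: carve the E810 queues into traffic classes and pin NVMe/TCP to one of them in hardware. Condensed into an annotated replay of the exact commands above (run against the target interface inside its namespace):

ns="ip netns exec cvl_0_0_ns_spdk"
$ns ethtool --offload cvl_0_0 hw-tc-offload on                    # let the NIC enforce TCs
$ns ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
sysctl -w net.core.busy_poll=1                                    # poll sockets instead of waiting on IRQs
sysctl -w net.core.busy_read=1
$ns tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
$ns tc qdisc add dev cvl_0_0 ingress
$ns tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
    dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1

queues 2@0 2@2 defines two traffic classes of two queues each; the flower filter (skip_sw, so hardware-only) steers TCP traffic to 10.0.0.2:4420, i.e. the NVMe/TCP listener, into TC1. The busy_poll/busy_read sysctls make the socket layer spin on those dedicated queues, which is what ADQ's queue placement relies on.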
00:16:39.353 12:44:38 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:39.353 12:44:38 -- common/autotest_common.sh@10 -- # set +x 00:16:39.353 [2024-04-16 12:44:38.242124] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 00:16:39.353 [2024-04-16 12:44:38.242214] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:39.353 EAL: No free 2048 kB hugepages reported on node 1 00:16:39.353 [2024-04-16 12:44:38.318648] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:39.611 [2024-04-16 12:44:38.429543] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:39.611 [2024-04-16 12:44:38.429605] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:39.611 [2024-04-16 12:44:38.429629] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:39.611 [2024-04-16 12:44:38.429646] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:39.611 [2024-04-16 12:44:38.429663] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:39.611 [2024-04-16 12:44:38.429733] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:39.611 [2024-04-16 12:44:38.429795] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:39.611 [2024-04-16 12:44:38.429844] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:39.611 [2024-04-16 12:44:38.429849] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:40.176 12:44:39 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:40.176 12:44:39 -- common/autotest_common.sh@850 -- # return 0 00:16:40.176 12:44:39 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:16:40.176 12:44:39 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:40.176 12:44:39 -- common/autotest_common.sh@10 -- # set +x 00:16:40.176 12:44:39 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:40.176 12:44:39 -- target/perf_adq.sh@90 -- # adq_configure_nvmf_target 1 00:16:40.176 12:44:39 -- target/perf_adq.sh@42 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:16:40.176 12:44:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:40.176 12:44:39 -- common/autotest_common.sh@10 -- # set +x 00:16:40.176 12:44:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:40.176 12:44:39 -- target/perf_adq.sh@43 -- # rpc_cmd framework_start_init 00:16:40.176 12:44:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:40.176 12:44:39 -- common/autotest_common.sh@10 -- # set +x 00:16:40.433 12:44:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:40.434 12:44:39 -- target/perf_adq.sh@44 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:16:40.434 12:44:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:40.434 12:44:39 -- common/autotest_common.sh@10 -- # set +x 00:16:40.434 [2024-04-16 12:44:39.320328] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:40.434 12:44:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:40.434 12:44:39 -- target/perf_adq.sh@45 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 
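The rpc_cmd calls traced here and continued below are the target-side half of the ADQ setup. rpc_cmd in the test harness is a thin wrapper around scripts/rpc.py aimed at the target's RPC socket, so the same configuration can be replayed directly (flags copied verbatim from this trace; note --enable-placement-id 1 and --sock-priority 1 here, where the first, non-ADQ target at 12:44:17 used 0 for both):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix
$rpc framework_start_init
$rpc nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1
$rpc bdev_malloc_create 64 512 -b Malloc1
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

--sock-priority 1 marks accepted NVMe/TCP sockets with the priority that the mqprio/flower configuration maps to the dedicated traffic class, tying the two halves together.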
00:16:40.434 12:44:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:40.434 12:44:39 -- common/autotest_common.sh@10 -- # set +x 00:16:40.434 Malloc1 00:16:40.434 12:44:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:40.434 12:44:39 -- target/perf_adq.sh@46 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:40.434 12:44:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:40.434 12:44:39 -- common/autotest_common.sh@10 -- # set +x 00:16:40.434 12:44:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:40.434 12:44:39 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:40.434 12:44:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:40.434 12:44:39 -- common/autotest_common.sh@10 -- # set +x 00:16:40.434 12:44:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:40.434 12:44:39 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:40.434 12:44:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:40.434 12:44:39 -- common/autotest_common.sh@10 -- # set +x 00:16:40.434 [2024-04-16 12:44:39.372280] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:40.434 12:44:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:40.434 12:44:39 -- target/perf_adq.sh@94 -- # perfpid=1201801 00:16:40.434 12:44:39 -- target/perf_adq.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:16:40.434 12:44:39 -- target/perf_adq.sh@95 -- # sleep 2 00:16:40.434 EAL: No free 2048 kB hugepages reported on node 1 00:16:42.355 12:44:41 -- target/perf_adq.sh@97 -- # rpc_cmd nvmf_get_stats 00:16:42.356 12:44:41 -- target/perf_adq.sh@97 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:16:42.356 12:44:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:42.356 12:44:41 -- common/autotest_common.sh@10 -- # set +x 00:16:42.356 12:44:41 -- target/perf_adq.sh@97 -- # wc -l 00:16:42.356 12:44:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:42.356 12:44:41 -- target/perf_adq.sh@97 -- # count=2 00:16:42.356 12:44:41 -- target/perf_adq.sh@98 -- # [[ 2 -lt 2 ]] 00:16:42.356 12:44:41 -- target/perf_adq.sh@103 -- # wait 1201801 00:16:50.464 Initializing NVMe Controllers 00:16:50.464 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:50.464 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:16:50.464 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:16:50.464 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:16:50.464 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:16:50.464 Initialization complete. Launching workers. 
00:16:50.464 ======================================================== 00:16:50.464 Latency(us) 00:16:50.464 Device Information : IOPS MiB/s Average min max 00:16:50.464 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 4573.00 17.86 13997.51 2292.22 61268.22 00:16:50.464 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 4686.50 18.31 13656.42 1989.48 62288.18 00:16:50.464 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 4662.70 18.21 13728.21 1917.16 61794.64 00:16:50.464 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 12874.60 50.29 4970.94 1490.48 46009.44 00:16:50.464 ======================================================== 00:16:50.464 Total : 26796.80 104.67 9554.16 1490.48 62288.18 00:16:50.464 00:16:50.464 12:44:49 -- target/perf_adq.sh@104 -- # nvmftestfini 00:16:50.464 12:44:49 -- nvmf/common.sh@477 -- # nvmfcleanup 00:16:50.464 12:44:49 -- nvmf/common.sh@117 -- # sync 00:16:50.464 12:44:49 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:50.464 12:44:49 -- nvmf/common.sh@120 -- # set +e 00:16:50.464 12:44:49 -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:50.464 12:44:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:50.464 rmmod nvme_tcp 00:16:50.721 rmmod nvme_fabrics 00:16:50.721 rmmod nvme_keyring 00:16:50.721 12:44:49 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:50.721 12:44:49 -- nvmf/common.sh@124 -- # set -e 00:16:50.721 12:44:49 -- nvmf/common.sh@125 -- # return 0 00:16:50.721 12:44:49 -- nvmf/common.sh@478 -- # '[' -n 1201586 ']' 00:16:50.721 12:44:49 -- nvmf/common.sh@479 -- # killprocess 1201586 00:16:50.721 12:44:49 -- common/autotest_common.sh@936 -- # '[' -z 1201586 ']' 00:16:50.721 12:44:49 -- common/autotest_common.sh@940 -- # kill -0 1201586 00:16:50.721 12:44:49 -- common/autotest_common.sh@941 -- # uname 00:16:50.722 12:44:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:50.722 12:44:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1201586 00:16:50.722 12:44:49 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:50.722 12:44:49 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:50.722 12:44:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1201586' 00:16:50.722 killing process with pid 1201586 00:16:50.722 12:44:49 -- common/autotest_common.sh@955 -- # kill 1201586 00:16:50.722 12:44:49 -- common/autotest_common.sh@960 -- # wait 1201586 00:16:50.981 12:44:49 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:50.981 12:44:49 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:16:50.981 12:44:49 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:16:50.981 12:44:49 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:50.981 12:44:49 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:50.981 12:44:49 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:50.981 12:44:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:50.981 12:44:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:54.267 12:44:52 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:54.267 12:44:52 -- target/perf_adq.sh@106 -- # trap - SIGINT SIGTERM EXIT 00:16:54.267 00:16:54.267 real 0m46.148s 00:16:54.267 user 2m42.566s 00:16:54.267 sys 0m10.198s 00:16:54.267 12:44:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:54.267 12:44:52 -- common/autotest_common.sh@10 -- # set +x 00:16:54.267 
************************************ 00:16:54.267 END TEST nvmf_perf_adq 00:16:54.267 ************************************ 00:16:54.267 12:44:52 -- nvmf/nvmf.sh@81 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:16:54.267 12:44:52 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:54.267 12:44:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:54.267 12:44:52 -- common/autotest_common.sh@10 -- # set +x 00:16:54.267 ************************************ 00:16:54.267 START TEST nvmf_shutdown 00:16:54.267 ************************************ 00:16:54.267 12:44:53 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:16:54.267 * Looking for test storage... 00:16:54.267 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:54.267 12:44:53 -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:54.267 12:44:53 -- nvmf/common.sh@7 -- # uname -s 00:16:54.267 12:44:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:54.267 12:44:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:54.267 12:44:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:54.267 12:44:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:54.267 12:44:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:54.267 12:44:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:54.267 12:44:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:54.267 12:44:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:54.267 12:44:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:54.267 12:44:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:54.267 12:44:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:16:54.267 12:44:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:16:54.267 12:44:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:54.267 12:44:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:54.267 12:44:53 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:54.267 12:44:53 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:54.267 12:44:53 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:54.267 12:44:53 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:54.267 12:44:53 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:54.267 12:44:53 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:54.267 12:44:53 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:54.267 12:44:53 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:54.267 12:44:53 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:54.267 12:44:53 -- paths/export.sh@5 -- # export PATH 00:16:54.267 12:44:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:54.267 12:44:53 -- nvmf/common.sh@47 -- # : 0 00:16:54.267 12:44:53 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:54.267 12:44:53 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:54.267 12:44:53 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:54.267 12:44:53 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:54.267 12:44:53 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:54.267 12:44:53 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:54.267 12:44:53 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:54.267 12:44:53 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:54.267 12:44:53 -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:54.267 12:44:53 -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:54.267 12:44:53 -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:16:54.267 12:44:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:54.267 12:44:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:54.267 12:44:53 -- common/autotest_common.sh@10 -- # set +x 00:16:54.267 ************************************ 00:16:54.267 START TEST nvmf_shutdown_tc1 00:16:54.267 ************************************ 00:16:54.267 12:44:53 -- common/autotest_common.sh@1111 -- # nvmf_shutdown_tc1 00:16:54.267 12:44:53 -- target/shutdown.sh@74 -- # starttarget 00:16:54.267 12:44:53 -- target/shutdown.sh@15 -- # nvmftestinit 00:16:54.267 12:44:53 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:16:54.267 12:44:53 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:54.267 12:44:53 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:54.267 12:44:53 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:54.267 12:44:53 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:54.267 
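The oversized PATH lines above are expected noise rather than corruption: paths/export.sh prepends the golangci, protoc, and go directories on every source, and by this point in the job it has been sourced several times, so the same prefixes stack up. If one wanted to collapse such duplicates, a small sketch (dedupe_path is illustrative only, not a helper in this repo):

    dedupe_path() {
        # keep the first occurrence of each PATH entry, preserving order
        PATH=$(printf '%s' "$PATH" | awk -v RS=: -v ORS=: '!seen[$0]++')
        PATH=${PATH%:}   # drop the trailing separator awk leaves behind
        export PATH
    }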
12:44:53 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:54.267 12:44:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:54.267 12:44:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:54.267 12:44:53 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:16:54.267 12:44:53 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:16:54.267 12:44:53 -- nvmf/common.sh@285 -- # xtrace_disable 00:16:54.267 12:44:53 -- common/autotest_common.sh@10 -- # set +x 00:16:56.798 12:44:55 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:56.798 12:44:55 -- nvmf/common.sh@291 -- # pci_devs=() 00:16:56.798 12:44:55 -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:56.798 12:44:55 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:56.798 12:44:55 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:56.798 12:44:55 -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:56.798 12:44:55 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:56.798 12:44:55 -- nvmf/common.sh@295 -- # net_devs=() 00:16:56.798 12:44:55 -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:56.798 12:44:55 -- nvmf/common.sh@296 -- # e810=() 00:16:56.798 12:44:55 -- nvmf/common.sh@296 -- # local -ga e810 00:16:56.798 12:44:55 -- nvmf/common.sh@297 -- # x722=() 00:16:56.798 12:44:55 -- nvmf/common.sh@297 -- # local -ga x722 00:16:56.798 12:44:55 -- nvmf/common.sh@298 -- # mlx=() 00:16:56.798 12:44:55 -- nvmf/common.sh@298 -- # local -ga mlx 00:16:56.798 12:44:55 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:56.798 12:44:55 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:56.798 12:44:55 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:56.798 12:44:55 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:56.798 12:44:55 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:56.798 12:44:55 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:56.798 12:44:55 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:56.798 12:44:55 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:56.798 12:44:55 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:56.798 12:44:55 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:56.798 12:44:55 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:56.798 12:44:55 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:56.798 12:44:55 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:56.798 12:44:55 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:56.798 12:44:55 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:56.798 12:44:55 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:56.798 12:44:55 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:56.798 12:44:55 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:56.798 12:44:55 -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:16:56.798 Found 0000:82:00.0 (0x8086 - 0x159b) 00:16:56.798 12:44:55 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:56.798 12:44:55 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:56.798 12:44:55 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:56.798 12:44:55 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:56.798 12:44:55 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:56.798 12:44:55 -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:16:56.798 12:44:55 -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:16:56.798 Found 0000:82:00.1 (0x8086 - 0x159b) 00:16:56.798 12:44:55 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:56.798 12:44:55 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:56.798 12:44:55 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:56.798 12:44:55 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:56.798 12:44:55 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:56.798 12:44:55 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:56.798 12:44:55 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:56.798 12:44:55 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:56.798 12:44:55 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:56.798 12:44:55 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:56.798 12:44:55 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:56.798 12:44:55 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:56.798 12:44:55 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:16:56.798 Found net devices under 0000:82:00.0: cvl_0_0 00:16:56.798 12:44:55 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:56.798 12:44:55 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:56.798 12:44:55 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:56.798 12:44:55 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:56.798 12:44:55 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:56.798 12:44:55 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:16:56.798 Found net devices under 0000:82:00.1: cvl_0_1 00:16:56.798 12:44:55 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:56.798 12:44:55 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:16:56.798 12:44:55 -- nvmf/common.sh@403 -- # is_hw=yes 00:16:56.798 12:44:55 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:16:56.798 12:44:55 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:16:56.798 12:44:55 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:16:56.798 12:44:55 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:56.798 12:44:55 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:56.798 12:44:55 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:56.798 12:44:55 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:56.798 12:44:55 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:56.798 12:44:55 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:56.798 12:44:55 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:56.798 12:44:55 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:56.798 12:44:55 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:56.798 12:44:55 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:56.798 12:44:55 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:56.798 12:44:55 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:56.798 12:44:55 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:56.798 12:44:55 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:56.798 12:44:55 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:56.798 12:44:55 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:56.798 12:44:55 -- nvmf/common.sh@260 
-- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:57.065 12:44:55 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:57.065 12:44:55 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:57.065 12:44:55 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:57.065 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:57.065 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.173 ms 00:16:57.065 00:16:57.065 --- 10.0.0.2 ping statistics --- 00:16:57.065 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:57.065 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:16:57.065 12:44:55 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:57.065 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:57.065 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms 00:16:57.065 00:16:57.065 --- 10.0.0.1 ping statistics --- 00:16:57.065 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:57.065 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:16:57.065 12:44:55 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:57.065 12:44:55 -- nvmf/common.sh@411 -- # return 0 00:16:57.065 12:44:55 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:16:57.065 12:44:55 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:57.065 12:44:55 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:16:57.065 12:44:55 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:16:57.065 12:44:55 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:57.065 12:44:55 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:16:57.065 12:44:55 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:16:57.065 12:44:55 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:16:57.065 12:44:55 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:57.065 12:44:55 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:57.065 12:44:55 -- common/autotest_common.sh@10 -- # set +x 00:16:57.065 12:44:55 -- nvmf/common.sh@470 -- # nvmfpid=1205409 00:16:57.065 12:44:55 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:16:57.065 12:44:55 -- nvmf/common.sh@471 -- # waitforlisten 1205409 00:16:57.065 12:44:55 -- common/autotest_common.sh@817 -- # '[' -z 1205409 ']' 00:16:57.065 12:44:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:57.065 12:44:55 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:57.065 12:44:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:57.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:57.065 12:44:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:57.065 12:44:55 -- common/autotest_common.sh@10 -- # set +x 00:16:57.065 [2024-04-16 12:44:55.970328] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 
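nvmf_tcp_init, traced above, builds the two-sided test topology out of the two E810 ports: cvl_0_0 moves into the cvl_0_0_ns_spdk namespace as the target interface (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), and the two pings verify both directions before any NVMe traffic flows. The same commands, condensed from the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                    # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator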
00:16:57.065 [2024-04-16 12:44:55.970416] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:57.065 EAL: No free 2048 kB hugepages reported on node 1 00:16:57.065 [2024-04-16 12:44:56.047261] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:57.322 [2024-04-16 12:44:56.157733] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:57.322 [2024-04-16 12:44:56.157788] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:57.322 [2024-04-16 12:44:56.157804] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:57.322 [2024-04-16 12:44:56.157816] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:57.322 [2024-04-16 12:44:56.157827] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:57.322 [2024-04-16 12:44:56.157951] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:57.323 [2024-04-16 12:44:56.158014] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:57.323 [2024-04-16 12:44:56.158080] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:57.323 [2024-04-16 12:44:56.158082] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:57.887 12:44:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:57.887 12:44:56 -- common/autotest_common.sh@850 -- # return 0 00:16:57.887 12:44:56 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:16:57.887 12:44:56 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:57.887 12:44:56 -- common/autotest_common.sh@10 -- # set +x 00:16:57.887 12:44:56 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:57.887 12:44:56 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:57.887 12:44:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:57.887 12:44:56 -- common/autotest_common.sh@10 -- # set +x 00:16:57.887 [2024-04-16 12:44:56.934464] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:57.887 12:44:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:57.887 12:44:56 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:16:57.887 12:44:56 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:16:57.887 12:44:56 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:57.887 12:44:56 -- common/autotest_common.sh@10 -- # set +x 00:16:57.888 12:44:56 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:57.888 12:44:56 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:57.888 12:44:56 -- target/shutdown.sh@28 -- # cat 00:16:57.888 12:44:56 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:57.888 12:44:56 -- target/shutdown.sh@28 -- # cat 00:16:57.888 12:44:56 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:57.888 12:44:56 -- target/shutdown.sh@28 -- # cat 00:16:57.888 12:44:56 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:57.888 12:44:56 -- target/shutdown.sh@28 -- # cat 00:16:57.888 12:44:56 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:57.888 12:44:56 -- target/shutdown.sh@28 
-- # cat 00:16:57.888 12:44:56 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:58.146 12:44:56 -- target/shutdown.sh@28 -- # cat 00:16:58.146 12:44:56 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:58.146 12:44:56 -- target/shutdown.sh@28 -- # cat 00:16:58.146 12:44:56 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:58.146 12:44:56 -- target/shutdown.sh@28 -- # cat 00:16:58.146 12:44:56 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:58.146 12:44:56 -- target/shutdown.sh@28 -- # cat 00:16:58.146 12:44:56 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:58.146 12:44:56 -- target/shutdown.sh@28 -- # cat 00:16:58.146 12:44:56 -- target/shutdown.sh@35 -- # rpc_cmd 00:16:58.146 12:44:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:58.146 12:44:56 -- common/autotest_common.sh@10 -- # set +x 00:16:58.146 Malloc1 00:16:58.146 [2024-04-16 12:44:57.009455] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:58.146 Malloc2 00:16:58.146 Malloc3 00:16:58.146 Malloc4 00:16:58.146 Malloc5 00:16:58.404 Malloc6 00:16:58.404 Malloc7 00:16:58.404 Malloc8 00:16:58.404 Malloc9 00:16:58.404 Malloc10 00:16:58.404 12:44:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:58.404 12:44:57 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:16:58.404 12:44:57 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:58.404 12:44:57 -- common/autotest_common.sh@10 -- # set +x 00:16:58.663 12:44:57 -- target/shutdown.sh@78 -- # perfpid=1205602 00:16:58.663 12:44:57 -- target/shutdown.sh@79 -- # waitforlisten 1205602 /var/tmp/bdevperf.sock 00:16:58.663 12:44:57 -- common/autotest_common.sh@817 -- # '[' -z 1205602 ']' 00:16:58.663 12:44:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:58.663 12:44:57 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:16:58.663 12:44:57 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:16:58.663 12:44:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:58.663 12:44:57 -- nvmf/common.sh@521 -- # config=() 00:16:58.663 12:44:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:58.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
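The /dev/fd/63 in the bdev_svc command line above is bash process substitution: the generated target JSON is piped straight into the app instead of being written to disk. With the substitution spelled out (paths from this log), the launch is:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK/test/app/bdev_svc/bdev_svc" -m 0x1 -i 1 -r /var/tmp/bdevperf.sock \
        --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10)

gen_nvmf_target_json is the nvmf/common.sh helper whose heredoc expansion fills the trace that follows.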
00:16:58.663 12:44:57 -- nvmf/common.sh@521 -- # local subsystem config 00:16:58.663 12:44:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:58.663 12:44:57 -- common/autotest_common.sh@10 -- # set +x 00:16:58.663 12:44:57 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:58.663 12:44:57 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:58.663 { 00:16:58.663 "params": { 00:16:58.663 "name": "Nvme$subsystem", 00:16:58.663 "trtype": "$TEST_TRANSPORT", 00:16:58.663 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:58.663 "adrfam": "ipv4", 00:16:58.663 "trsvcid": "$NVMF_PORT", 00:16:58.663 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:58.663 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:58.663 "hdgst": ${hdgst:-false}, 00:16:58.663 "ddgst": ${ddgst:-false} 00:16:58.663 }, 00:16:58.663 "method": "bdev_nvme_attach_controller" 00:16:58.663 } 00:16:58.663 EOF 00:16:58.663 )") 00:16:58.663 12:44:57 -- nvmf/common.sh@543 -- # cat 00:16:58.663 12:44:57 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:58.663 12:44:57 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:58.663 { 00:16:58.663 "params": { 00:16:58.663 "name": "Nvme$subsystem", 00:16:58.663 "trtype": "$TEST_TRANSPORT", 00:16:58.663 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:58.663 "adrfam": "ipv4", 00:16:58.663 "trsvcid": "$NVMF_PORT", 00:16:58.663 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:58.663 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:58.663 "hdgst": ${hdgst:-false}, 00:16:58.663 "ddgst": ${ddgst:-false} 00:16:58.663 }, 00:16:58.663 "method": "bdev_nvme_attach_controller" 00:16:58.663 } 00:16:58.663 EOF 00:16:58.663 )") 00:16:58.663 12:44:57 -- nvmf/common.sh@543 -- # cat 00:16:58.663 12:44:57 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:58.663 12:44:57 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:58.663 { 00:16:58.663 "params": { 00:16:58.663 "name": "Nvme$subsystem", 00:16:58.663 "trtype": "$TEST_TRANSPORT", 00:16:58.663 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:58.663 "adrfam": "ipv4", 00:16:58.663 "trsvcid": "$NVMF_PORT", 00:16:58.663 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:58.663 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:58.663 "hdgst": ${hdgst:-false}, 00:16:58.663 "ddgst": ${ddgst:-false} 00:16:58.663 }, 00:16:58.663 "method": "bdev_nvme_attach_controller" 00:16:58.663 } 00:16:58.663 EOF 00:16:58.663 )") 00:16:58.663 12:44:57 -- nvmf/common.sh@543 -- # cat 00:16:58.663 12:44:57 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:58.663 12:44:57 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:58.663 { 00:16:58.663 "params": { 00:16:58.663 "name": "Nvme$subsystem", 00:16:58.663 "trtype": "$TEST_TRANSPORT", 00:16:58.663 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:58.663 "adrfam": "ipv4", 00:16:58.663 "trsvcid": "$NVMF_PORT", 00:16:58.663 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:58.663 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:58.664 "hdgst": ${hdgst:-false}, 00:16:58.664 "ddgst": ${ddgst:-false} 00:16:58.664 }, 00:16:58.664 "method": "bdev_nvme_attach_controller" 00:16:58.664 } 00:16:58.664 EOF 00:16:58.664 )") 00:16:58.664 12:44:57 -- nvmf/common.sh@543 -- # cat 00:16:58.664 12:44:57 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:58.664 12:44:57 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:58.664 { 00:16:58.664 "params": { 00:16:58.664 "name": "Nvme$subsystem", 00:16:58.664 "trtype": "$TEST_TRANSPORT", 00:16:58.664 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:16:58.664 "adrfam": "ipv4", 00:16:58.664 "trsvcid": "$NVMF_PORT", 00:16:58.664 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:58.664 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:58.664 "hdgst": ${hdgst:-false}, 00:16:58.664 "ddgst": ${ddgst:-false} 00:16:58.664 }, 00:16:58.664 "method": "bdev_nvme_attach_controller" 00:16:58.664 } 00:16:58.664 EOF 00:16:58.664 )") 00:16:58.664 12:44:57 -- nvmf/common.sh@543 -- # cat 00:16:58.664 12:44:57 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:58.664 12:44:57 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:58.664 { 00:16:58.664 "params": { 00:16:58.664 "name": "Nvme$subsystem", 00:16:58.664 "trtype": "$TEST_TRANSPORT", 00:16:58.664 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:58.664 "adrfam": "ipv4", 00:16:58.664 "trsvcid": "$NVMF_PORT", 00:16:58.664 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:58.664 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:58.664 "hdgst": ${hdgst:-false}, 00:16:58.664 "ddgst": ${ddgst:-false} 00:16:58.664 }, 00:16:58.664 "method": "bdev_nvme_attach_controller" 00:16:58.664 } 00:16:58.664 EOF 00:16:58.664 )") 00:16:58.664 12:44:57 -- nvmf/common.sh@543 -- # cat 00:16:58.664 12:44:57 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:58.664 12:44:57 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:58.664 { 00:16:58.664 "params": { 00:16:58.664 "name": "Nvme$subsystem", 00:16:58.664 "trtype": "$TEST_TRANSPORT", 00:16:58.664 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:58.664 "adrfam": "ipv4", 00:16:58.664 "trsvcid": "$NVMF_PORT", 00:16:58.664 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:58.664 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:58.664 "hdgst": ${hdgst:-false}, 00:16:58.664 "ddgst": ${ddgst:-false} 00:16:58.664 }, 00:16:58.664 "method": "bdev_nvme_attach_controller" 00:16:58.664 } 00:16:58.664 EOF 00:16:58.664 )") 00:16:58.664 12:44:57 -- nvmf/common.sh@543 -- # cat 00:16:58.664 12:44:57 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:58.664 12:44:57 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:58.664 { 00:16:58.664 "params": { 00:16:58.664 "name": "Nvme$subsystem", 00:16:58.664 "trtype": "$TEST_TRANSPORT", 00:16:58.664 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:58.664 "adrfam": "ipv4", 00:16:58.664 "trsvcid": "$NVMF_PORT", 00:16:58.664 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:58.664 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:58.664 "hdgst": ${hdgst:-false}, 00:16:58.664 "ddgst": ${ddgst:-false} 00:16:58.664 }, 00:16:58.664 "method": "bdev_nvme_attach_controller" 00:16:58.664 } 00:16:58.664 EOF 00:16:58.664 )") 00:16:58.664 12:44:57 -- nvmf/common.sh@543 -- # cat 00:16:58.664 12:44:57 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:58.664 12:44:57 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:58.664 { 00:16:58.664 "params": { 00:16:58.664 "name": "Nvme$subsystem", 00:16:58.664 "trtype": "$TEST_TRANSPORT", 00:16:58.664 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:58.664 "adrfam": "ipv4", 00:16:58.664 "trsvcid": "$NVMF_PORT", 00:16:58.664 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:58.664 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:58.664 "hdgst": ${hdgst:-false}, 00:16:58.664 "ddgst": ${ddgst:-false} 00:16:58.664 }, 00:16:58.664 "method": "bdev_nvme_attach_controller" 00:16:58.664 } 00:16:58.664 EOF 00:16:58.664 )") 00:16:58.664 12:44:57 -- nvmf/common.sh@543 -- # cat 00:16:58.664 12:44:57 -- nvmf/common.sh@523 -- # for 
subsystem in "${@:-1}" 00:16:58.664 12:44:57 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:58.664 { 00:16:58.664 "params": { 00:16:58.664 "name": "Nvme$subsystem", 00:16:58.664 "trtype": "$TEST_TRANSPORT", 00:16:58.664 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:58.664 "adrfam": "ipv4", 00:16:58.664 "trsvcid": "$NVMF_PORT", 00:16:58.664 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:58.664 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:58.664 "hdgst": ${hdgst:-false}, 00:16:58.664 "ddgst": ${ddgst:-false} 00:16:58.664 }, 00:16:58.664 "method": "bdev_nvme_attach_controller" 00:16:58.664 } 00:16:58.664 EOF 00:16:58.664 )") 00:16:58.664 12:44:57 -- nvmf/common.sh@543 -- # cat 00:16:58.664 12:44:57 -- nvmf/common.sh@545 -- # jq . 00:16:58.664 12:44:57 -- nvmf/common.sh@546 -- # IFS=, 00:16:58.664 12:44:57 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:16:58.664 "params": { 00:16:58.664 "name": "Nvme1", 00:16:58.664 "trtype": "tcp", 00:16:58.664 "traddr": "10.0.0.2", 00:16:58.664 "adrfam": "ipv4", 00:16:58.664 "trsvcid": "4420", 00:16:58.664 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:58.664 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:58.664 "hdgst": false, 00:16:58.664 "ddgst": false 00:16:58.664 }, 00:16:58.664 "method": "bdev_nvme_attach_controller" 00:16:58.664 },{ 00:16:58.664 "params": { 00:16:58.664 "name": "Nvme2", 00:16:58.664 "trtype": "tcp", 00:16:58.664 "traddr": "10.0.0.2", 00:16:58.664 "adrfam": "ipv4", 00:16:58.664 "trsvcid": "4420", 00:16:58.664 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:16:58.664 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:16:58.664 "hdgst": false, 00:16:58.664 "ddgst": false 00:16:58.664 }, 00:16:58.664 "method": "bdev_nvme_attach_controller" 00:16:58.664 },{ 00:16:58.664 "params": { 00:16:58.664 "name": "Nvme3", 00:16:58.664 "trtype": "tcp", 00:16:58.664 "traddr": "10.0.0.2", 00:16:58.664 "adrfam": "ipv4", 00:16:58.664 "trsvcid": "4420", 00:16:58.664 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:16:58.664 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:16:58.664 "hdgst": false, 00:16:58.664 "ddgst": false 00:16:58.664 }, 00:16:58.664 "method": "bdev_nvme_attach_controller" 00:16:58.664 },{ 00:16:58.664 "params": { 00:16:58.664 "name": "Nvme4", 00:16:58.664 "trtype": "tcp", 00:16:58.664 "traddr": "10.0.0.2", 00:16:58.664 "adrfam": "ipv4", 00:16:58.664 "trsvcid": "4420", 00:16:58.664 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:16:58.664 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:16:58.664 "hdgst": false, 00:16:58.664 "ddgst": false 00:16:58.664 }, 00:16:58.664 "method": "bdev_nvme_attach_controller" 00:16:58.664 },{ 00:16:58.664 "params": { 00:16:58.664 "name": "Nvme5", 00:16:58.664 "trtype": "tcp", 00:16:58.664 "traddr": "10.0.0.2", 00:16:58.664 "adrfam": "ipv4", 00:16:58.664 "trsvcid": "4420", 00:16:58.664 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:16:58.664 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:16:58.664 "hdgst": false, 00:16:58.664 "ddgst": false 00:16:58.664 }, 00:16:58.664 "method": "bdev_nvme_attach_controller" 00:16:58.664 },{ 00:16:58.664 "params": { 00:16:58.664 "name": "Nvme6", 00:16:58.664 "trtype": "tcp", 00:16:58.664 "traddr": "10.0.0.2", 00:16:58.664 "adrfam": "ipv4", 00:16:58.664 "trsvcid": "4420", 00:16:58.664 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:16:58.664 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:16:58.664 "hdgst": false, 00:16:58.664 "ddgst": false 00:16:58.664 }, 00:16:58.664 "method": "bdev_nvme_attach_controller" 00:16:58.664 },{ 00:16:58.664 "params": { 00:16:58.664 "name": "Nvme7", 00:16:58.664 "trtype": 
"tcp", 00:16:58.665 "traddr": "10.0.0.2", 00:16:58.665 "adrfam": "ipv4", 00:16:58.665 "trsvcid": "4420", 00:16:58.665 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:16:58.665 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:16:58.665 "hdgst": false, 00:16:58.665 "ddgst": false 00:16:58.665 }, 00:16:58.665 "method": "bdev_nvme_attach_controller" 00:16:58.665 },{ 00:16:58.665 "params": { 00:16:58.665 "name": "Nvme8", 00:16:58.665 "trtype": "tcp", 00:16:58.665 "traddr": "10.0.0.2", 00:16:58.665 "adrfam": "ipv4", 00:16:58.665 "trsvcid": "4420", 00:16:58.665 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:16:58.665 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:16:58.665 "hdgst": false, 00:16:58.665 "ddgst": false 00:16:58.665 }, 00:16:58.665 "method": "bdev_nvme_attach_controller" 00:16:58.665 },{ 00:16:58.665 "params": { 00:16:58.665 "name": "Nvme9", 00:16:58.665 "trtype": "tcp", 00:16:58.665 "traddr": "10.0.0.2", 00:16:58.665 "adrfam": "ipv4", 00:16:58.665 "trsvcid": "4420", 00:16:58.665 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:16:58.665 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:16:58.665 "hdgst": false, 00:16:58.665 "ddgst": false 00:16:58.665 }, 00:16:58.665 "method": "bdev_nvme_attach_controller" 00:16:58.665 },{ 00:16:58.665 "params": { 00:16:58.665 "name": "Nvme10", 00:16:58.665 "trtype": "tcp", 00:16:58.665 "traddr": "10.0.0.2", 00:16:58.665 "adrfam": "ipv4", 00:16:58.665 "trsvcid": "4420", 00:16:58.665 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:16:58.665 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:16:58.665 "hdgst": false, 00:16:58.665 "ddgst": false 00:16:58.665 }, 00:16:58.665 "method": "bdev_nvme_attach_controller" 00:16:58.665 }' 00:16:58.665 [2024-04-16 12:44:57.523243] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 00:16:58.665 [2024-04-16 12:44:57.523335] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:16:58.665 EAL: No free 2048 kB hugepages reported on node 1 00:16:58.665 [2024-04-16 12:44:57.599922] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:58.665 [2024-04-16 12:44:57.707620] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:00.562 12:44:59 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:00.562 12:44:59 -- common/autotest_common.sh@850 -- # return 0 00:17:00.562 12:44:59 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:17:00.562 12:44:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:00.562 12:44:59 -- common/autotest_common.sh@10 -- # set +x 00:17:00.562 12:44:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:00.562 12:44:59 -- target/shutdown.sh@83 -- # kill -9 1205602 00:17:00.562 12:44:59 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:17:00.562 12:44:59 -- target/shutdown.sh@87 -- # sleep 1 00:17:01.127 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 1205602 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:17:01.127 12:45:00 -- target/shutdown.sh@88 -- # kill -0 1205409 00:17:01.127 12:45:00 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:17:01.127 12:45:00 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 
00:17:01.127 12:45:00 -- nvmf/common.sh@521 -- # config=() 00:17:01.127 12:45:00 -- nvmf/common.sh@521 -- # local subsystem config 00:17:01.127 12:45:00 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:01.127 12:45:00 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:01.127 { 00:17:01.127 "params": { 00:17:01.127 "name": "Nvme$subsystem", 00:17:01.127 "trtype": "$TEST_TRANSPORT", 00:17:01.127 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:01.127 "adrfam": "ipv4", 00:17:01.127 "trsvcid": "$NVMF_PORT", 00:17:01.128 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:01.128 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:01.128 "hdgst": ${hdgst:-false}, 00:17:01.128 "ddgst": ${ddgst:-false} 00:17:01.128 }, 00:17:01.128 "method": "bdev_nvme_attach_controller" 00:17:01.128 } 00:17:01.128 EOF 00:17:01.128 )") 00:17:01.128 12:45:00 -- nvmf/common.sh@543 -- # cat 00:17:01.128 12:45:00 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:01.128 12:45:00 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:01.128 { 00:17:01.128 "params": { 00:17:01.128 "name": "Nvme$subsystem", 00:17:01.128 "trtype": "$TEST_TRANSPORT", 00:17:01.128 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:01.128 "adrfam": "ipv4", 00:17:01.128 "trsvcid": "$NVMF_PORT", 00:17:01.128 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:01.128 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:01.128 "hdgst": ${hdgst:-false}, 00:17:01.128 "ddgst": ${ddgst:-false} 00:17:01.128 }, 00:17:01.128 "method": "bdev_nvme_attach_controller" 00:17:01.128 } 00:17:01.128 EOF 00:17:01.128 )") 00:17:01.128 12:45:00 -- nvmf/common.sh@543 -- # cat 00:17:01.128 12:45:00 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:01.128 12:45:00 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:01.128 { 00:17:01.128 "params": { 00:17:01.128 "name": "Nvme$subsystem", 00:17:01.128 "trtype": "$TEST_TRANSPORT", 00:17:01.128 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:01.128 "adrfam": "ipv4", 00:17:01.128 "trsvcid": "$NVMF_PORT", 00:17:01.128 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:01.128 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:01.128 "hdgst": ${hdgst:-false}, 00:17:01.128 "ddgst": ${ddgst:-false} 00:17:01.128 }, 00:17:01.128 "method": "bdev_nvme_attach_controller" 00:17:01.128 } 00:17:01.128 EOF 00:17:01.128 )") 00:17:01.128 12:45:00 -- nvmf/common.sh@543 -- # cat 00:17:01.128 12:45:00 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:01.128 12:45:00 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:01.128 { 00:17:01.128 "params": { 00:17:01.128 "name": "Nvme$subsystem", 00:17:01.128 "trtype": "$TEST_TRANSPORT", 00:17:01.128 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:01.128 "adrfam": "ipv4", 00:17:01.128 "trsvcid": "$NVMF_PORT", 00:17:01.128 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:01.128 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:01.128 "hdgst": ${hdgst:-false}, 00:17:01.128 "ddgst": ${ddgst:-false} 00:17:01.128 }, 00:17:01.128 "method": "bdev_nvme_attach_controller" 00:17:01.128 } 00:17:01.128 EOF 00:17:01.128 )") 00:17:01.128 12:45:00 -- nvmf/common.sh@543 -- # cat 00:17:01.128 12:45:00 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:01.128 12:45:00 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:01.128 { 00:17:01.128 "params": { 00:17:01.128 "name": "Nvme$subsystem", 00:17:01.128 "trtype": "$TEST_TRANSPORT", 00:17:01.128 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:01.128 "adrfam": "ipv4", 00:17:01.128 "trsvcid": "$NVMF_PORT", 
00:17:01.128 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:01.128 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:01.128 "hdgst": ${hdgst:-false}, 00:17:01.128 "ddgst": ${ddgst:-false} 00:17:01.128 }, 00:17:01.128 "method": "bdev_nvme_attach_controller" 00:17:01.128 } 00:17:01.128 EOF 00:17:01.128 )") 00:17:01.128 12:45:00 -- nvmf/common.sh@543 -- # cat 00:17:01.128 12:45:00 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:01.128 12:45:00 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:01.128 { 00:17:01.128 "params": { 00:17:01.128 "name": "Nvme$subsystem", 00:17:01.128 "trtype": "$TEST_TRANSPORT", 00:17:01.128 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:01.128 "adrfam": "ipv4", 00:17:01.128 "trsvcid": "$NVMF_PORT", 00:17:01.128 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:01.128 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:01.128 "hdgst": ${hdgst:-false}, 00:17:01.128 "ddgst": ${ddgst:-false} 00:17:01.128 }, 00:17:01.128 "method": "bdev_nvme_attach_controller" 00:17:01.128 } 00:17:01.128 EOF 00:17:01.128 )") 00:17:01.128 12:45:00 -- nvmf/common.sh@543 -- # cat 00:17:01.128 12:45:00 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:01.128 12:45:00 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:01.128 { 00:17:01.128 "params": { 00:17:01.128 "name": "Nvme$subsystem", 00:17:01.128 "trtype": "$TEST_TRANSPORT", 00:17:01.128 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:01.128 "adrfam": "ipv4", 00:17:01.128 "trsvcid": "$NVMF_PORT", 00:17:01.128 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:01.128 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:01.128 "hdgst": ${hdgst:-false}, 00:17:01.128 "ddgst": ${ddgst:-false} 00:17:01.128 }, 00:17:01.128 "method": "bdev_nvme_attach_controller" 00:17:01.128 } 00:17:01.128 EOF 00:17:01.128 )") 00:17:01.128 12:45:00 -- nvmf/common.sh@543 -- # cat 00:17:01.128 12:45:00 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:01.128 12:45:00 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:01.128 { 00:17:01.128 "params": { 00:17:01.128 "name": "Nvme$subsystem", 00:17:01.128 "trtype": "$TEST_TRANSPORT", 00:17:01.128 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:01.128 "adrfam": "ipv4", 00:17:01.128 "trsvcid": "$NVMF_PORT", 00:17:01.128 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:01.128 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:01.128 "hdgst": ${hdgst:-false}, 00:17:01.128 "ddgst": ${ddgst:-false} 00:17:01.128 }, 00:17:01.128 "method": "bdev_nvme_attach_controller" 00:17:01.128 } 00:17:01.128 EOF 00:17:01.128 )") 00:17:01.128 12:45:00 -- nvmf/common.sh@543 -- # cat 00:17:01.128 12:45:00 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:01.128 12:45:00 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:01.128 { 00:17:01.128 "params": { 00:17:01.128 "name": "Nvme$subsystem", 00:17:01.128 "trtype": "$TEST_TRANSPORT", 00:17:01.128 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:01.128 "adrfam": "ipv4", 00:17:01.128 "trsvcid": "$NVMF_PORT", 00:17:01.128 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:01.128 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:01.128 "hdgst": ${hdgst:-false}, 00:17:01.128 "ddgst": ${ddgst:-false} 00:17:01.128 }, 00:17:01.128 "method": "bdev_nvme_attach_controller" 00:17:01.128 } 00:17:01.128 EOF 00:17:01.128 )") 00:17:01.128 12:45:00 -- nvmf/common.sh@543 -- # cat 00:17:01.128 12:45:00 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:01.128 12:45:00 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 
00:17:01.128 { 00:17:01.128 "params": { 00:17:01.128 "name": "Nvme$subsystem", 00:17:01.128 "trtype": "$TEST_TRANSPORT", 00:17:01.128 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:01.128 "adrfam": "ipv4", 00:17:01.128 "trsvcid": "$NVMF_PORT", 00:17:01.128 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:01.128 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:01.128 "hdgst": ${hdgst:-false}, 00:17:01.128 "ddgst": ${ddgst:-false} 00:17:01.128 }, 00:17:01.128 "method": "bdev_nvme_attach_controller" 00:17:01.128 } 00:17:01.128 EOF 00:17:01.128 )") 00:17:01.128 12:45:00 -- nvmf/common.sh@543 -- # cat 00:17:01.128 12:45:00 -- nvmf/common.sh@545 -- # jq . 00:17:01.387 12:45:00 -- nvmf/common.sh@546 -- # IFS=, 00:17:01.387 12:45:00 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:17:01.387 "params": { 00:17:01.387 "name": "Nvme1", 00:17:01.387 "trtype": "tcp", 00:17:01.387 "traddr": "10.0.0.2", 00:17:01.387 "adrfam": "ipv4", 00:17:01.387 "trsvcid": "4420", 00:17:01.387 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:01.387 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:01.387 "hdgst": false, 00:17:01.387 "ddgst": false 00:17:01.387 }, 00:17:01.387 "method": "bdev_nvme_attach_controller" 00:17:01.387 },{ 00:17:01.387 "params": { 00:17:01.387 "name": "Nvme2", 00:17:01.387 "trtype": "tcp", 00:17:01.387 "traddr": "10.0.0.2", 00:17:01.387 "adrfam": "ipv4", 00:17:01.387 "trsvcid": "4420", 00:17:01.387 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:17:01.387 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:17:01.387 "hdgst": false, 00:17:01.387 "ddgst": false 00:17:01.387 }, 00:17:01.387 "method": "bdev_nvme_attach_controller" 00:17:01.387 },{ 00:17:01.387 "params": { 00:17:01.387 "name": "Nvme3", 00:17:01.387 "trtype": "tcp", 00:17:01.387 "traddr": "10.0.0.2", 00:17:01.387 "adrfam": "ipv4", 00:17:01.387 "trsvcid": "4420", 00:17:01.387 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:17:01.387 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:17:01.387 "hdgst": false, 00:17:01.387 "ddgst": false 00:17:01.387 }, 00:17:01.387 "method": "bdev_nvme_attach_controller" 00:17:01.387 },{ 00:17:01.387 "params": { 00:17:01.387 "name": "Nvme4", 00:17:01.387 "trtype": "tcp", 00:17:01.387 "traddr": "10.0.0.2", 00:17:01.387 "adrfam": "ipv4", 00:17:01.387 "trsvcid": "4420", 00:17:01.387 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:17:01.387 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:17:01.387 "hdgst": false, 00:17:01.387 "ddgst": false 00:17:01.387 }, 00:17:01.387 "method": "bdev_nvme_attach_controller" 00:17:01.387 },{ 00:17:01.387 "params": { 00:17:01.387 "name": "Nvme5", 00:17:01.387 "trtype": "tcp", 00:17:01.387 "traddr": "10.0.0.2", 00:17:01.387 "adrfam": "ipv4", 00:17:01.387 "trsvcid": "4420", 00:17:01.387 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:17:01.387 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:17:01.387 "hdgst": false, 00:17:01.387 "ddgst": false 00:17:01.387 }, 00:17:01.387 "method": "bdev_nvme_attach_controller" 00:17:01.387 },{ 00:17:01.387 "params": { 00:17:01.387 "name": "Nvme6", 00:17:01.387 "trtype": "tcp", 00:17:01.387 "traddr": "10.0.0.2", 00:17:01.387 "adrfam": "ipv4", 00:17:01.387 "trsvcid": "4420", 00:17:01.387 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:17:01.387 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:17:01.387 "hdgst": false, 00:17:01.387 "ddgst": false 00:17:01.387 }, 00:17:01.387 "method": "bdev_nvme_attach_controller" 00:17:01.387 },{ 00:17:01.387 "params": { 00:17:01.387 "name": "Nvme7", 00:17:01.387 "trtype": "tcp", 00:17:01.387 "traddr": "10.0.0.2", 00:17:01.387 "adrfam": "ipv4", 00:17:01.387 "trsvcid": 
"4420", 00:17:01.387 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:17:01.387 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:17:01.387 "hdgst": false, 00:17:01.387 "ddgst": false 00:17:01.387 }, 00:17:01.387 "method": "bdev_nvme_attach_controller" 00:17:01.387 },{ 00:17:01.387 "params": { 00:17:01.387 "name": "Nvme8", 00:17:01.387 "trtype": "tcp", 00:17:01.387 "traddr": "10.0.0.2", 00:17:01.387 "adrfam": "ipv4", 00:17:01.387 "trsvcid": "4420", 00:17:01.387 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:17:01.387 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:17:01.387 "hdgst": false, 00:17:01.387 "ddgst": false 00:17:01.387 }, 00:17:01.387 "method": "bdev_nvme_attach_controller" 00:17:01.387 },{ 00:17:01.387 "params": { 00:17:01.387 "name": "Nvme9", 00:17:01.387 "trtype": "tcp", 00:17:01.387 "traddr": "10.0.0.2", 00:17:01.387 "adrfam": "ipv4", 00:17:01.387 "trsvcid": "4420", 00:17:01.387 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:17:01.387 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:17:01.387 "hdgst": false, 00:17:01.387 "ddgst": false 00:17:01.387 }, 00:17:01.387 "method": "bdev_nvme_attach_controller" 00:17:01.387 },{ 00:17:01.387 "params": { 00:17:01.387 "name": "Nvme10", 00:17:01.387 "trtype": "tcp", 00:17:01.387 "traddr": "10.0.0.2", 00:17:01.387 "adrfam": "ipv4", 00:17:01.387 "trsvcid": "4420", 00:17:01.387 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:17:01.387 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:17:01.387 "hdgst": false, 00:17:01.387 "ddgst": false 00:17:01.387 }, 00:17:01.387 "method": "bdev_nvme_attach_controller" 00:17:01.387 }' 00:17:01.387 [2024-04-16 12:45:00.203349] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 00:17:01.387 [2024-04-16 12:45:00.203429] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1206034 ] 00:17:01.387 EAL: No free 2048 kB hugepages reported on node 1 00:17:01.387 [2024-04-16 12:45:00.282515] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:01.387 [2024-04-16 12:45:00.391634] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:01.387 [2024-04-16 12:45:00.400957] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:17:02.759 Running I/O for 1 seconds... 
00:17:04.128
00:17:04.128 Latency(us)
00:17:04.128 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:04.128 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:04.128 Verification LBA range: start 0x0 length 0x400
00:17:04.128 Nvme1n1 : 1.16 220.43 13.78 0.00 0.00 285848.27 22427.88 259425.47
00:17:04.128 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:04.128 Verification LBA range: start 0x0 length 0x400
00:17:04.128 Nvme2n1 : 1.17 219.41 13.71 0.00 0.00 284325.36 27573.67 257872.02
00:17:04.128 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:04.128 Verification LBA range: start 0x0 length 0x400
00:17:04.128 Nvme3n1 : 1.11 230.94 14.43 0.00 0.00 264973.27 22039.51 259425.47
00:17:04.128 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:04.128 Verification LBA range: start 0x0 length 0x400
00:17:04.128 Nvme4n1 : 1.10 231.98 14.50 0.00 0.00 259248.73 19223.89 260978.92
00:17:04.128 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:04.128 Verification LBA range: start 0x0 length 0x400
00:17:04.128 Nvme5n1 : 1.17 218.11 13.63 0.00 0.00 271930.41 22913.33 271853.04
00:17:04.128 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:04.128 Verification LBA range: start 0x0 length 0x400
00:17:04.128 Nvme6n1 : 1.18 216.71 13.54 0.00 0.00 269669.45 22622.06 296708.17
00:17:04.128 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:04.128 Verification LBA range: start 0x0 length 0x400
00:17:04.128 Nvme7n1 : 1.15 223.28 13.96 0.00 0.00 256099.93 20291.89 242337.56
00:17:04.128 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:04.128 Verification LBA range: start 0x0 length 0x400
00:17:04.128 Nvme8n1 : 1.16 220.64 13.79 0.00 0.00 255337.43 20486.07 262532.36
00:17:04.128 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:04.128 Verification LBA range: start 0x0 length 0x400
00:17:04.128 Nvme9n1 : 1.19 269.99 16.87 0.00 0.00 205605.09 19418.07 262532.36
00:17:04.128 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:04.128 Verification LBA range: start 0x0 length 0x400
00:17:04.128 Nvme10n1 : 1.17 217.90 13.62 0.00 0.00 249789.44 22233.69 271853.04
00:17:04.128 ===================================================================================================================
00:17:04.128 Total : 2269.41 141.84 0.00 0.00 258949.14 19223.89 296708.17
00:17:04.128 12:45:03 -- target/shutdown.sh@94 -- # stoptarget
00:17:04.129 12:45:03 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:17:04.129 12:45:03 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:17:04.129 12:45:03 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:17:04.129 12:45:03 -- target/shutdown.sh@45 -- # nvmftestfini
00:17:04.129 12:45:03 -- nvmf/common.sh@477 -- # nvmfcleanup
00:17:04.129 12:45:03 -- nvmf/common.sh@117 -- # sync
00:17:04.129 12:45:03 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:17:04.129 12:45:03 -- nvmf/common.sh@120 -- # set +e
00:17:04.129 12:45:03 -- nvmf/common.sh@121 -- # for i in {1..20}
00:17:04.129 12:45:03 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:17:04.386 rmmod nvme_tcp
00:17:04.386 rmmod nvme_fabrics
00:17:04.386 rmmod
nvme_keyring 00:17:04.386 12:45:03 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:04.386 12:45:03 -- nvmf/common.sh@124 -- # set -e 00:17:04.386 12:45:03 -- nvmf/common.sh@125 -- # return 0 00:17:04.386 12:45:03 -- nvmf/common.sh@478 -- # '[' -n 1205409 ']' 00:17:04.386 12:45:03 -- nvmf/common.sh@479 -- # killprocess 1205409 00:17:04.386 12:45:03 -- common/autotest_common.sh@936 -- # '[' -z 1205409 ']' 00:17:04.386 12:45:03 -- common/autotest_common.sh@940 -- # kill -0 1205409 00:17:04.386 12:45:03 -- common/autotest_common.sh@941 -- # uname 00:17:04.386 12:45:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:04.386 12:45:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1205409 00:17:04.386 12:45:03 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:04.386 12:45:03 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:04.386 12:45:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1205409' 00:17:04.386 killing process with pid 1205409 00:17:04.386 12:45:03 -- common/autotest_common.sh@955 -- # kill 1205409 00:17:04.386 12:45:03 -- common/autotest_common.sh@960 -- # wait 1205409 00:17:04.952 12:45:03 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:17:04.952 12:45:03 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:17:04.952 12:45:03 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:17:04.952 12:45:03 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:04.953 12:45:03 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:04.953 12:45:03 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:04.953 12:45:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:04.953 12:45:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:06.853 12:45:05 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:06.853 00:17:06.853 real 0m12.566s 00:17:06.853 user 0m35.072s 00:17:06.853 sys 0m3.551s 00:17:06.853 12:45:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:06.853 12:45:05 -- common/autotest_common.sh@10 -- # set +x 00:17:06.853 ************************************ 00:17:06.853 END TEST nvmf_shutdown_tc1 00:17:06.853 ************************************ 00:17:06.853 12:45:05 -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:17:06.853 12:45:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:17:06.853 12:45:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:06.853 12:45:05 -- common/autotest_common.sh@10 -- # set +x 00:17:07.119 ************************************ 00:17:07.119 START TEST nvmf_shutdown_tc2 00:17:07.119 ************************************ 00:17:07.119 12:45:05 -- common/autotest_common.sh@1111 -- # nvmf_shutdown_tc2 00:17:07.119 12:45:05 -- target/shutdown.sh@99 -- # starttarget 00:17:07.119 12:45:05 -- target/shutdown.sh@15 -- # nvmftestinit 00:17:07.119 12:45:05 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:17:07.119 12:45:05 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:07.119 12:45:05 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:07.119 12:45:05 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:07.119 12:45:05 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:07.119 12:45:05 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:07.119 12:45:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:07.119 12:45:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:07.119 
12:45:05 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:17:07.119 12:45:05 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:17:07.119 12:45:05 -- nvmf/common.sh@285 -- # xtrace_disable 00:17:07.119 12:45:05 -- common/autotest_common.sh@10 -- # set +x 00:17:07.119 12:45:05 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:07.119 12:45:05 -- nvmf/common.sh@291 -- # pci_devs=() 00:17:07.119 12:45:05 -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:07.119 12:45:05 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:07.119 12:45:05 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:07.119 12:45:05 -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:07.119 12:45:05 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:07.119 12:45:05 -- nvmf/common.sh@295 -- # net_devs=() 00:17:07.119 12:45:05 -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:07.119 12:45:05 -- nvmf/common.sh@296 -- # e810=() 00:17:07.119 12:45:05 -- nvmf/common.sh@296 -- # local -ga e810 00:17:07.119 12:45:05 -- nvmf/common.sh@297 -- # x722=() 00:17:07.119 12:45:05 -- nvmf/common.sh@297 -- # local -ga x722 00:17:07.119 12:45:05 -- nvmf/common.sh@298 -- # mlx=() 00:17:07.119 12:45:05 -- nvmf/common.sh@298 -- # local -ga mlx 00:17:07.119 12:45:05 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:07.119 12:45:05 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:07.119 12:45:05 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:07.119 12:45:05 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:07.119 12:45:05 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:07.119 12:45:05 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:07.119 12:45:05 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:07.119 12:45:05 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:07.119 12:45:05 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:07.119 12:45:05 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:07.119 12:45:05 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:07.119 12:45:05 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:07.119 12:45:05 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:07.119 12:45:05 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:07.119 12:45:05 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:07.119 12:45:05 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:07.119 12:45:05 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:07.119 12:45:05 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:07.119 12:45:05 -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:17:07.119 Found 0000:82:00.0 (0x8086 - 0x159b) 00:17:07.119 12:45:05 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:07.119 12:45:05 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:07.119 12:45:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:07.119 12:45:05 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:07.119 12:45:05 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:07.119 12:45:05 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:07.119 12:45:05 -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:17:07.119 Found 0000:82:00.1 (0x8086 - 0x159b) 00:17:07.119 12:45:05 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:07.119 
12:45:05 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:07.119 12:45:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:07.119 12:45:05 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:07.119 12:45:05 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:07.119 12:45:05 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:07.119 12:45:05 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:07.119 12:45:05 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:07.119 12:45:05 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:07.119 12:45:05 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:07.119 12:45:05 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:07.119 12:45:05 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:07.119 12:45:05 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:17:07.119 Found net devices under 0000:82:00.0: cvl_0_0 00:17:07.119 12:45:05 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:07.119 12:45:05 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:07.119 12:45:05 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:07.119 12:45:05 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:07.119 12:45:05 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:07.119 12:45:05 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:17:07.119 Found net devices under 0000:82:00.1: cvl_0_1 00:17:07.119 12:45:05 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:07.119 12:45:05 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:17:07.119 12:45:05 -- nvmf/common.sh@403 -- # is_hw=yes 00:17:07.119 12:45:05 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:17:07.119 12:45:05 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:17:07.119 12:45:05 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:17:07.119 12:45:05 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:07.119 12:45:05 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:07.119 12:45:05 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:07.119 12:45:05 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:07.119 12:45:05 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:07.119 12:45:05 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:07.119 12:45:05 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:07.119 12:45:05 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:07.119 12:45:05 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:07.119 12:45:05 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:07.119 12:45:05 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:07.119 12:45:05 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:07.119 12:45:05 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:07.119 12:45:06 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:07.119 12:45:06 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:07.119 12:45:06 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:07.119 12:45:06 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:07.119 12:45:06 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:07.119 12:45:06 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:17:07.119 12:45:06 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:07.119 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:07.119 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.255 ms 00:17:07.119 00:17:07.119 --- 10.0.0.2 ping statistics --- 00:17:07.119 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:07.119 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms 00:17:07.119 12:45:06 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:07.119 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:07.119 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.144 ms 00:17:07.119 00:17:07.119 --- 10.0.0.1 ping statistics --- 00:17:07.119 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:07.119 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:17:07.119 12:45:06 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:07.119 12:45:06 -- nvmf/common.sh@411 -- # return 0 00:17:07.119 12:45:06 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:07.119 12:45:06 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:07.119 12:45:06 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:17:07.119 12:45:06 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:17:07.119 12:45:06 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:07.119 12:45:06 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:17:07.119 12:45:06 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:17:07.119 12:45:06 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:17:07.119 12:45:06 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:07.119 12:45:06 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:07.119 12:45:06 -- common/autotest_common.sh@10 -- # set +x 00:17:07.119 12:45:06 -- nvmf/common.sh@470 -- # nvmfpid=1206899 00:17:07.119 12:45:06 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:17:07.119 12:45:06 -- nvmf/common.sh@471 -- # waitforlisten 1206899 00:17:07.119 12:45:06 -- common/autotest_common.sh@817 -- # '[' -z 1206899 ']' 00:17:07.119 12:45:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:07.119 12:45:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:07.119 12:45:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:07.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:07.120 12:45:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:07.120 12:45:06 -- common/autotest_common.sh@10 -- # set +x 00:17:07.378 [2024-04-16 12:45:06.196495] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 00:17:07.378 [2024-04-16 12:45:06.196583] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:07.378 EAL: No free 2048 kB hugepages reported on node 1 00:17:07.378 [2024-04-16 12:45:06.277838] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:07.378 [2024-04-16 12:45:06.394676] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:17:07.378 [2024-04-16 12:45:06.394729] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:07.378 [2024-04-16 12:45:06.394759] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:07.378 [2024-04-16 12:45:06.394778] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:07.378 [2024-04-16 12:45:06.394790] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:07.378 [2024-04-16 12:45:06.394864] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:07.378 [2024-04-16 12:45:06.394948] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:07.378 [2024-04-16 12:45:06.395017] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:07.378 [2024-04-16 12:45:06.395014] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:17:08.312 12:45:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:08.312 12:45:07 -- common/autotest_common.sh@850 -- # return 0 00:17:08.312 12:45:07 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:08.312 12:45:07 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:08.312 12:45:07 -- common/autotest_common.sh@10 -- # set +x 00:17:08.312 12:45:07 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:08.312 12:45:07 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:08.312 12:45:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:08.312 12:45:07 -- common/autotest_common.sh@10 -- # set +x 00:17:08.312 [2024-04-16 12:45:07.156303] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:08.312 12:45:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:08.312 12:45:07 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:17:08.312 12:45:07 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:17:08.312 12:45:07 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:08.312 12:45:07 -- common/autotest_common.sh@10 -- # set +x 00:17:08.312 12:45:07 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:17:08.312 12:45:07 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:08.312 12:45:07 -- target/shutdown.sh@28 -- # cat 00:17:08.312 12:45:07 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:08.312 12:45:07 -- target/shutdown.sh@28 -- # cat 00:17:08.312 12:45:07 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:08.312 12:45:07 -- target/shutdown.sh@28 -- # cat 00:17:08.312 12:45:07 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:08.312 12:45:07 -- target/shutdown.sh@28 -- # cat 00:17:08.312 12:45:07 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:08.312 12:45:07 -- target/shutdown.sh@28 -- # cat 00:17:08.312 12:45:07 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:08.312 12:45:07 -- target/shutdown.sh@28 -- # cat 00:17:08.312 12:45:07 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:08.312 12:45:07 -- target/shutdown.sh@28 -- # cat 00:17:08.312 12:45:07 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:08.312 12:45:07 -- target/shutdown.sh@28 -- # cat 00:17:08.312 12:45:07 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:08.312 12:45:07 -- 
target/shutdown.sh@28 -- # cat 00:17:08.312 12:45:07 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:08.312 12:45:07 -- target/shutdown.sh@28 -- # cat 00:17:08.312 12:45:07 -- target/shutdown.sh@35 -- # rpc_cmd 00:17:08.312 12:45:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:08.312 12:45:07 -- common/autotest_common.sh@10 -- # set +x 00:17:08.312 Malloc1 00:17:08.312 [2024-04-16 12:45:07.235606] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:08.312 Malloc2 00:17:08.312 Malloc3 00:17:08.312 Malloc4 00:17:08.570 Malloc5 00:17:08.570 Malloc6 00:17:08.570 Malloc7 00:17:08.570 Malloc8 00:17:08.570 Malloc9 00:17:08.829 Malloc10 00:17:08.829 12:45:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:08.829 12:45:07 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:17:08.829 12:45:07 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:08.829 12:45:07 -- common/autotest_common.sh@10 -- # set +x 00:17:08.829 12:45:07 -- target/shutdown.sh@103 -- # perfpid=1207418 00:17:08.829 12:45:07 -- target/shutdown.sh@104 -- # waitforlisten 1207418 /var/tmp/bdevperf.sock 00:17:08.829 12:45:07 -- common/autotest_common.sh@817 -- # '[' -z 1207418 ']' 00:17:08.829 12:45:07 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:08.829 12:45:07 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:17:08.829 12:45:07 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:17:08.829 12:45:07 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:08.829 12:45:07 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:08.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
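Note on the subsystem setup traced above: the for/cat loop at target/shutdown.sh@27-28 appends one block of RPC commands per subsystem to rpcs.txt, and the single rpc_cmd call at shutdown.sh@35 then replays the whole file in one RPC session, which is why Malloc1 through Malloc10 appear in a burst. xtrace does not capture the heredoc bodies, so the sketch below is a hedged reconstruction inferred from the Malloc bdevs and the 10.0.0.2:4420 listener visible in the output; the bdev size and serial-number arguments are illustrative assumptions, not values taken from this run:

    rm -f rpcs.txt
    for i in {1..10}; do
        {   # the real script builds these lines with cat heredocs
            echo "bdev_malloc_create -b Malloc$i 64 512"
            echo "nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i"
            echo "nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i"
            echo "nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420"
        } >> rpcs.txt
    done
    rpc_cmd < rpcs.txt    # one rpc.py session replays the whole batch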
00:17:08.829 12:45:07 -- nvmf/common.sh@521 -- # config=() 00:17:08.829 12:45:07 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:08.829 12:45:07 -- nvmf/common.sh@521 -- # local subsystem config 00:17:08.829 12:45:07 -- common/autotest_common.sh@10 -- # set +x 00:17:08.829 12:45:07 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:08.829 12:45:07 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:08.829 { 00:17:08.829 "params": { 00:17:08.829 "name": "Nvme$subsystem", 00:17:08.829 "trtype": "$TEST_TRANSPORT", 00:17:08.829 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:08.829 "adrfam": "ipv4", 00:17:08.829 "trsvcid": "$NVMF_PORT", 00:17:08.829 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:08.829 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:08.829 "hdgst": ${hdgst:-false}, 00:17:08.829 "ddgst": ${ddgst:-false} 00:17:08.829 }, 00:17:08.829 "method": "bdev_nvme_attach_controller" 00:17:08.829 } 00:17:08.829 EOF 00:17:08.829 )") 00:17:08.829 12:45:07 -- nvmf/common.sh@543 -- # cat 00:17:08.829 12:45:07 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:08.829 12:45:07 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:08.829 { 00:17:08.829 "params": { 00:17:08.829 "name": "Nvme$subsystem", 00:17:08.829 "trtype": "$TEST_TRANSPORT", 00:17:08.829 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:08.829 "adrfam": "ipv4", 00:17:08.829 "trsvcid": "$NVMF_PORT", 00:17:08.829 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:08.829 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:08.829 "hdgst": ${hdgst:-false}, 00:17:08.829 "ddgst": ${ddgst:-false} 00:17:08.829 }, 00:17:08.829 "method": "bdev_nvme_attach_controller" 00:17:08.829 } 00:17:08.829 EOF 00:17:08.829 )") 00:17:08.829 12:45:07 -- nvmf/common.sh@543 -- # cat 00:17:08.829 12:45:07 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:08.829 12:45:07 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:08.829 { 00:17:08.829 "params": { 00:17:08.829 "name": "Nvme$subsystem", 00:17:08.829 "trtype": "$TEST_TRANSPORT", 00:17:08.829 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:08.830 "adrfam": "ipv4", 00:17:08.830 "trsvcid": "$NVMF_PORT", 00:17:08.830 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:08.830 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:08.830 "hdgst": ${hdgst:-false}, 00:17:08.830 "ddgst": ${ddgst:-false} 00:17:08.830 }, 00:17:08.830 "method": "bdev_nvme_attach_controller" 00:17:08.830 } 00:17:08.830 EOF 00:17:08.830 )") 00:17:08.830 12:45:07 -- nvmf/common.sh@543 -- # cat 00:17:08.830 12:45:07 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:08.830 12:45:07 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:08.830 { 00:17:08.830 "params": { 00:17:08.830 "name": "Nvme$subsystem", 00:17:08.830 "trtype": "$TEST_TRANSPORT", 00:17:08.830 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:08.830 "adrfam": "ipv4", 00:17:08.830 "trsvcid": "$NVMF_PORT", 00:17:08.830 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:08.830 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:08.830 "hdgst": ${hdgst:-false}, 00:17:08.830 "ddgst": ${ddgst:-false} 00:17:08.830 }, 00:17:08.830 "method": "bdev_nvme_attach_controller" 00:17:08.830 } 00:17:08.830 EOF 00:17:08.830 )") 00:17:08.830 12:45:07 -- nvmf/common.sh@543 -- # cat 00:17:08.830 12:45:07 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:08.830 12:45:07 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:08.830 { 00:17:08.830 "params": { 00:17:08.830 "name": "Nvme$subsystem", 00:17:08.830 "trtype": 
"$TEST_TRANSPORT", 00:17:08.830 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:08.830 "adrfam": "ipv4", 00:17:08.830 "trsvcid": "$NVMF_PORT", 00:17:08.830 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:08.830 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:08.830 "hdgst": ${hdgst:-false}, 00:17:08.830 "ddgst": ${ddgst:-false} 00:17:08.830 }, 00:17:08.830 "method": "bdev_nvme_attach_controller" 00:17:08.830 } 00:17:08.830 EOF 00:17:08.830 )") 00:17:08.830 12:45:07 -- nvmf/common.sh@543 -- # cat 00:17:08.830 12:45:07 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:08.830 12:45:07 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:08.830 { 00:17:08.830 "params": { 00:17:08.830 "name": "Nvme$subsystem", 00:17:08.830 "trtype": "$TEST_TRANSPORT", 00:17:08.830 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:08.830 "adrfam": "ipv4", 00:17:08.830 "trsvcid": "$NVMF_PORT", 00:17:08.830 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:08.830 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:08.830 "hdgst": ${hdgst:-false}, 00:17:08.830 "ddgst": ${ddgst:-false} 00:17:08.830 }, 00:17:08.830 "method": "bdev_nvme_attach_controller" 00:17:08.830 } 00:17:08.830 EOF 00:17:08.830 )") 00:17:08.830 12:45:07 -- nvmf/common.sh@543 -- # cat 00:17:08.830 12:45:07 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:08.830 12:45:07 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:08.830 { 00:17:08.830 "params": { 00:17:08.830 "name": "Nvme$subsystem", 00:17:08.830 "trtype": "$TEST_TRANSPORT", 00:17:08.830 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:08.830 "adrfam": "ipv4", 00:17:08.830 "trsvcid": "$NVMF_PORT", 00:17:08.830 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:08.830 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:08.830 "hdgst": ${hdgst:-false}, 00:17:08.830 "ddgst": ${ddgst:-false} 00:17:08.830 }, 00:17:08.830 "method": "bdev_nvme_attach_controller" 00:17:08.830 } 00:17:08.830 EOF 00:17:08.830 )") 00:17:08.830 12:45:07 -- nvmf/common.sh@543 -- # cat 00:17:08.830 12:45:07 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:08.830 12:45:07 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:08.830 { 00:17:08.830 "params": { 00:17:08.830 "name": "Nvme$subsystem", 00:17:08.830 "trtype": "$TEST_TRANSPORT", 00:17:08.830 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:08.830 "adrfam": "ipv4", 00:17:08.830 "trsvcid": "$NVMF_PORT", 00:17:08.830 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:08.830 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:08.830 "hdgst": ${hdgst:-false}, 00:17:08.830 "ddgst": ${ddgst:-false} 00:17:08.830 }, 00:17:08.830 "method": "bdev_nvme_attach_controller" 00:17:08.830 } 00:17:08.830 EOF 00:17:08.830 )") 00:17:08.830 12:45:07 -- nvmf/common.sh@543 -- # cat 00:17:08.830 12:45:07 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:08.830 12:45:07 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:08.830 { 00:17:08.830 "params": { 00:17:08.830 "name": "Nvme$subsystem", 00:17:08.830 "trtype": "$TEST_TRANSPORT", 00:17:08.830 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:08.830 "adrfam": "ipv4", 00:17:08.830 "trsvcid": "$NVMF_PORT", 00:17:08.830 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:08.830 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:08.830 "hdgst": ${hdgst:-false}, 00:17:08.830 "ddgst": ${ddgst:-false} 00:17:08.830 }, 00:17:08.830 "method": "bdev_nvme_attach_controller" 00:17:08.830 } 00:17:08.830 EOF 00:17:08.830 )") 00:17:08.830 12:45:07 -- nvmf/common.sh@543 -- # cat 00:17:08.830 
12:45:07 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:08.830 12:45:07 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:08.830 { 00:17:08.830 "params": { 00:17:08.830 "name": "Nvme$subsystem", 00:17:08.830 "trtype": "$TEST_TRANSPORT", 00:17:08.830 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:08.830 "adrfam": "ipv4", 00:17:08.830 "trsvcid": "$NVMF_PORT", 00:17:08.830 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:08.830 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:08.830 "hdgst": ${hdgst:-false}, 00:17:08.830 "ddgst": ${ddgst:-false} 00:17:08.830 }, 00:17:08.830 "method": "bdev_nvme_attach_controller" 00:17:08.830 } 00:17:08.830 EOF 00:17:08.830 )") 00:17:08.830 12:45:07 -- nvmf/common.sh@543 -- # cat 00:17:08.830 12:45:07 -- nvmf/common.sh@545 -- # jq . 00:17:08.830 12:45:07 -- nvmf/common.sh@546 -- # IFS=, 00:17:08.830 12:45:07 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:17:08.830 "params": { 00:17:08.830 "name": "Nvme1", 00:17:08.830 "trtype": "tcp", 00:17:08.830 "traddr": "10.0.0.2", 00:17:08.830 "adrfam": "ipv4", 00:17:08.830 "trsvcid": "4420", 00:17:08.830 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:08.830 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:08.830 "hdgst": false, 00:17:08.830 "ddgst": false 00:17:08.830 }, 00:17:08.830 "method": "bdev_nvme_attach_controller" 00:17:08.830 },{ 00:17:08.830 "params": { 00:17:08.830 "name": "Nvme2", 00:17:08.830 "trtype": "tcp", 00:17:08.830 "traddr": "10.0.0.2", 00:17:08.830 "adrfam": "ipv4", 00:17:08.830 "trsvcid": "4420", 00:17:08.830 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:17:08.830 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:17:08.830 "hdgst": false, 00:17:08.830 "ddgst": false 00:17:08.830 }, 00:17:08.830 "method": "bdev_nvme_attach_controller" 00:17:08.830 },{ 00:17:08.830 "params": { 00:17:08.830 "name": "Nvme3", 00:17:08.830 "trtype": "tcp", 00:17:08.830 "traddr": "10.0.0.2", 00:17:08.830 "adrfam": "ipv4", 00:17:08.830 "trsvcid": "4420", 00:17:08.830 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:17:08.830 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:17:08.830 "hdgst": false, 00:17:08.830 "ddgst": false 00:17:08.830 }, 00:17:08.830 "method": "bdev_nvme_attach_controller" 00:17:08.830 },{ 00:17:08.830 "params": { 00:17:08.830 "name": "Nvme4", 00:17:08.830 "trtype": "tcp", 00:17:08.830 "traddr": "10.0.0.2", 00:17:08.830 "adrfam": "ipv4", 00:17:08.830 "trsvcid": "4420", 00:17:08.830 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:17:08.830 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:17:08.830 "hdgst": false, 00:17:08.830 "ddgst": false 00:17:08.830 }, 00:17:08.830 "method": "bdev_nvme_attach_controller" 00:17:08.830 },{ 00:17:08.830 "params": { 00:17:08.830 "name": "Nvme5", 00:17:08.830 "trtype": "tcp", 00:17:08.830 "traddr": "10.0.0.2", 00:17:08.830 "adrfam": "ipv4", 00:17:08.830 "trsvcid": "4420", 00:17:08.830 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:17:08.830 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:17:08.830 "hdgst": false, 00:17:08.830 "ddgst": false 00:17:08.830 }, 00:17:08.830 "method": "bdev_nvme_attach_controller" 00:17:08.830 },{ 00:17:08.830 "params": { 00:17:08.830 "name": "Nvme6", 00:17:08.830 "trtype": "tcp", 00:17:08.830 "traddr": "10.0.0.2", 00:17:08.830 "adrfam": "ipv4", 00:17:08.830 "trsvcid": "4420", 00:17:08.830 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:17:08.830 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:17:08.830 "hdgst": false, 00:17:08.830 "ddgst": false 00:17:08.830 }, 00:17:08.830 "method": "bdev_nvme_attach_controller" 00:17:08.830 },{ 00:17:08.830 "params": { 00:17:08.830 
"name": "Nvme7", 00:17:08.830 "trtype": "tcp", 00:17:08.830 "traddr": "10.0.0.2", 00:17:08.830 "adrfam": "ipv4", 00:17:08.830 "trsvcid": "4420", 00:17:08.831 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:17:08.831 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:17:08.831 "hdgst": false, 00:17:08.831 "ddgst": false 00:17:08.831 }, 00:17:08.831 "method": "bdev_nvme_attach_controller" 00:17:08.831 },{ 00:17:08.831 "params": { 00:17:08.831 "name": "Nvme8", 00:17:08.831 "trtype": "tcp", 00:17:08.831 "traddr": "10.0.0.2", 00:17:08.831 "adrfam": "ipv4", 00:17:08.831 "trsvcid": "4420", 00:17:08.831 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:17:08.831 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:17:08.831 "hdgst": false, 00:17:08.831 "ddgst": false 00:17:08.831 }, 00:17:08.831 "method": "bdev_nvme_attach_controller" 00:17:08.831 },{ 00:17:08.831 "params": { 00:17:08.831 "name": "Nvme9", 00:17:08.831 "trtype": "tcp", 00:17:08.831 "traddr": "10.0.0.2", 00:17:08.831 "adrfam": "ipv4", 00:17:08.831 "trsvcid": "4420", 00:17:08.831 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:17:08.831 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:17:08.831 "hdgst": false, 00:17:08.831 "ddgst": false 00:17:08.831 }, 00:17:08.831 "method": "bdev_nvme_attach_controller" 00:17:08.831 },{ 00:17:08.831 "params": { 00:17:08.831 "name": "Nvme10", 00:17:08.831 "trtype": "tcp", 00:17:08.831 "traddr": "10.0.0.2", 00:17:08.831 "adrfam": "ipv4", 00:17:08.831 "trsvcid": "4420", 00:17:08.831 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:17:08.831 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:17:08.831 "hdgst": false, 00:17:08.831 "ddgst": false 00:17:08.831 }, 00:17:08.831 "method": "bdev_nvme_attach_controller" 00:17:08.831 }' 00:17:08.831 [2024-04-16 12:45:07.745692] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 00:17:08.831 [2024-04-16 12:45:07.745771] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1207418 ] 00:17:08.831 EAL: No free 2048 kB hugepages reported on node 1 00:17:08.831 [2024-04-16 12:45:07.818718] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:09.089 [2024-04-16 12:45:07.929184] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:10.988 Running I/O for 10 seconds... 
00:17:10.988 12:45:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:10.988 12:45:09 -- common/autotest_common.sh@850 -- # return 0 00:17:10.988 12:45:09 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:17:10.988 12:45:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:10.988 12:45:09 -- common/autotest_common.sh@10 -- # set +x 00:17:10.988 12:45:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:10.988 12:45:09 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:17:10.988 12:45:09 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:17:10.988 12:45:09 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:17:10.988 12:45:09 -- target/shutdown.sh@57 -- # local ret=1 00:17:10.988 12:45:09 -- target/shutdown.sh@58 -- # local i 00:17:10.988 12:45:09 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:17:10.988 12:45:09 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:17:10.988 12:45:09 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:17:10.988 12:45:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:10.988 12:45:09 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:17:10.988 12:45:09 -- common/autotest_common.sh@10 -- # set +x 00:17:10.988 12:45:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:10.988 12:45:09 -- target/shutdown.sh@60 -- # read_io_count=3 00:17:10.988 12:45:09 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:17:10.988 12:45:09 -- target/shutdown.sh@67 -- # sleep 0.25 00:17:11.246 12:45:10 -- target/shutdown.sh@59 -- # (( i-- )) 00:17:11.246 12:45:10 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:17:11.246 12:45:10 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:17:11.246 12:45:10 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:17:11.246 12:45:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:11.246 12:45:10 -- common/autotest_common.sh@10 -- # set +x 00:17:11.246 12:45:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:11.246 12:45:10 -- target/shutdown.sh@60 -- # read_io_count=72 00:17:11.246 12:45:10 -- target/shutdown.sh@63 -- # '[' 72 -ge 100 ']' 00:17:11.246 12:45:10 -- target/shutdown.sh@67 -- # sleep 0.25 00:17:11.504 12:45:10 -- target/shutdown.sh@59 -- # (( i-- )) 00:17:11.504 12:45:10 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:17:11.504 12:45:10 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:17:11.504 12:45:10 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:17:11.504 12:45:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:11.504 12:45:10 -- common/autotest_common.sh@10 -- # set +x 00:17:11.504 12:45:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:11.504 12:45:10 -- target/shutdown.sh@60 -- # read_io_count=140 00:17:11.504 12:45:10 -- target/shutdown.sh@63 -- # '[' 140 -ge 100 ']' 00:17:11.504 12:45:10 -- target/shutdown.sh@64 -- # ret=0 00:17:11.504 12:45:10 -- target/shutdown.sh@65 -- # break 00:17:11.504 12:45:10 -- target/shutdown.sh@69 -- # return 0 00:17:11.504 12:45:10 -- target/shutdown.sh@110 -- # killprocess 1207418 00:17:11.504 12:45:10 -- common/autotest_common.sh@936 -- # '[' -z 1207418 ']' 00:17:11.504 12:45:10 -- common/autotest_common.sh@940 -- # kill -0 1207418 00:17:11.504 12:45:10 -- common/autotest_common.sh@941 -- # uname 00:17:11.504 12:45:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 
00:17:11.504 12:45:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1207418
00:17:11.504 12:45:10 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:17:11.504 12:45:10 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:17:11.504 12:45:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1207418'
killing process with pid 1207418
00:17:11.504 12:45:10 -- common/autotest_common.sh@955 -- # kill 1207418
00:17:11.505 12:45:10 -- common/autotest_common.sh@960 -- # wait 1207418
00:17:11.762 Received shutdown signal, test time was about 0.966153 seconds
00:17:11.762
00:17:11.762                                                                                    Latency(us)
00:17:11.762 Device Information   : runtime(s)     IOPS    MiB/s   Fail/s    TO/s     Average        min        max
00:17:11.762 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:11.762 Verification LBA range: start 0x0 length 0x400
00:17:11.762 Nvme1n1              :       0.96   266.08    16.63     0.00    0.00   235840.85   20486.07  219035.88
00:17:11.762 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:11.762 Verification LBA range: start 0x0 length 0x400
00:17:11.762 Nvme2n1              :       0.92   223.75    13.98     0.00    0.00   271908.09   11699.39  260978.92
00:17:11.762 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:11.762 Verification LBA range: start 0x0 length 0x400
00:17:11.762 Nvme3n1              :       0.96   267.10    16.69     0.00    0.00   227633.49   17864.63  259425.47
00:17:11.762 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:11.762 Verification LBA range: start 0x0 length 0x400
00:17:11.762 Nvme4n1              :       0.97   265.20    16.58     0.00    0.00   224760.23   18155.90  268746.15
00:17:11.762 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:11.762 Verification LBA range: start 0x0 length 0x400
00:17:11.762 Nvme5n1              :       0.94   204.02    12.75     0.00    0.00   285416.87   37088.52  250104.79
00:17:11.762 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:11.762 Verification LBA range: start 0x0 length 0x400
00:17:11.762 Nvme6n1              :       0.94   204.30    12.77     0.00    0.00   278862.25   22330.79  267192.70
00:17:11.762 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:11.762 Verification LBA range: start 0x0 length 0x400
00:17:11.762 Nvme7n1              :       0.92   208.39    13.02     0.00    0.00   267182.08   41943.04  250104.79
00:17:11.762 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:11.762 Verification LBA range: start 0x0 length 0x400
00:17:11.762 Nvme8n1              :       0.93   206.57    12.91     0.00    0.00   264063.56   25049.32  268746.15
00:17:11.762 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:11.762 Verification LBA range: start 0x0 length 0x400
00:17:11.762 Nvme9n1              :       0.95   202.25    12.64     0.00    0.00   264812.97   28932.93  271853.04
00:17:11.762 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:11.762 Verification LBA range: start 0x0 length 0x400
00:17:11.762 Nvme10n1             :       0.95   201.57    12.60     0.00    0.00   259870.72   19515.16  290494.39
00:17:11.762 ===================================================================================================================
00:17:11.762 Total                :            2249.24   140.58     0.00    0.00   255533.76   11699.39  290494.39
00:17:12.019 12:45:10 -- target/shutdown.sh@113 -- # sleep 1
00:17:12.950 12:45:11 -- target/shutdown.sh@114 -- # kill -0 1206899
00:17:12.950 12:45:11 -- target/shutdown.sh@116 -- # stoptarget
00:17:12.950 12:45:11 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:17:12.950 12:45:11 -- target/shutdown.sh@42
-- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:17:12.950 12:45:11 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:17:12.950 12:45:11 -- target/shutdown.sh@45 -- # nvmftestfini 00:17:12.950 12:45:11 -- nvmf/common.sh@477 -- # nvmfcleanup 00:17:12.950 12:45:11 -- nvmf/common.sh@117 -- # sync 00:17:12.950 12:45:11 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:12.950 12:45:11 -- nvmf/common.sh@120 -- # set +e 00:17:12.950 12:45:11 -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:12.950 12:45:11 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:12.950 rmmod nvme_tcp 00:17:12.950 rmmod nvme_fabrics 00:17:12.950 rmmod nvme_keyring 00:17:12.950 12:45:11 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:12.950 12:45:11 -- nvmf/common.sh@124 -- # set -e 00:17:12.950 12:45:11 -- nvmf/common.sh@125 -- # return 0 00:17:12.950 12:45:11 -- nvmf/common.sh@478 -- # '[' -n 1206899 ']' 00:17:12.951 12:45:11 -- nvmf/common.sh@479 -- # killprocess 1206899 00:17:12.951 12:45:11 -- common/autotest_common.sh@936 -- # '[' -z 1206899 ']' 00:17:12.951 12:45:11 -- common/autotest_common.sh@940 -- # kill -0 1206899 00:17:12.951 12:45:11 -- common/autotest_common.sh@941 -- # uname 00:17:12.951 12:45:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:12.951 12:45:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1206899 00:17:12.951 12:45:11 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:12.951 12:45:11 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:12.951 12:45:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1206899' 00:17:12.951 killing process with pid 1206899 00:17:12.951 12:45:11 -- common/autotest_common.sh@955 -- # kill 1206899 00:17:12.951 12:45:11 -- common/autotest_common.sh@960 -- # wait 1206899 00:17:13.516 12:45:12 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:17:13.516 12:45:12 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:17:13.516 12:45:12 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:17:13.516 12:45:12 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:13.516 12:45:12 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:13.516 12:45:12 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:13.516 12:45:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:13.516 12:45:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:16.049 12:45:14 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:16.049 00:17:16.049 real 0m8.602s 00:17:16.049 user 0m27.055s 00:17:16.049 sys 0m1.590s 00:17:16.049 12:45:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:16.049 12:45:14 -- common/autotest_common.sh@10 -- # set +x 00:17:16.049 ************************************ 00:17:16.049 END TEST nvmf_shutdown_tc2 00:17:16.049 ************************************ 00:17:16.049 12:45:14 -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:17:16.049 12:45:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:17:16.049 12:45:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:16.049 12:45:14 -- common/autotest_common.sh@10 -- # set +x 00:17:16.049 ************************************ 00:17:16.049 START TEST nvmf_shutdown_tc3 00:17:16.049 ************************************ 00:17:16.049 12:45:14 -- common/autotest_common.sh@1111 -- # nvmf_shutdown_tc3 
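Note on the teardown traced just above the START banner: nvmftestfini unwinds setup in reverse order: sync, unload the initiator-side kernel modules (the bare rmmod lines are modprobe -v echoing each dependent module as it drops: nvme_tcp, nvme_fabrics, nvme_keyring), kill and reap the target so port 4420 and the RPC socket free up, then remove the namespace and flush the leftover initiator address. Roughly, and assuming _remove_spdk_ns amounts to deleting cvl_0_0_ns_spdk (the helper itself is not expanded in this trace):

    sync
    modprobe -v -r nvme-tcp              # also pulls out nvme_fabrics and nvme_keyring
    modprobe -v -r nvme-fabrics          # second pass, usually a no-op by now
    kill "$nvmfpid" && wait "$nvmfpid"   # killprocess: pid 1206899 in this run
    ip netns del cvl_0_0_ns_spdk         # assumption about what _remove_spdk_ns does here
    ip -4 addr flush cvl_0_1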
00:17:16.049 12:45:14 -- target/shutdown.sh@121 -- # starttarget 00:17:16.049 12:45:14 -- target/shutdown.sh@15 -- # nvmftestinit 00:17:16.049 12:45:14 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:17:16.049 12:45:14 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:16.049 12:45:14 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:16.049 12:45:14 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:16.049 12:45:14 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:16.049 12:45:14 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:16.049 12:45:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:16.049 12:45:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:16.049 12:45:14 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:17:16.049 12:45:14 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:17:16.049 12:45:14 -- nvmf/common.sh@285 -- # xtrace_disable 00:17:16.049 12:45:14 -- common/autotest_common.sh@10 -- # set +x 00:17:16.049 12:45:14 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:16.049 12:45:14 -- nvmf/common.sh@291 -- # pci_devs=() 00:17:16.049 12:45:14 -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:16.049 12:45:14 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:16.049 12:45:14 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:16.049 12:45:14 -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:16.049 12:45:14 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:16.049 12:45:14 -- nvmf/common.sh@295 -- # net_devs=() 00:17:16.049 12:45:14 -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:16.049 12:45:14 -- nvmf/common.sh@296 -- # e810=() 00:17:16.049 12:45:14 -- nvmf/common.sh@296 -- # local -ga e810 00:17:16.049 12:45:14 -- nvmf/common.sh@297 -- # x722=() 00:17:16.049 12:45:14 -- nvmf/common.sh@297 -- # local -ga x722 00:17:16.049 12:45:14 -- nvmf/common.sh@298 -- # mlx=() 00:17:16.049 12:45:14 -- nvmf/common.sh@298 -- # local -ga mlx 00:17:16.049 12:45:14 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:16.049 12:45:14 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:16.049 12:45:14 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:16.049 12:45:14 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:16.049 12:45:14 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:16.049 12:45:14 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:16.049 12:45:14 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:16.049 12:45:14 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:16.049 12:45:14 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:16.049 12:45:14 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:16.049 12:45:14 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:16.049 12:45:14 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:16.049 12:45:14 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:16.049 12:45:14 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:16.049 12:45:14 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:16.049 12:45:14 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:16.049 12:45:14 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:16.049 12:45:14 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:16.049 12:45:14 -- nvmf/common.sh@341 -- # echo 
'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:17:16.049 Found 0000:82:00.0 (0x8086 - 0x159b) 00:17:16.049 12:45:14 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:16.049 12:45:14 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:16.049 12:45:14 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:16.049 12:45:14 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:16.049 12:45:14 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:16.049 12:45:14 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:16.049 12:45:14 -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:17:16.049 Found 0000:82:00.1 (0x8086 - 0x159b) 00:17:16.049 12:45:14 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:16.049 12:45:14 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:16.049 12:45:14 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:16.049 12:45:14 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:16.049 12:45:14 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:16.049 12:45:14 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:16.049 12:45:14 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:16.049 12:45:14 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:16.049 12:45:14 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:16.049 12:45:14 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:16.049 12:45:14 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:16.049 12:45:14 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:16.049 12:45:14 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:17:16.049 Found net devices under 0000:82:00.0: cvl_0_0 00:17:16.049 12:45:14 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:16.049 12:45:14 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:16.049 12:45:14 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:16.049 12:45:14 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:16.049 12:45:14 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:16.049 12:45:14 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:17:16.049 Found net devices under 0000:82:00.1: cvl_0_1 00:17:16.049 12:45:14 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:16.049 12:45:14 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:17:16.049 12:45:14 -- nvmf/common.sh@403 -- # is_hw=yes 00:17:16.049 12:45:14 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:17:16.049 12:45:14 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:17:16.049 12:45:14 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:17:16.049 12:45:14 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:16.049 12:45:14 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:16.049 12:45:14 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:16.049 12:45:14 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:16.049 12:45:14 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:16.049 12:45:14 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:16.049 12:45:14 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:16.049 12:45:14 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:16.049 12:45:14 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:16.049 12:45:14 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:16.049 12:45:14 -- nvmf/common.sh@245 -- # ip -4 addr 
flush cvl_0_1 00:17:16.049 12:45:14 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:16.049 12:45:14 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:16.049 12:45:14 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:16.049 12:45:14 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:16.049 12:45:14 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:16.049 12:45:14 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:16.049 12:45:14 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:16.049 12:45:14 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:16.049 12:45:14 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:16.049 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:16.050 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.181 ms 00:17:16.050 00:17:16.050 --- 10.0.0.2 ping statistics --- 00:17:16.050 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:16.050 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:17:16.050 12:45:14 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:16.050 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:16.050 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.108 ms 00:17:16.050 00:17:16.050 --- 10.0.0.1 ping statistics --- 00:17:16.050 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:16.050 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:17:16.050 12:45:14 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:16.050 12:45:14 -- nvmf/common.sh@411 -- # return 0 00:17:16.050 12:45:14 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:16.050 12:45:14 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:16.050 12:45:14 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:17:16.050 12:45:14 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:17:16.050 12:45:14 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:16.050 12:45:14 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:17:16.050 12:45:14 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:17:16.050 12:45:14 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:17:16.050 12:45:14 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:16.050 12:45:14 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:16.050 12:45:14 -- common/autotest_common.sh@10 -- # set +x 00:17:16.050 12:45:14 -- nvmf/common.sh@470 -- # nvmfpid=1208632 00:17:16.050 12:45:14 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:17:16.050 12:45:14 -- nvmf/common.sh@471 -- # waitforlisten 1208632 00:17:16.050 12:45:14 -- common/autotest_common.sh@817 -- # '[' -z 1208632 ']' 00:17:16.050 12:45:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:16.050 12:45:14 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:16.050 12:45:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:16.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
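Note on the target command line above: it carries three stacked `ip netns exec cvl_0_0_ns_spdk` prefixes. nvmf/common.sh@270 (visible earlier in the trace) prepends NVMF_TARGET_NS_CMD to NVMF_APP on every nvmftestinit, and tc3 reuses the shell state left behind by tc1 and tc2, so one prefix accumulates per test. It is harmless, since each exec merely re-enters the same namespace; the effective command is:

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E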
00:17:16.050 12:45:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:16.050 12:45:14 -- common/autotest_common.sh@10 -- # set +x 00:17:16.050 [2024-04-16 12:45:14.889332] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 00:17:16.050 [2024-04-16 12:45:14.889412] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:16.050 EAL: No free 2048 kB hugepages reported on node 1 00:17:16.050 [2024-04-16 12:45:14.966394] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:16.050 [2024-04-16 12:45:15.082756] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:16.050 [2024-04-16 12:45:15.082810] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:16.050 [2024-04-16 12:45:15.082825] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:16.050 [2024-04-16 12:45:15.082863] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:16.050 [2024-04-16 12:45:15.082873] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:16.050 [2024-04-16 12:45:15.082962] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:16.050 [2024-04-16 12:45:15.083085] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:16.050 [2024-04-16 12:45:15.083153] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:17:16.050 [2024-04-16 12:45:15.083156] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:16.991 12:45:15 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:16.991 12:45:15 -- common/autotest_common.sh@850 -- # return 0 00:17:16.991 12:45:15 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:16.991 12:45:15 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:16.991 12:45:15 -- common/autotest_common.sh@10 -- # set +x 00:17:16.991 12:45:15 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:16.991 12:45:15 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:16.991 12:45:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:16.991 12:45:15 -- common/autotest_common.sh@10 -- # set +x 00:17:16.991 [2024-04-16 12:45:15.900293] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:16.991 12:45:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:16.991 12:45:15 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:17:16.991 12:45:15 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:17:16.991 12:45:15 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:16.991 12:45:15 -- common/autotest_common.sh@10 -- # set +x 00:17:16.991 12:45:15 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:17:16.991 12:45:15 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:16.991 12:45:15 -- target/shutdown.sh@28 -- # cat 00:17:16.991 12:45:15 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:16.991 12:45:15 -- target/shutdown.sh@28 -- # cat 00:17:16.991 12:45:15 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:16.991 12:45:15 -- target/shutdown.sh@28 -- # cat 
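Note on the core layout above: "Total cores available: 4" follows directly from the -m 0x1E mask: 0x1E = 0b11110, so bits 1-4 are set and the target's reactors come up on cores 1-4 (the 2, 3, 4, 1 ordering is just thread startup timing). Core 0 is left free for bdevperf, which runs with -c 0x1. The same bit arithmetic in bash:

    mask=0x1E
    for b in {0..31}; do (( (mask >> b) & 1 )) && printf '%d ' "$b"; done; echo
    # prints: 1 2 3 4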
00:17:16.991 12:45:15 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:16.991 12:45:15 -- target/shutdown.sh@28 -- # cat 00:17:16.991 12:45:15 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:16.991 12:45:15 -- target/shutdown.sh@28 -- # cat 00:17:16.991 12:45:15 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:16.991 12:45:15 -- target/shutdown.sh@28 -- # cat 00:17:16.991 12:45:15 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:16.991 12:45:15 -- target/shutdown.sh@28 -- # cat 00:17:16.991 12:45:15 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:16.991 12:45:15 -- target/shutdown.sh@28 -- # cat 00:17:16.991 12:45:15 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:16.991 12:45:15 -- target/shutdown.sh@28 -- # cat 00:17:16.991 12:45:15 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:16.991 12:45:15 -- target/shutdown.sh@28 -- # cat 00:17:16.992 12:45:15 -- target/shutdown.sh@35 -- # rpc_cmd 00:17:16.992 12:45:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:16.992 12:45:15 -- common/autotest_common.sh@10 -- # set +x 00:17:16.992 Malloc1 00:17:16.992 [2024-04-16 12:45:15.989884] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:16.992 Malloc2 00:17:17.293 Malloc3 00:17:17.293 Malloc4 00:17:17.293 Malloc5 00:17:17.293 Malloc6 00:17:17.293 Malloc7 00:17:17.293 Malloc8 00:17:17.293 Malloc9 00:17:17.552 Malloc10 00:17:17.552 12:45:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:17.552 12:45:16 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:17:17.552 12:45:16 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:17.552 12:45:16 -- common/autotest_common.sh@10 -- # set +x 00:17:17.552 12:45:16 -- target/shutdown.sh@125 -- # perfpid=1208828 00:17:17.552 12:45:16 -- target/shutdown.sh@126 -- # waitforlisten 1208828 /var/tmp/bdevperf.sock 00:17:17.552 12:45:16 -- common/autotest_common.sh@817 -- # '[' -z 1208828 ']' 00:17:17.552 12:45:16 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:17.552 12:45:16 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:17:17.552 12:45:16 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:17:17.552 12:45:16 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:17.552 12:45:16 -- nvmf/common.sh@521 -- # config=() 00:17:17.552 12:45:16 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:17.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
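Note on the RPC sockets: two SPDK processes are now up, so two RPC sockets are in play. nvmf_tgt answers on the default /var/tmp/spdk.sock, while bdevperf was launched with -r /var/tmp/bdevperf.sock, so every perf-side call must name its socket with -s, which is why both framework_wait_init and the bdev_get_iostat polling in tc2 carried it. For example:

    rpc_cmd nvmf_get_subsystems                                    # target, default socket
    rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1   # bdevperf's socket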
00:17:17.552 12:45:16 -- nvmf/common.sh@521 -- # local subsystem config 00:17:17.552 12:45:16 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:17.552 12:45:16 -- common/autotest_common.sh@10 -- # set +x 00:17:17.552 12:45:16 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:17.552 12:45:16 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:17.552 { 00:17:17.552 "params": { 00:17:17.552 "name": "Nvme$subsystem", 00:17:17.552 "trtype": "$TEST_TRANSPORT", 00:17:17.552 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:17.552 "adrfam": "ipv4", 00:17:17.552 "trsvcid": "$NVMF_PORT", 00:17:17.552 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:17.552 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:17.552 "hdgst": ${hdgst:-false}, 00:17:17.552 "ddgst": ${ddgst:-false} 00:17:17.552 }, 00:17:17.552 "method": "bdev_nvme_attach_controller" 00:17:17.552 } 00:17:17.552 EOF 00:17:17.552 )") 00:17:17.552 12:45:16 -- nvmf/common.sh@543 -- # cat 00:17:17.552 12:45:16 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:17.552 12:45:16 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:17.552 { 00:17:17.552 "params": { 00:17:17.552 "name": "Nvme$subsystem", 00:17:17.552 "trtype": "$TEST_TRANSPORT", 00:17:17.552 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:17.552 "adrfam": "ipv4", 00:17:17.552 "trsvcid": "$NVMF_PORT", 00:17:17.552 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:17.552 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:17.552 "hdgst": ${hdgst:-false}, 00:17:17.552 "ddgst": ${ddgst:-false} 00:17:17.552 }, 00:17:17.552 "method": "bdev_nvme_attach_controller" 00:17:17.552 } 00:17:17.552 EOF 00:17:17.552 )") 00:17:17.552 12:45:16 -- nvmf/common.sh@543 -- # cat 00:17:17.552 12:45:16 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:17.552 12:45:16 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:17.552 { 00:17:17.552 "params": { 00:17:17.552 "name": "Nvme$subsystem", 00:17:17.552 "trtype": "$TEST_TRANSPORT", 00:17:17.552 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:17.552 "adrfam": "ipv4", 00:17:17.552 "trsvcid": "$NVMF_PORT", 00:17:17.552 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:17.552 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:17.552 "hdgst": ${hdgst:-false}, 00:17:17.552 "ddgst": ${ddgst:-false} 00:17:17.552 }, 00:17:17.552 "method": "bdev_nvme_attach_controller" 00:17:17.552 } 00:17:17.552 EOF 00:17:17.552 )") 00:17:17.552 12:45:16 -- nvmf/common.sh@543 -- # cat 00:17:17.552 12:45:16 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:17.552 12:45:16 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:17.552 { 00:17:17.552 "params": { 00:17:17.552 "name": "Nvme$subsystem", 00:17:17.552 "trtype": "$TEST_TRANSPORT", 00:17:17.552 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:17.552 "adrfam": "ipv4", 00:17:17.552 "trsvcid": "$NVMF_PORT", 00:17:17.552 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:17.552 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:17.552 "hdgst": ${hdgst:-false}, 00:17:17.552 "ddgst": ${ddgst:-false} 00:17:17.552 }, 00:17:17.552 "method": "bdev_nvme_attach_controller" 00:17:17.552 } 00:17:17.552 EOF 00:17:17.552 )") 00:17:17.552 12:45:16 -- nvmf/common.sh@543 -- # cat 00:17:17.552 12:45:16 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:17.552 12:45:16 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:17.552 { 00:17:17.552 "params": { 00:17:17.552 "name": "Nvme$subsystem", 00:17:17.552 "trtype": "$TEST_TRANSPORT", 00:17:17.552 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:17:17.552 "adrfam": "ipv4", 00:17:17.552 "trsvcid": "$NVMF_PORT", 00:17:17.552 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:17.552 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:17.552 "hdgst": ${hdgst:-false}, 00:17:17.552 "ddgst": ${ddgst:-false} 00:17:17.552 }, 00:17:17.552 "method": "bdev_nvme_attach_controller" 00:17:17.552 } 00:17:17.552 EOF 00:17:17.552 )") 00:17:17.552 12:45:16 -- nvmf/common.sh@543 -- # cat 00:17:17.552 12:45:16 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:17.552 12:45:16 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:17.552 { 00:17:17.552 "params": { 00:17:17.552 "name": "Nvme$subsystem", 00:17:17.552 "trtype": "$TEST_TRANSPORT", 00:17:17.552 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:17.552 "adrfam": "ipv4", 00:17:17.552 "trsvcid": "$NVMF_PORT", 00:17:17.552 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:17.552 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:17.552 "hdgst": ${hdgst:-false}, 00:17:17.552 "ddgst": ${ddgst:-false} 00:17:17.552 }, 00:17:17.552 "method": "bdev_nvme_attach_controller" 00:17:17.552 } 00:17:17.552 EOF 00:17:17.552 )") 00:17:17.552 12:45:16 -- nvmf/common.sh@543 -- # cat 00:17:17.552 12:45:16 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:17.552 12:45:16 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:17.552 { 00:17:17.552 "params": { 00:17:17.552 "name": "Nvme$subsystem", 00:17:17.552 "trtype": "$TEST_TRANSPORT", 00:17:17.552 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:17.552 "adrfam": "ipv4", 00:17:17.552 "trsvcid": "$NVMF_PORT", 00:17:17.552 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:17.552 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:17.552 "hdgst": ${hdgst:-false}, 00:17:17.552 "ddgst": ${ddgst:-false} 00:17:17.552 }, 00:17:17.552 "method": "bdev_nvme_attach_controller" 00:17:17.552 } 00:17:17.552 EOF 00:17:17.552 )") 00:17:17.552 12:45:16 -- nvmf/common.sh@543 -- # cat 00:17:17.552 12:45:16 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:17.552 12:45:16 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:17.552 { 00:17:17.552 "params": { 00:17:17.552 "name": "Nvme$subsystem", 00:17:17.552 "trtype": "$TEST_TRANSPORT", 00:17:17.552 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:17.552 "adrfam": "ipv4", 00:17:17.552 "trsvcid": "$NVMF_PORT", 00:17:17.552 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:17.552 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:17.552 "hdgst": ${hdgst:-false}, 00:17:17.552 "ddgst": ${ddgst:-false} 00:17:17.552 }, 00:17:17.552 "method": "bdev_nvme_attach_controller" 00:17:17.552 } 00:17:17.552 EOF 00:17:17.552 )") 00:17:17.552 12:45:16 -- nvmf/common.sh@543 -- # cat 00:17:17.552 12:45:16 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:17.552 12:45:16 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:17.552 { 00:17:17.552 "params": { 00:17:17.552 "name": "Nvme$subsystem", 00:17:17.552 "trtype": "$TEST_TRANSPORT", 00:17:17.552 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:17.552 "adrfam": "ipv4", 00:17:17.552 "trsvcid": "$NVMF_PORT", 00:17:17.552 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:17.552 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:17.552 "hdgst": ${hdgst:-false}, 00:17:17.552 "ddgst": ${ddgst:-false} 00:17:17.552 }, 00:17:17.552 "method": "bdev_nvme_attach_controller" 00:17:17.552 } 00:17:17.552 EOF 00:17:17.552 )") 00:17:17.552 12:45:16 -- nvmf/common.sh@543 -- # cat 00:17:17.552 12:45:16 -- nvmf/common.sh@523 -- # for 
subsystem in "${@:-1}" 00:17:17.552 12:45:16 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:17.552 { 00:17:17.552 "params": { 00:17:17.552 "name": "Nvme$subsystem", 00:17:17.553 "trtype": "$TEST_TRANSPORT", 00:17:17.553 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:17.553 "adrfam": "ipv4", 00:17:17.553 "trsvcid": "$NVMF_PORT", 00:17:17.553 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:17.553 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:17.553 "hdgst": ${hdgst:-false}, 00:17:17.553 "ddgst": ${ddgst:-false} 00:17:17.553 }, 00:17:17.553 "method": "bdev_nvme_attach_controller" 00:17:17.553 } 00:17:17.553 EOF 00:17:17.553 )") 00:17:17.553 12:45:16 -- nvmf/common.sh@543 -- # cat 00:17:17.553 12:45:16 -- nvmf/common.sh@545 -- # jq . 00:17:17.553 12:45:16 -- nvmf/common.sh@546 -- # IFS=, 00:17:17.553 12:45:16 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:17:17.553 "params": { 00:17:17.553 "name": "Nvme1", 00:17:17.553 "trtype": "tcp", 00:17:17.553 "traddr": "10.0.0.2", 00:17:17.553 "adrfam": "ipv4", 00:17:17.553 "trsvcid": "4420", 00:17:17.553 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:17.553 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:17.553 "hdgst": false, 00:17:17.553 "ddgst": false 00:17:17.553 }, 00:17:17.553 "method": "bdev_nvme_attach_controller" 00:17:17.553 },{ 00:17:17.553 "params": { 00:17:17.553 "name": "Nvme2", 00:17:17.553 "trtype": "tcp", 00:17:17.553 "traddr": "10.0.0.2", 00:17:17.553 "adrfam": "ipv4", 00:17:17.553 "trsvcid": "4420", 00:17:17.553 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:17:17.553 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:17:17.553 "hdgst": false, 00:17:17.553 "ddgst": false 00:17:17.553 }, 00:17:17.553 "method": "bdev_nvme_attach_controller" 00:17:17.553 },{ 00:17:17.553 "params": { 00:17:17.553 "name": "Nvme3", 00:17:17.553 "trtype": "tcp", 00:17:17.553 "traddr": "10.0.0.2", 00:17:17.553 "adrfam": "ipv4", 00:17:17.553 "trsvcid": "4420", 00:17:17.553 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:17:17.553 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:17:17.553 "hdgst": false, 00:17:17.553 "ddgst": false 00:17:17.553 }, 00:17:17.553 "method": "bdev_nvme_attach_controller" 00:17:17.553 },{ 00:17:17.553 "params": { 00:17:17.553 "name": "Nvme4", 00:17:17.553 "trtype": "tcp", 00:17:17.553 "traddr": "10.0.0.2", 00:17:17.553 "adrfam": "ipv4", 00:17:17.553 "trsvcid": "4420", 00:17:17.553 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:17:17.553 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:17:17.553 "hdgst": false, 00:17:17.553 "ddgst": false 00:17:17.553 }, 00:17:17.553 "method": "bdev_nvme_attach_controller" 00:17:17.553 },{ 00:17:17.553 "params": { 00:17:17.553 "name": "Nvme5", 00:17:17.553 "trtype": "tcp", 00:17:17.553 "traddr": "10.0.0.2", 00:17:17.553 "adrfam": "ipv4", 00:17:17.553 "trsvcid": "4420", 00:17:17.553 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:17:17.553 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:17:17.553 "hdgst": false, 00:17:17.553 "ddgst": false 00:17:17.553 }, 00:17:17.553 "method": "bdev_nvme_attach_controller" 00:17:17.553 },{ 00:17:17.553 "params": { 00:17:17.553 "name": "Nvme6", 00:17:17.553 "trtype": "tcp", 00:17:17.553 "traddr": "10.0.0.2", 00:17:17.553 "adrfam": "ipv4", 00:17:17.553 "trsvcid": "4420", 00:17:17.553 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:17:17.553 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:17:17.553 "hdgst": false, 00:17:17.553 "ddgst": false 00:17:17.553 }, 00:17:17.553 "method": "bdev_nvme_attach_controller" 00:17:17.553 },{ 00:17:17.553 "params": { 00:17:17.553 "name": "Nvme7", 00:17:17.553 "trtype": 
"tcp", 00:17:17.553 "traddr": "10.0.0.2", 00:17:17.553 "adrfam": "ipv4", 00:17:17.553 "trsvcid": "4420", 00:17:17.553 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:17:17.553 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:17:17.553 "hdgst": false, 00:17:17.553 "ddgst": false 00:17:17.553 }, 00:17:17.553 "method": "bdev_nvme_attach_controller" 00:17:17.553 },{ 00:17:17.553 "params": { 00:17:17.553 "name": "Nvme8", 00:17:17.553 "trtype": "tcp", 00:17:17.553 "traddr": "10.0.0.2", 00:17:17.553 "adrfam": "ipv4", 00:17:17.553 "trsvcid": "4420", 00:17:17.553 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:17:17.553 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:17:17.553 "hdgst": false, 00:17:17.553 "ddgst": false 00:17:17.553 }, 00:17:17.553 "method": "bdev_nvme_attach_controller" 00:17:17.553 },{ 00:17:17.553 "params": { 00:17:17.553 "name": "Nvme9", 00:17:17.553 "trtype": "tcp", 00:17:17.553 "traddr": "10.0.0.2", 00:17:17.553 "adrfam": "ipv4", 00:17:17.553 "trsvcid": "4420", 00:17:17.553 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:17:17.553 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:17:17.553 "hdgst": false, 00:17:17.553 "ddgst": false 00:17:17.553 }, 00:17:17.553 "method": "bdev_nvme_attach_controller" 00:17:17.553 },{ 00:17:17.553 "params": { 00:17:17.553 "name": "Nvme10", 00:17:17.553 "trtype": "tcp", 00:17:17.553 "traddr": "10.0.0.2", 00:17:17.553 "adrfam": "ipv4", 00:17:17.553 "trsvcid": "4420", 00:17:17.553 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:17:17.553 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:17:17.553 "hdgst": false, 00:17:17.553 "ddgst": false 00:17:17.553 }, 00:17:17.553 "method": "bdev_nvme_attach_controller" 00:17:17.553 }' 00:17:17.553 [2024-04-16 12:45:16.486313] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 00:17:17.553 [2024-04-16 12:45:16.486384] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1208828 ] 00:17:17.553 EAL: No free 2048 kB hugepages reported on node 1 00:17:17.553 [2024-04-16 12:45:16.561140] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:17.811 [2024-04-16 12:45:16.668440] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:19.710 Running I/O for 10 seconds... 
00:17:19.710 12:45:18 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:19.710 12:45:18 -- common/autotest_common.sh@850 -- # return 0 00:17:19.710 12:45:18 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:17:19.710 12:45:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:19.710 12:45:18 -- common/autotest_common.sh@10 -- # set +x 00:17:19.969 12:45:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:19.969 12:45:18 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:19.969 12:45:18 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:17:19.969 12:45:18 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:17:19.969 12:45:18 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:17:19.969 12:45:18 -- target/shutdown.sh@57 -- # local ret=1 00:17:19.969 12:45:18 -- target/shutdown.sh@58 -- # local i 00:17:19.969 12:45:18 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:17:19.969 12:45:18 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:17:19.969 12:45:18 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:17:19.969 12:45:18 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:17:19.969 12:45:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:19.969 12:45:18 -- common/autotest_common.sh@10 -- # set +x 00:17:19.969 12:45:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:19.969 12:45:18 -- target/shutdown.sh@60 -- # read_io_count=3 00:17:19.969 12:45:18 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:17:19.969 12:45:18 -- target/shutdown.sh@67 -- # sleep 0.25 00:17:20.239 12:45:19 -- target/shutdown.sh@59 -- # (( i-- )) 00:17:20.239 12:45:19 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:17:20.239 12:45:19 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:17:20.239 12:45:19 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:17:20.239 12:45:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:20.239 12:45:19 -- common/autotest_common.sh@10 -- # set +x 00:17:20.239 12:45:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:20.239 12:45:19 -- target/shutdown.sh@60 -- # read_io_count=67 00:17:20.239 12:45:19 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:17:20.239 12:45:19 -- target/shutdown.sh@67 -- # sleep 0.25 00:17:20.516 12:45:19 -- target/shutdown.sh@59 -- # (( i-- )) 00:17:20.516 12:45:19 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:17:20.516 12:45:19 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:17:20.516 12:45:19 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:17:20.516 12:45:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:20.516 12:45:19 -- common/autotest_common.sh@10 -- # set +x 00:17:20.516 12:45:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:20.516 12:45:19 -- target/shutdown.sh@60 -- # read_io_count=131 00:17:20.516 12:45:19 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:17:20.517 12:45:19 -- target/shutdown.sh@64 -- # ret=0 00:17:20.517 12:45:19 -- target/shutdown.sh@65 -- # break 00:17:20.517 12:45:19 -- target/shutdown.sh@69 -- # return 0 00:17:20.517 12:45:19 -- target/shutdown.sh@135 -- # killprocess 1208632 00:17:20.517 12:45:19 -- common/autotest_common.sh@936 -- # '[' -z 1208632 ']' 00:17:20.517 12:45:19 -- common/autotest_common.sh@940 -- # kill 
-0 1208632 00:17:20.517 12:45:19 -- common/autotest_common.sh@941 -- # uname 00:17:20.517 12:45:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:20.517 12:45:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1208632 00:17:20.517 12:45:19 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:20.517 12:45:19 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:20.517 12:45:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1208632' 00:17:20.517 killing process with pid 1208632 00:17:20.517 12:45:19 -- common/autotest_common.sh@955 -- # kill 1208632 00:17:20.517 12:45:19 -- common/autotest_common.sh@960 -- # wait 1208632
00:17:20.517 [2024-04-16 12:45:19.462786] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2155080 is same with the state(5) to be set
00:17:20.517 [2024-04-16 12:45:19.464085] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:17:20.517 [2024-04-16 12:45:19.464123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.517 [2024-04-16 12:45:19.464141] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:17:20.517 [2024-04-16 12:45:19.464155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.517 [2024-04-16 12:45:19.464168] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:17:20.517 [2024-04-16 12:45:19.464188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.517 [2024-04-16 12:45:19.464203] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:17:20.517 [2024-04-16 12:45:19.464216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.517 [2024-04-16 12:45:19.464229] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b08f0 is same with the state(5) to be set
00:17:20.517 [2024-04-16 12:45:19.465337] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170b50 is same with the state(5) to be set
00:17:20.518 [2024-04-16 12:45:19.470351] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2153ac0 is same with the state(5) to be set
00:17:20.519 [2024-04-16 12:45:19.472249] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2153f70 is same with the state(5) to be set
00:17:20.519 [2024-04-16 12:45:19.474432] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154400 is same with the state(5) to be set
with the state(5) to be set 00:17:20.520 [2024-04-16 12:45:19.474945] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154400 is same with the state(5) to be set 00:17:20.520 [2024-04-16 12:45:19.474958] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154400 is same with the state(5) to be set 00:17:20.520 [2024-04-16 12:45:19.474970] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154400 is same with the state(5) to be set 00:17:20.520 [2024-04-16 12:45:19.474982] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154400 is same with the state(5) to be set 00:17:20.520 [2024-04-16 12:45:19.474995] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154400 is same with the state(5) to be set 00:17:20.520 [2024-04-16 12:45:19.475007] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154400 is same with the state(5) to be set 00:17:20.520 [2024-04-16 12:45:19.475019] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154400 is same with the state(5) to be set 00:17:20.520 [2024-04-16 12:45:19.475031] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154400 is same with the state(5) to be set 00:17:20.520 [2024-04-16 12:45:19.475043] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154400 is same with the state(5) to be set 00:17:20.520 [2024-04-16 12:45:19.475055] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154400 is same with the state(5) to be set 00:17:20.520 [2024-04-16 12:45:19.475067] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154400 is same with the state(5) to be set 00:17:20.520 [2024-04-16 12:45:19.475080] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154400 is same with the state(5) to be set 00:17:20.520 [2024-04-16 12:45:19.475092] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154400 is same with the state(5) to be set 00:17:20.520 [2024-04-16 12:45:19.475104] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154400 is same with the state(5) to be set 00:17:20.520 [2024-04-16 12:45:19.475120] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154400 is same with the state(5) to be set 00:17:20.520 [2024-04-16 12:45:19.475132] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154400 is same with the state(5) to be set 00:17:20.520 [2024-04-16 12:45:19.475144] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154400 is same with the state(5) to be set 00:17:20.520 [2024-04-16 12:45:19.475157] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154400 is same with the state(5) to be set 00:17:20.520 [2024-04-16 12:45:19.475169] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154400 is same with the state(5) to be set 00:17:20.520 [2024-04-16 12:45:19.475181] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154400 is same with the state(5) to be set 00:17:20.520 [2024-04-16 12:45:19.475193] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154400 is same with the state(5) to be set 00:17:20.520 [2024-04-16 12:45:19.475206] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154400 is same with the state(5) to be set 00:17:20.520 [2024-04-16 12:45:19.475218] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154400 is same with the state(5) to be set 00:17:20.520 [2024-04-16 12:45:19.475230] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154400 is same with the state(5) to be set 00:17:20.520 [2024-04-16 12:45:19.475242] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154400 is same with the state(5) to be set 00:17:20.520 [2024-04-16 12:45:19.475254] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154400 is same with the state(5) to be set 00:17:20.520 [2024-04-16 12:45:19.476258] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154760 is same with the state(5) to be set 00:17:20.520 [2024-04-16 12:45:19.476289] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154760 is same with the state(5) to be set 00:17:20.520 [2024-04-16 12:45:19.476304] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154760 is same with the state(5) to be set 00:17:20.520 [2024-04-16 12:45:19.476317] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154760 is same with the state(5) to be set 00:17:20.520 [2024-04-16 12:45:19.476329] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154760 is same with the state(5) to be set 00:17:20.520 [2024-04-16 12:45:19.476341] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154760 is same with the state(5) to be set 00:17:20.520 [2024-04-16 12:45:19.476354] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154760 is same with the state(5) to be set 00:17:20.520 [2024-04-16 12:45:19.476367] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154760 is same with the state(5) to be set 00:17:20.520 [2024-04-16 12:45:19.476379] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154760 is same with the state(5) to be set 00:17:20.520 [2024-04-16 12:45:19.476392] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154760 is same with the state(5) to be set 00:17:20.520 [2024-04-16 12:45:19.476404] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154760 is same with the state(5) to be set 00:17:20.520 [2024-04-16 12:45:19.476416] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154760 is same with the state(5) to be set 00:17:20.520 [2024-04-16 12:45:19.476428] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154760 is same with the state(5) to be set 00:17:20.520 [2024-04-16 12:45:19.476441] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154760 is same with the state(5) to be set 00:17:20.520 [2024-04-16 12:45:19.476453] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154760 is same with the state(5) to be set 00:17:20.520 [2024-04-16 12:45:19.476471] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154760 is same with the state(5) to be set 00:17:20.520 [2024-04-16 12:45:19.476483] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154760 is same with the 
state(5) to be set 00:17:20.520 [2024-04-16 12:45:19.476495] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154760 is same with the state(5) to be set 00:17:20.520 [2024-04-16 12:45:19.476508] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154760 is same with the state(5) to be set 00:17:20.520 [2024-04-16 12:45:19.476520] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154760 is same with the state(5) to be set 00:17:20.520 [2024-04-16 12:45:19.476532] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154760 is same with the state(5) to be set 00:17:20.520 [2024-04-16 12:45:19.476544] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154760 is same with the state(5) to be set 00:17:20.520 [2024-04-16 12:45:19.476556] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154760 is same with the state(5) to be set 00:17:20.520 [2024-04-16 12:45:19.476577] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154760 is same with the state(5) to be set 00:17:20.520 [2024-04-16 12:45:19.476590] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154760 is same with the state(5) to be set 00:17:20.520 [2024-04-16 12:45:19.476608] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154760 is same with the state(5) to be set 00:17:20.520 [2024-04-16 12:45:19.476620] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154760 is same with the state(5) to be set 00:17:20.520 [2024-04-16 12:45:19.476632] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154760 is same with the state(5) to be set 00:17:20.520 [2024-04-16 12:45:19.476644] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154760 is same with the state(5) to be set 00:17:20.520 [2024-04-16 12:45:19.476656] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154760 is same with the state(5) to be set 00:17:20.520 [2024-04-16 12:45:19.476668] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154760 is same with the state(5) to be set 00:17:20.520 [2024-04-16 12:45:19.476680] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154760 is same with the state(5) to be set 00:17:20.521 [2024-04-16 12:45:19.476692] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154760 is same with the state(5) to be set 00:17:20.521 [2024-04-16 12:45:19.476705] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154760 is same with the state(5) to be set 00:17:20.521 [2024-04-16 12:45:19.476717] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154760 is same with the state(5) to be set 00:17:20.521 [2024-04-16 12:45:19.476729] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154760 is same with the state(5) to be set 00:17:20.521 [2024-04-16 12:45:19.476741] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154760 is same with the state(5) to be set 00:17:20.521 [2024-04-16 12:45:19.476753] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154760 is same with the state(5) to be set 00:17:20.521 [2024-04-16 12:45:19.476765] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x2154760 is same with the state(5) to be set 00:17:20.521 [2024-04-16 12:45:19.476777] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154760 is same with the state(5) to be set 00:17:20.521 [2024-04-16 12:45:19.476790] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154760 is same with the state(5) to be set 00:17:20.521 [2024-04-16 12:45:19.476802] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154760 is same with the state(5) to be set 00:17:20.521 [2024-04-16 12:45:19.476823] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154760 is same with the state(5) to be set 00:17:20.521 [2024-04-16 12:45:19.476835] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154760 is same with the state(5) to be set 00:17:20.521 [2024-04-16 12:45:19.476847] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154760 is same with the state(5) to be set 00:17:20.521 [2024-04-16 12:45:19.476859] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154760 is same with the state(5) to be set 00:17:20.521 [2024-04-16 12:45:19.476871] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154760 is same with the state(5) to be set 00:17:20.521 [2024-04-16 12:45:19.476883] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154760 is same with the state(5) to be set 00:17:20.521 [2024-04-16 12:45:19.476895] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154760 is same with the state(5) to be set 00:17:20.521 [2024-04-16 12:45:19.476907] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154760 is same with the state(5) to be set 00:17:20.521 [2024-04-16 12:45:19.476919] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154760 is same with the state(5) to be set 00:17:20.521 [2024-04-16 12:45:19.476931] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154760 is same with the state(5) to be set 00:17:20.521 [2024-04-16 12:45:19.476943] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154760 is same with the state(5) to be set 00:17:20.521 [2024-04-16 12:45:19.476955] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154760 is same with the state(5) to be set 00:17:20.521 [2024-04-16 12:45:19.476968] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154760 is same with the state(5) to be set 00:17:20.521 [2024-04-16 12:45:19.476981] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154760 is same with the state(5) to be set 00:17:20.521 [2024-04-16 12:45:19.476993] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154760 is same with the state(5) to be set 00:17:20.521 [2024-04-16 12:45:19.477005] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154760 is same with the state(5) to be set 00:17:20.521 [2024-04-16 12:45:19.477017] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154760 is same with the state(5) to be set 00:17:20.521 [2024-04-16 12:45:19.477029] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154760 is same with the state(5) to be set 00:17:20.521 [2024-04-16 
12:45:19.477042] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154760 is same with the state(5) to be set 00:17:20.521 [2024-04-16 12:45:19.477053] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154760 is same with the state(5) to be set 00:17:20.521 [2024-04-16 12:45:19.477065] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154760 is same with the state(5) to be set 00:17:20.521 [2024-04-16 12:45:19.477769] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154bf0 is same with the state(5) to be set 00:17:20.521 [2024-04-16 12:45:19.477795] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154bf0 is same with the state(5) to be set 00:17:20.521 [2024-04-16 12:45:19.477814] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154bf0 is same with the state(5) to be set 00:17:20.521 [2024-04-16 12:45:19.477827] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154bf0 is same with the state(5) to be set 00:17:20.521 [2024-04-16 12:45:19.477840] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154bf0 is same with the state(5) to be set 00:17:20.521 [2024-04-16 12:45:19.477852] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154bf0 is same with the state(5) to be set 00:17:20.521 [2024-04-16 12:45:19.477870] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154bf0 is same with the state(5) to be set 00:17:20.521 [2024-04-16 12:45:19.477883] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154bf0 is same with the state(5) to be set 00:17:20.521 [2024-04-16 12:45:19.477897] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154bf0 is same with the state(5) to be set 00:17:20.521 [2024-04-16 12:45:19.477909] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154bf0 is same with the state(5) to be set 00:17:20.521 [2024-04-16 12:45:19.477923] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154bf0 is same with the state(5) to be set 00:17:20.521 [2024-04-16 12:45:19.477935] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154bf0 is same with the state(5) to be set 00:17:20.521 [2024-04-16 12:45:19.477948] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154bf0 is same with the state(5) to be set 00:17:20.521 [2024-04-16 12:45:19.477960] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154bf0 is same with the state(5) to be set 00:17:20.521 [2024-04-16 12:45:19.477972] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154bf0 is same with the state(5) to be set 00:17:20.521 [2024-04-16 12:45:19.477985] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154bf0 is same with the state(5) to be set 00:17:20.521 [2024-04-16 12:45:19.477998] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154bf0 is same with the state(5) to be set 00:17:20.521 [2024-04-16 12:45:19.478012] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154bf0 is same with the state(5) to be set 00:17:20.521 [2024-04-16 12:45:19.478025] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154bf0 is same 
with the state(5) to be set 00:17:20.521 [2024-04-16 12:45:19.478037] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154bf0 is same with the state(5) to be set 00:17:20.521 [2024-04-16 12:45:19.478049] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154bf0 is same with the state(5) to be set 00:17:20.521 [2024-04-16 12:45:19.478062] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154bf0 is same with the state(5) to be set 00:17:20.521 [2024-04-16 12:45:19.478074] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154bf0 is same with the state(5) to be set 00:17:20.521 [2024-04-16 12:45:19.478096] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154bf0 is same with the state(5) to be set 00:17:20.521 [2024-04-16 12:45:19.478108] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154bf0 is same with the state(5) to be set 00:17:20.521 [2024-04-16 12:45:19.478121] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154bf0 is same with the state(5) to be set 00:17:20.521 [2024-04-16 12:45:19.478133] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154bf0 is same with the state(5) to be set 00:17:20.521 [2024-04-16 12:45:19.478145] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154bf0 is same with the state(5) to be set 00:17:20.521 [2024-04-16 12:45:19.478158] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154bf0 is same with the state(5) to be set 00:17:20.521 [2024-04-16 12:45:19.478170] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154bf0 is same with the state(5) to be set 00:17:20.521 [2024-04-16 12:45:19.478182] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154bf0 is same with the state(5) to be set 00:17:20.521 [2024-04-16 12:45:19.478194] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154bf0 is same with the state(5) to be set 00:17:20.521 [2024-04-16 12:45:19.478206] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154bf0 is same with the state(5) to be set 00:17:20.521 [2024-04-16 12:45:19.478218] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154bf0 is same with the state(5) to be set 00:17:20.521 [2024-04-16 12:45:19.478234] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154bf0 is same with the state(5) to be set 00:17:20.521 [2024-04-16 12:45:19.478246] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154bf0 is same with the state(5) to be set 00:17:20.521 [2024-04-16 12:45:19.478258] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154bf0 is same with the state(5) to be set 00:17:20.521 [2024-04-16 12:45:19.478271] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154bf0 is same with the state(5) to be set 00:17:20.521 [2024-04-16 12:45:19.478283] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154bf0 is same with the state(5) to be set 00:17:20.521 [2024-04-16 12:45:19.478295] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154bf0 is same with the state(5) to be set 00:17:20.521 [2024-04-16 12:45:19.478317] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154bf0 is same with the state(5) to be set 00:17:20.521 [2024-04-16 12:45:19.478330] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154bf0 is same with the state(5) to be set 00:17:20.521 [2024-04-16 12:45:19.478343] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154bf0 is same with the state(5) to be set 00:17:20.521 [2024-04-16 12:45:19.478355] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154bf0 is same with the state(5) to be set 00:17:20.521 [2024-04-16 12:45:19.478367] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154bf0 is same with the state(5) to be set 00:17:20.521 [2024-04-16 12:45:19.478379] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154bf0 is same with the state(5) to be set 00:17:20.521 [2024-04-16 12:45:19.478391] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154bf0 is same with the state(5) to be set 00:17:20.521 [2024-04-16 12:45:19.478403] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154bf0 is same with the state(5) to be set 00:17:20.521 [2024-04-16 12:45:19.478415] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154bf0 is same with the state(5) to be set 00:17:20.521 [2024-04-16 12:45:19.478428] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154bf0 is same with the state(5) to be set 00:17:20.521 [2024-04-16 12:45:19.478440] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154bf0 is same with the state(5) to be set 00:17:20.521 [2024-04-16 12:45:19.478452] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154bf0 is same with the state(5) to be set 00:17:20.521 [2024-04-16 12:45:19.478464] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154bf0 is same with the state(5) to be set 00:17:20.521 [2024-04-16 12:45:19.478476] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154bf0 is same with the state(5) to be set 00:17:20.521 [2024-04-16 12:45:19.478489] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154bf0 is same with the state(5) to be set 00:17:20.521 [2024-04-16 12:45:19.478501] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154bf0 is same with the state(5) to be set 00:17:20.521 [2024-04-16 12:45:19.478513] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154bf0 is same with the state(5) to be set 00:17:20.521 [2024-04-16 12:45:19.478525] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154bf0 is same with the state(5) to be set 00:17:20.521 [2024-04-16 12:45:19.478538] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154bf0 is same with the state(5) to be set 00:17:20.521 [2024-04-16 12:45:19.478550] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154bf0 is same with the state(5) to be set 00:17:20.522 [2024-04-16 12:45:19.478570] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154bf0 is same with the state(5) to be set 00:17:20.522 [2024-04-16 12:45:19.478589] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154bf0 is same with the 
state(5) to be set 00:17:20.522 [2024-04-16 12:45:19.478602] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2154bf0 is same with the state(5) to be set 00:17:20.522 [2024-04-16 12:45:19.482913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.522 [2024-04-16 12:45:19.482955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.522 [2024-04-16 12:45:19.482985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.522 [2024-04-16 12:45:19.483002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.522 [2024-04-16 12:45:19.483019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.522 [2024-04-16 12:45:19.483034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.522 [2024-04-16 12:45:19.483050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.522 [2024-04-16 12:45:19.483064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.522 [2024-04-16 12:45:19.483080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.522 [2024-04-16 12:45:19.483094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.522 [2024-04-16 12:45:19.483110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.522 [2024-04-16 12:45:19.483124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.522 [2024-04-16 12:45:19.483140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.522 [2024-04-16 12:45:19.483154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.522 [2024-04-16 12:45:19.483171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.522 [2024-04-16 12:45:19.483185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.522 [2024-04-16 12:45:19.483202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.522 [2024-04-16 12:45:19.483215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.522 [2024-04-16 12:45:19.483232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
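For context on the runs condensed above: tcp.c:1587 is the guard at the top of SPDK's nvmf_tcp_qpair_set_recv_state() (lib/nvmf/tcp.c), which logs when the qpair is asked to enter the PDU-receive state it already holds; here state 5 is evidently re-requested many times in a row while each qpair is torn down, producing one line per call, and each run ends when the tqpair pointer changes to the next qpair. Below is a minimal, self-contained C sketch of that guard shape; the struct, the bare int state, and the function name are simplified stand-ins for illustration, not the verbatim SPDK definitions.

#include <stdio.h>

/* Simplified stand-in for SPDK's nvmf_tcp_qpair_set_recv_state()
 * (lib/nvmf/tcp.c); the type and plain int state are assumptions
 * for illustration, not SPDK's actual definitions. */
struct tcp_qpair {
	int recv_state;
};

static void set_recv_state(struct tcp_qpair *tqpair, int state)
{
	if (tqpair->recv_state == state) {
		/* The branch behind the repeated *ERROR* line: the qpair is
		 * already in the requested state, so the call only logs. */
		fprintf(stderr,
		        "The recv state of tqpair=%p is same with the state(%d) to be set\n",
		        (void *)tqpair, state);
		return;
	}
	tqpair->recv_state = state;
}

int main(void)
{
	struct tcp_qpair q = { .recv_state = 5 };

	/* Requesting state 5 again while already in it reproduces the
	 * message once per call, hence the long runs in the log. */
	set_recv_state(&q, 5);
	set_recv_state(&q, 5);
	return 0;
}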
00:17:20.522 [2024-04-16 12:45:19.482913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.522 [2024-04-16 12:45:19.482955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... analogous WRITE command / ABORTED - SQ DELETION completion pairs repeated for cid:49 through cid:63, lba:30848 through lba:32640, timestamps 12:45:19.482985 through 12:45:19.483436 ...]
00:17:20.522 [2024-04-16 12:45:19.483452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.522 [2024-04-16 12:45:19.483465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... analogous READ command / ABORTED - SQ DELETION completion pairs repeated for cid:1 through cid:47, lba:24704 through lba:30592, timestamps 12:45:19.483482 through 12:45:19.484884 ...]
00:17:20.523 [2024-04-16 12:45:19.484985] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xcbc6c0 was disconnected and freed. reset controller.
00:17:20.523 [2024-04-16 12:45:19.485082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.523 [2024-04-16 12:45:19.485101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... analogous WRITE command / ABORTED - SQ DELETION completion pairs repeated for cid:15 through cid:48, lba:26496 through lba:30720, timestamps 12:45:19.485121 through 12:45:19.486136 ...]
00:17:20.524 [2024-04-16 12:45:19.486151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1
lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.524 [2024-04-16 12:45:19.486165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.524 [2024-04-16 12:45:19.486180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.524 [2024-04-16 12:45:19.486194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.524 [2024-04-16 12:45:19.486209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.524 [2024-04-16 12:45:19.486223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.524 [2024-04-16 12:45:19.486238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.524 [2024-04-16 12:45:19.486253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.524 [2024-04-16 12:45:19.486268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.524 [2024-04-16 12:45:19.486282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.524 [2024-04-16 12:45:19.486297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.524 [2024-04-16 12:45:19.486311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.524 [2024-04-16 12:45:19.486326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.524 [2024-04-16 12:45:19.486339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.524 [2024-04-16 12:45:19.486355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.524 [2024-04-16 12:45:19.486368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.524 [2024-04-16 12:45:19.486383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.524 [2024-04-16 12:45:19.486397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.524 [2024-04-16 12:45:19.486412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.524 [2024-04-16 12:45:19.486425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.524 [2024-04-16 12:45:19.486441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.524 [2024-04-16 12:45:19.486460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.524 [2024-04-16 12:45:19.486477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.524 [2024-04-16 12:45:19.486490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.524 [2024-04-16 12:45:19.486510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.524 [2024-04-16 12:45:19.486524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.524 [2024-04-16 12:45:19.486540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.524 [2024-04-16 12:45:19.486554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.524 [2024-04-16 12:45:19.486777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.524 [2024-04-16 12:45:19.486795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.524 [2024-04-16 12:45:19.486812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.524 [2024-04-16 12:45:19.486825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.524 [2024-04-16 12:45:19.486841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.524 [2024-04-16 12:45:19.486855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.524 [2024-04-16 12:45:19.486870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.524 [2024-04-16 12:45:19.486884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.524 [2024-04-16 12:45:19.486900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.524 [2024-04-16 12:45:19.486913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.524 [2024-04-16 12:45:19.486929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.524 [2024-04-16 12:45:19.486942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.524 [2024-04-16 12:45:19.486958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:17:20.524 [2024-04-16 12:45:19.486971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.524 [2024-04-16 12:45:19.486987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.524 [2024-04-16 12:45:19.487001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.524 [2024-04-16 12:45:19.487016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.524 [2024-04-16 12:45:19.487030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.524 [2024-04-16 12:45:19.487045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.524 [2024-04-16 12:45:19.487058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.524 [2024-04-16 12:45:19.487074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.524 [2024-04-16 12:45:19.487091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.524 [2024-04-16 12:45:19.487108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.524 [2024-04-16 12:45:19.487122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.524 [2024-04-16 12:45:19.487137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.524 [2024-04-16 12:45:19.487156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.524 [2024-04-16 12:45:19.487173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.524 [2024-04-16 12:45:19.487187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.524 [2024-04-16 12:45:19.487202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.524 [2024-04-16 12:45:19.487216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.524 [2024-04-16 12:45:19.487299] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xcbd1a0 was disconnected and freed. reset controller. 
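Editor's note: every aborted command above completes with status "(00/08)". In NVMe terms that is Status Code Type 0x0 (Generic Command Status) and Status Code 0x08 (Command Aborted due to SQ Deletion), the expected completion when the target deletes a submission queue while I/O is still outstanding, as happens here during the controller resets. Below is a minimal standalone sketch of how that status word decodes, assuming the NVMe base-spec completion-entry layout; it is illustrative C, not SPDK's spdk_nvme_print_completion.

```c
/* Decode the NVMe completion status the log prints as "(SCT/SC)",
 * e.g. "ABORTED - SQ DELETION (00/08)". Layout per the NVMe base spec:
 * completion DW3 carries the 15-bit Status Field in bits 31:17 and the
 * Phase Tag in bit 16. SCT 0x0 = Generic Command Status; SC 0x08 =
 * Command Aborted due to SQ Deletion. */
#include <stdint.h>
#include <stdio.h>

struct nvme_status {
    uint8_t sc;   /* status code      (SF bits 7:0)  */
    uint8_t sct;  /* status code type (SF bits 10:8) */
    uint8_t m;    /* more             (SF bit 13)    */
    uint8_t dnr;  /* do not retry     (SF bit 14)    */
    uint8_t p;    /* phase tag        (DW3 bit 16)   */
};

static struct nvme_status decode_cpl_dw3(uint32_t dw3)
{
    uint16_t sf = (uint16_t)(dw3 >> 17);   /* 15-bit status field */
    struct nvme_status s;

    s.sc  = (uint8_t)(sf & 0xff);
    s.sct = (uint8_t)((sf >> 8) & 0x7);
    s.m   = (uint8_t)((sf >> 13) & 0x1);
    s.dnr = (uint8_t)((sf >> 14) & 0x1);
    s.p   = (uint8_t)((dw3 >> 16) & 0x1);
    return s;
}

int main(void)
{
    uint32_t dw3 = 0x8u << 17;             /* sct=0, sc=8, p/m/dnr all 0 */
    struct nvme_status s = decode_cpl_dw3(dw3);

    /* Prints "(00/08) p:0 m:0 dnr:0", matching the log lines above. */
    printf("(%02x/%02x) p:%u m:%u dnr:%u\n", s.sct, s.sc, s.p, s.m, s.dnr);
    return 0;
}
```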
00:17:20.524 [2024-04-16 12:45:19.487858] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:17:20.525 [2024-04-16 12:45:19.487882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical command/completion pairs elided: ASYNC EVENT REQUEST (0c) qid:0 cid:0-3, each completed ABORTED - SQ DELETION (00/08), repeated for each of the admin qpairs listed below ...]
00:17:20.525 [2024-04-16 12:45:19.487983] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe06f20 is same with the state(5) to be set
00:17:20.525 [2024-04-16 12:45:19.488162] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf20e0 is same with the state(5) to be set
00:17:20.525 [2024-04-16 12:45:19.488323] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8c4c0 is same with the state(5) to be set
00:17:20.525 [2024-04-16 12:45:19.488354] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8b08f0 (9): Bad file descriptor
00:17:20.525 [2024-04-16 12:45:19.488521] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc6040 is same with the state(5) to be set
00:17:20.525 [2024-04-16 12:45:19.488689] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8d2d0 is same with the state(5) to be set
00:17:20.525 [2024-04-16 12:45:19.488856] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd07d0 is same with the state(5) to be set
00:17:20.525 [2024-04-16 12:45:19.489017] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd062e0 is same with the state(5) to be set
00:17:20.525 [2024-04-16 12:45:19.489184] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd1130 is same with the state(5) to be set
00:17:20.525 [2024-04-16 12:45:19.489347] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe04e60 is same with the state(5) to be set
00:17:20.525 [2024-04-16 12:45:19.489453]
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.525 [2024-04-16 12:45:19.489474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical command/completion pairs elided: WRITE sqid:1 cid:1-63 nsid:1 lba:24704-32640 (step 128) len:128, each completed ABORTED - SQ DELETION (00/08) ...]
00:17:20.527 [2024-04-16 12:45:19.491466] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xde2f60 was disconnected and freed. reset controller.
00:17:20.527 [2024-04-16 12:45:19.494226] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:17:20.527 [2024-04-16 12:45:19.494280] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:17:20.527 [2024-04-16 12:45:19.494308] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:17:20.527 [2024-04-16 12:45:19.494335] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcf20e0 (9): Bad file descriptor
00:17:20.527 [2024-04-16 12:45:19.494358] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcc6040 (9): Bad file descriptor
00:17:20.527 [2024-04-16 12:45:19.495933] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:17:20.527 [2024-04-16 12:45:19.496050] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:17:20.527 [2024-04-16 12:45:19.497134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:17:20.527 [2024-04-16 12:45:19.497337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:17:20.527 [2024-04-16 12:45:19.497364] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6040 with addr=10.0.0.2, port=4420
00:17:20.527 [2024-04-16 12:45:19.497382] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc6040 is same with the state(5) to be set
00:17:20.527 [2024-04-16 12:45:19.497541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:17:20.527 [2024-04-16 12:45:19.497707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:17:20.527 [2024-04-16 12:45:19.497732] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcf20e0 with addr=10.0.0.2, port=4420
00:17:20.527 [2024-04-16 12:45:19.497748] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf20e0 is same with the state(5) to be set
00:17:20.527 [2024-04-16 12:45:19.497899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:17:20.527 [2024-04-16 12:45:19.498078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:17:20.527 [2024-04-16 12:45:19.498103] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8b08f0 with addr=10.0.0.2, port=4420
00:17:20.527 [2024-04-16 12:45:19.498118] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b08f0 is same with the state(5) to be set
[... 5 entries elided, 12:45:19.498482-498787: nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 ...]
[... 10 entries elided, 12:45:19.498827-499083: nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcc6040, 0xcf20e0, 0x8b08f0, 0xe06f20, 0xe8c4c0, 0xe8d2d0, 0xcd07d0, 0xd062e0, 0xcd1130, 0xe04e60 (9): Bad file descriptor ...]
00:17:20.527 [2024-04-16 12:45:19.499265] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state
00:17:20.527 [2024-04-16 12:45:19.499289] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed
00:17:20.527 [2024-04-16 12:45:19.499305] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state.
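Editor's note: the reconnect attempts above fail inside posix_sock_create with errno = 111, which on Linux is ECONNREFUSED: nothing is listening on 10.0.0.2:4420 (4420 is the IANA-assigned NVMe/TCP port) while the target is being torn down. The later "(9): Bad file descriptor" flush errors are EBADF on sockets that were already closed. A standalone sketch reproducing both errno values; it is not SPDK's posix.c, and it assumes nothing is listening on 127.0.0.1:4420 when run.

```c
/* Reproduce the two errno values seen in the log: ECONNREFUSED (111)
 * from connect() when no listener exists on the port, and EBADF (9)
 * from I/O on a socket that was already closed. */
#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = { .sin_family = AF_INET };

    addr.sin_port = htons(4420);              /* NVMe/TCP well-known port */
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        /* "connect() failed, errno = 111 (Connection refused)" */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    if (write(fd, "x", 1) < 0) {              /* use after close() */
        /* "write() failed, errno = 9 (Bad file descriptor)" */
        printf("write() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    return 0;
}
```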
00:17:20.527 [2024-04-16 12:45:19.499326] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state
00:17:20.527 [2024-04-16 12:45:19.499340] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed
00:17:20.527 [2024-04-16 12:45:19.499358] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state.
00:17:20.527 [2024-04-16 12:45:19.499377] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:17:20.527 [2024-04-16 12:45:19.499390] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:17:20.527 [2024-04-16 12:45:19.499403] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:17:20.527 [2024-04-16 12:45:19.499470] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:17:20.527 [2024-04-16 12:45:19.499490] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:17:20.527 [2024-04-16 12:45:19.499502] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:17:20.527 [2024-04-16 12:45:19.506121] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:17:20.527 [2024-04-16 12:45:19.506242] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:17:20.527 [2024-04-16 12:45:19.506265] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
[... 12 entries elided, 12:45:19.506513-507503: for each of tqpair=0x8b08f0, 0xcf20e0, 0xcc6040, two posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111, one nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error with addr=10.0.0.2, port=4420, and one nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state is same with the state(5) to be set ...]
00:17:20.527 [2024-04-16 12:45:19.507526] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8b08f0 (9): Bad file descriptor
00:17:20.527 [2024-04-16 12:45:19.507594] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcf20e0 (9): Bad file descriptor
00:17:20.527 [2024-04-16 12:45:19.507618] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcc6040 (9): Bad file descriptor
00:17:20.527 [2024-04-16 12:45:19.507634] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:17:20.527 [2024-04-16 12:45:19.507648] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:17:20.527 [2024-04-16 12:45:19.507663] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:17:20.527 [2024-04-16 12:45:19.507739] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:17:20.527 [2024-04-16 12:45:19.507761] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state
00:17:20.527 [2024-04-16 12:45:19.507775] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed
00:17:20.527 [2024-04-16 12:45:19.507799] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state.
00:17:20.527 [2024-04-16 12:45:19.507819] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state
00:17:20.527 [2024-04-16 12:45:19.507832] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed
00:17:20.527 [2024-04-16 12:45:19.507844] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state.
00:17:20.527 [2024-04-16 12:45:19.507899] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:17:20.527 [2024-04-16 12:45:19.507916] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:17:20.527 [2024-04-16 12:45:19.509023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.527 [2024-04-16 12:45:19.509049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.527 [2024-04-16 12:45:19.509080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.527 [2024-04-16 12:45:19.509095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.527 [2024-04-16 12:45:19.509112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.527 [2024-04-16 12:45:19.509126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.527 [2024-04-16 12:45:19.509141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.527 [2024-04-16 12:45:19.509155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.527 [2024-04-16 12:45:19.509171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.527 [2024-04-16 12:45:19.509184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.528 [2024-04-16 12:45:19.509200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.528 [2024-04-16 12:45:19.509213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.528 [2024-04-16 12:45:19.509229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.528 [2024-04-16 12:45:19.509242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.528 [2024-04-16 12:45:19.509258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.528 [2024-04-16 12:45:19.509272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.528 [2024-04-16 12:45:19.509287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.528 [2024-04-16 12:45:19.509301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.528 [2024-04-16 12:45:19.509316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.528 [2024-04-16 12:45:19.509330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.528 [2024-04-16 12:45:19.509346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.528 [2024-04-16 12:45:19.509364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.528 [2024-04-16 12:45:19.509381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.528 [2024-04-16 12:45:19.509394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.528 [2024-04-16 12:45:19.509410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.528 [2024-04-16 12:45:19.509424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.528 [2024-04-16 12:45:19.509439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.528 [2024-04-16 12:45:19.509453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.528 [2024-04-16 12:45:19.509468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.528 [2024-04-16 12:45:19.509482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.528 [2024-04-16 12:45:19.509497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.528 [2024-04-16 12:45:19.509510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.528 [2024-04-16 12:45:19.509525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.528 [2024-04-16 12:45:19.509539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.528 [2024-04-16 12:45:19.509555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.528 [2024-04-16 12:45:19.509576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.528 [2024-04-16 12:45:19.509593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.528 [2024-04-16 12:45:19.509607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.528 [2024-04-16 12:45:19.509623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.528 [2024-04-16 12:45:19.509637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.528 [2024-04-16 12:45:19.509652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.528 [2024-04-16 12:45:19.509666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.528 [2024-04-16 12:45:19.509681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.528 [2024-04-16 12:45:19.509695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.528 [2024-04-16 12:45:19.509712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.528 [2024-04-16 12:45:19.509726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.528 [2024-04-16 12:45:19.509745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.528 [2024-04-16 12:45:19.509760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.528 [2024-04-16 12:45:19.509777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.528 [2024-04-16 12:45:19.509791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.528 [2024-04-16 12:45:19.509807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.528 [2024-04-16 12:45:19.509821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.528 [2024-04-16 12:45:19.509837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.528 [2024-04-16 12:45:19.509850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.528 [2024-04-16 12:45:19.509866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.528 [2024-04-16 12:45:19.509880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.528 [2024-04-16 12:45:19.509896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.528 [2024-04-16 12:45:19.509909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.528 [2024-04-16 12:45:19.509925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.528 [2024-04-16 12:45:19.509939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.528 [2024-04-16 12:45:19.509954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.528 [2024-04-16 12:45:19.509968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.528 [2024-04-16 12:45:19.509983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.528 [2024-04-16 12:45:19.509997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.528 [2024-04-16 12:45:19.510013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.528 [2024-04-16 12:45:19.510027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.528 [2024-04-16 12:45:19.510044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.528 [2024-04-16 12:45:19.510059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.528 [2024-04-16 12:45:19.510074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.528 [2024-04-16 12:45:19.510088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.528 [2024-04-16 12:45:19.510104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.528 [2024-04-16 12:45:19.510121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.528 [2024-04-16 12:45:19.510137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.528 [2024-04-16 12:45:19.510151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.528 [2024-04-16 12:45:19.510166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.528 [2024-04-16 12:45:19.510180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.528 [2024-04-16 12:45:19.510196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.528 [2024-04-16 12:45:19.510210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.528 [2024-04-16 12:45:19.510225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.528 [2024-04-16 12:45:19.510239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.528 [2024-04-16 12:45:19.510255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.528 [2024-04-16 12:45:19.510270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.528 [2024-04-16 12:45:19.510285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.528 [2024-04-16 12:45:19.510299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.528 [2024-04-16 12:45:19.510314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.528 [2024-04-16 12:45:19.510328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.528 [2024-04-16 12:45:19.510344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.528 [2024-04-16 12:45:19.510357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.528 [2024-04-16 12:45:19.510373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.528 [2024-04-16 12:45:19.510387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.528 [2024-04-16 12:45:19.510402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.528 [2024-04-16 12:45:19.510416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.528 [2024-04-16 12:45:19.510432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.528 [2024-04-16 12:45:19.510446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.528 [2024-04-16 12:45:19.510461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.528 [2024-04-16 12:45:19.510475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.528 [2024-04-16 12:45:19.510494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.528 [2024-04-16 12:45:19.510508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.529 [2024-04-16 12:45:19.510525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.529 [2024-04-16 12:45:19.510538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.529 [2024-04-16 12:45:19.510554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.529 [2024-04-16 12:45:19.510574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.529 [2024-04-16 12:45:19.510592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.529 [2024-04-16 12:45:19.510606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.529 [2024-04-16 12:45:19.510622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.529 [2024-04-16 12:45:19.510635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.529 [2024-04-16 12:45:19.510651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.529 [2024-04-16 12:45:19.510665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.529 [2024-04-16 12:45:19.510681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.529 [2024-04-16 12:45:19.510695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.529 [2024-04-16 12:45:19.510711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.529 [2024-04-16 12:45:19.510725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.529 [2024-04-16 12:45:19.510740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.529 [2024-04-16 12:45:19.510754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.529 [2024-04-16 12:45:19.510770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.529 [2024-04-16 12:45:19.510784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.529 [2024-04-16 12:45:19.510800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.529 [2024-04-16 12:45:19.510813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.529 [2024-04-16 12:45:19.510829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.529 [2024-04-16 12:45:19.510843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.529 [2024-04-16 12:45:19.510859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.529 [2024-04-16 12:45:19.510876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.529 [2024-04-16 12:45:19.510892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.529 [2024-04-16 12:45:19.510906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.529 [2024-04-16 12:45:19.510922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.529 [2024-04-16 12:45:19.510935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.529 [2024-04-16 12:45:19.510951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.529 [2024-04-16 12:45:19.510965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.529 [2024-04-16 12:45:19.510979] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde40b0 is same with the state(5) to be set
00:17:20.529 [2024-04-16 12:45:19.512267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.529 [2024-04-16 12:45:19.512290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.529 [2024-04-16 12:45:19.512311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.529 [2024-04-16 12:45:19.512326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.529 [2024-04-16 12:45:19.512342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.529 [2024-04-16 12:45:19.512355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.529 [2024-04-16 12:45:19.512371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.529 [2024-04-16 12:45:19.512385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.529 [2024-04-16 12:45:19.512400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.529 [2024-04-16 12:45:19.512414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.529 [2024-04-16 12:45:19.512429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.529 [2024-04-16 12:45:19.512443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.529 [2024-04-16 12:45:19.512459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.529 [2024-04-16 12:45:19.512473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.529 [2024-04-16 12:45:19.512489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.529 [2024-04-16 12:45:19.512503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.529 [2024-04-16 12:45:19.512518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.529 [2024-04-16 12:45:19.512536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.529 [2024-04-16 12:45:19.512553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.529 [2024-04-16 12:45:19.512577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.529 [2024-04-16 12:45:19.512594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.529 [2024-04-16 12:45:19.512608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.529 [2024-04-16 12:45:19.512624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.529 [2024-04-16 12:45:19.512637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.529 [2024-04-16 12:45:19.512653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.529 [2024-04-16 12:45:19.512666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.529 [2024-04-16 12:45:19.512682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.529 [2024-04-16 12:45:19.512696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.529 [2024-04-16 12:45:19.512711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.529 [2024-04-16 12:45:19.512725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.529 [2024-04-16 12:45:19.512741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.529 [2024-04-16 12:45:19.512754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.529 [2024-04-16 12:45:19.512770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.529 [2024-04-16 12:45:19.512784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.529 [2024-04-16 12:45:19.512799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.529 [2024-04-16 12:45:19.512813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.529 [2024-04-16 12:45:19.512829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.529 [2024-04-16 12:45:19.512842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.529 [2024-04-16 12:45:19.512858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.529 [2024-04-16 12:45:19.512871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.529 [2024-04-16 12:45:19.512887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.529 [2024-04-16 12:45:19.512901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.529 [2024-04-16 12:45:19.512920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.529 [2024-04-16 12:45:19.512934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.529 [2024-04-16 12:45:19.512949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.529 [2024-04-16 12:45:19.512963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.529 [2024-04-16 12:45:19.512979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.529 [2024-04-16 12:45:19.512992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.529 [2024-04-16 12:45:19.513008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.529 [2024-04-16 12:45:19.513021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.529 [2024-04-16 12:45:19.513037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.529 [2024-04-16 12:45:19.513053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.529 [2024-04-16 12:45:19.513069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.529 [2024-04-16 12:45:19.513083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.529 [2024-04-16 12:45:19.513098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.529 [2024-04-16 12:45:19.513113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.529 [2024-04-16 12:45:19.513128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.529 [2024-04-16 12:45:19.513142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.529 [2024-04-16 12:45:19.513157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.529 [2024-04-16 12:45:19.513171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.529 [2024-04-16 12:45:19.513187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.529 [2024-04-16 12:45:19.513201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.529 [2024-04-16 12:45:19.513217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.529 [2024-04-16 12:45:19.513231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.529 [2024-04-16 12:45:19.513246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.529 [2024-04-16 12:45:19.513260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.529 [2024-04-16 12:45:19.513275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.530 [2024-04-16 12:45:19.513293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.530 [2024-04-16 12:45:19.513309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.530 [2024-04-16 12:45:19.513323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.530 [2024-04-16 12:45:19.513338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.530 [2024-04-16 12:45:19.513353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.530 [2024-04-16 12:45:19.513368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.530 [2024-04-16 12:45:19.513382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.530 [2024-04-16 12:45:19.513398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.530 [2024-04-16 12:45:19.513412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.530 [2024-04-16 12:45:19.513429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.530 [2024-04-16 12:45:19.513443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.530 [2024-04-16 12:45:19.513458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.530 [2024-04-16 12:45:19.513472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.530 [2024-04-16 12:45:19.513488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.530 [2024-04-16 12:45:19.513503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.530 [2024-04-16 12:45:19.513519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.530 [2024-04-16 12:45:19.513533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.530 [2024-04-16 12:45:19.513549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.530 [2024-04-16 12:45:19.513568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.530 [2024-04-16 12:45:19.513587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.530 [2024-04-16 12:45:19.513601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.530 [2024-04-16 12:45:19.513617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.530 [2024-04-16 12:45:19.513630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.530 [2024-04-16 12:45:19.513646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.530 [2024-04-16 12:45:19.513660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.530 [2024-04-16 12:45:19.513681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.530 [2024-04-16 12:45:19.513695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.530 [2024-04-16 12:45:19.513711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.530 [2024-04-16 12:45:19.513725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.530 [2024-04-16 12:45:19.513741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.530 [2024-04-16 12:45:19.513755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.530 [2024-04-16 12:45:19.513770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.530 [2024-04-16 12:45:19.513789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.530 [2024-04-16 12:45:19.513805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.530 [2024-04-16 12:45:19.513819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.530 [2024-04-16 12:45:19.513835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.530 [2024-04-16 12:45:19.513848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.530 [2024-04-16 12:45:19.513864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.530 [2024-04-16 12:45:19.513878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.530 [2024-04-16 12:45:19.513893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.530 [2024-04-16 12:45:19.513907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.530 [2024-04-16 12:45:19.513922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.530 [2024-04-16 12:45:19.513936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.530 [2024-04-16 12:45:19.513951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.530 [2024-04-16 12:45:19.513965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.530 [2024-04-16 12:45:19.513980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.530 [2024-04-16 12:45:19.513994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.530 [2024-04-16 12:45:19.514010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.530 [2024-04-16 12:45:19.514024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.530 [2024-04-16 12:45:19.514040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.530 [2024-04-16 12:45:19.514061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.530 [2024-04-16 12:45:19.514078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.530 [2024-04-16 12:45:19.514092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.530 [2024-04-16 12:45:19.514108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.530 [2024-04-16 12:45:19.514122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.530 [2024-04-16 12:45:19.514137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.530 [2024-04-16 12:45:19.514151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.530 [2024-04-16 12:45:19.514167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.530 [2024-04-16 12:45:19.514181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.530 [2024-04-16 12:45:19.514196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.530 [2024-04-16 12:45:19.514210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.530 [2024-04-16 12:45:19.514225] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbe5c0 is same with the state(5) to be set
00:17:20.530 [2024-04-16 12:45:19.515469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.530 [2024-04-16 12:45:19.515491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.530 [2024-04-16 12:45:19.515514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.530 [2024-04-16 12:45:19.515529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.530 [2024-04-16 12:45:19.515545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.530 [2024-04-16 12:45:19.515559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.530 [2024-04-16 12:45:19.515583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.530 [2024-04-16 12:45:19.515597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.530 [2024-04-16 12:45:19.515613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.530 [2024-04-16 12:45:19.515627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.530 [2024-04-16 12:45:19.515642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.530 [2024-04-16 12:45:19.515656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.530 [2024-04-16 12:45:19.515672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.530 [2024-04-16 12:45:19.515690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.530 [2024-04-16 12:45:19.515706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.530 [2024-04-16 12:45:19.515720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.530 [2024-04-16 12:45:19.515736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.530 [2024-04-16 12:45:19.515749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.530 [2024-04-16 12:45:19.515764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.530 [2024-04-16 12:45:19.515777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.530 [2024-04-16 12:45:19.515793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.530 [2024-04-16 12:45:19.515806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.530 [2024-04-16 12:45:19.515822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.530 [2024-04-16 12:45:19.515835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.530 [2024-04-16 12:45:19.515850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.530 [2024-04-16 12:45:19.515864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.530 [2024-04-16 12:45:19.515879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.530 [2024-04-16 12:45:19.515893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.530 [2024-04-16 12:45:19.515908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.530 [2024-04-16 12:45:19.515922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.530 [2024-04-16 12:45:19.515937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.530 [2024-04-16 12:45:19.515951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.530 [2024-04-16 12:45:19.515967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.530 [2024-04-16 12:45:19.515980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.530 [2024-04-16 12:45:19.515995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.530 [2024-04-16 12:45:19.516009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.530 [2024-04-16 12:45:19.516024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.531 [2024-04-16 12:45:19.516037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.531 [2024-04-16 12:45:19.516056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.531 [2024-04-16 12:45:19.516070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.531 [2024-04-16 12:45:19.516086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.531 [2024-04-16 12:45:19.516099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.531 [2024-04-16 12:45:19.516115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.531 [2024-04-16 12:45:19.516128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.531 [2024-04-16 12:45:19.516143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.531 [2024-04-16 12:45:19.516157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.531 [2024-04-16 12:45:19.516172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.531 [2024-04-16 12:45:19.516186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.531 [2024-04-16 12:45:19.516201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.531 [2024-04-16 12:45:19.516215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.531 [2024-04-16 12:45:19.516230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.531 [2024-04-16 12:45:19.516244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.531 [2024-04-16 12:45:19.516259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.531 [2024-04-16 12:45:19.516273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.531 [2024-04-16 12:45:19.516288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.531 [2024-04-16 12:45:19.516302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.531 [2024-04-16 12:45:19.516317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.531 [2024-04-16 12:45:19.516331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.531 [2024-04-16 12:45:19.516346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.531 [2024-04-16 12:45:19.516361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.531 [2024-04-16 12:45:19.516376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.531 [2024-04-16 12:45:19.516390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.531 [2024-04-16 12:45:19.516405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.531 [2024-04-16 12:45:19.516422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.531 [2024-04-16 12:45:19.516438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.531 [2024-04-16 12:45:19.516451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.531 [2024-04-16 12:45:19.516467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.531 [2024-04-16 12:45:19.516481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.531 [2024-04-16 12:45:19.516496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.531 [2024-04-16 12:45:19.516509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.531 [2024-04-16 12:45:19.516525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.531 [2024-04-16 12:45:19.516539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.531 [2024-04-16 12:45:19.516554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.531 [2024-04-16 12:45:19.516573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.531 [2024-04-16 12:45:19.516590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.531 [2024-04-16 12:45:19.516604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.531 [2024-04-16 12:45:19.516620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.531 [2024-04-16 12:45:19.516633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.531 [2024-04-16 12:45:19.516648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.531 [2024-04-16 12:45:19.516662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.531 [2024-04-16 12:45:19.516677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.531 [2024-04-16 12:45:19.516691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.531 [2024-04-16 12:45:19.516706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.531 [2024-04-16 12:45:19.516720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.531 [2024-04-16 12:45:19.516736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.531 [2024-04-16 12:45:19.516750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.531 [2024-04-16 12:45:19.516765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.531 [2024-04-16 12:45:19.516779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.531 [2024-04-16 12:45:19.516794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.531 [2024-04-16 12:45:19.516811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.531 [2024-04-16 12:45:19.516827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.531 [2024-04-16 12:45:19.516842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.531 [2024-04-16 12:45:19.516858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.531 [2024-04-16 12:45:19.516872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.531 [2024-04-16 12:45:19.516887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.531 [2024-04-16 12:45:19.516901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.531 [2024-04-16 12:45:19.516916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.531 [2024-04-16 12:45:19.516930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.531 [2024-04-16 12:45:19.516946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.531 [2024-04-16 12:45:19.516959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.531 [2024-04-16 12:45:19.516975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.531 [2024-04-16 12:45:19.516988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:20.531 [2024-04-16 12:45:19.517004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:20.531 [2024-04-16 12:45:19.517017] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.531 [2024-04-16 12:45:19.517033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.531 [2024-04-16 12:45:19.517046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.531 [2024-04-16 12:45:19.517062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.531 [2024-04-16 12:45:19.517076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.531 [2024-04-16 12:45:19.517091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.531 [2024-04-16 12:45:19.517105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.531 [2024-04-16 12:45:19.517120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.531 [2024-04-16 12:45:19.517134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.531 [2024-04-16 12:45:19.517149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.531 [2024-04-16 12:45:19.517163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.531 [2024-04-16 12:45:19.517182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.531 [2024-04-16 12:45:19.517196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.531 [2024-04-16 12:45:19.517212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.531 [2024-04-16 12:45:19.517225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.531 [2024-04-16 12:45:19.517241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.531 [2024-04-16 12:45:19.517255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.531 [2024-04-16 12:45:19.517270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.531 [2024-04-16 12:45:19.517284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.531 [2024-04-16 12:45:19.517300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.531 [2024-04-16 12:45:19.517325] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.531 [2024-04-16 12:45:19.517341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.531 [2024-04-16 12:45:19.517355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.531 [2024-04-16 12:45:19.517371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.531 [2024-04-16 12:45:19.517385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.531 [2024-04-16 12:45:19.517400] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbfa30 is same with the state(5) to be set 00:17:20.531 [2024-04-16 12:45:19.518690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.531 [2024-04-16 12:45:19.518714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.531 [2024-04-16 12:45:19.518736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.531 [2024-04-16 12:45:19.518752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.531 [2024-04-16 12:45:19.518769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.531 [2024-04-16 12:45:19.518784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.531 [2024-04-16 12:45:19.518800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.532 [2024-04-16 12:45:19.518814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.532 [2024-04-16 12:45:19.518830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.532 [2024-04-16 12:45:19.518844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.532 [2024-04-16 12:45:19.518865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.532 [2024-04-16 12:45:19.518880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.532 [2024-04-16 12:45:19.518896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.532 [2024-04-16 12:45:19.518910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.532 [2024-04-16 12:45:19.518926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.532 [2024-04-16 12:45:19.518941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.532 [2024-04-16 12:45:19.518957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.532 [2024-04-16 12:45:19.518972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.532 [2024-04-16 12:45:19.518988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.532 [2024-04-16 12:45:19.519002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.532 [2024-04-16 12:45:19.519018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.532 [2024-04-16 12:45:19.519033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.532 [2024-04-16 12:45:19.519049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.532 [2024-04-16 12:45:19.519062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.532 [2024-04-16 12:45:19.519078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.532 [2024-04-16 12:45:19.519092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.532 [2024-04-16 12:45:19.519109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.532 [2024-04-16 12:45:19.519123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.532 [2024-04-16 12:45:19.519139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.532 [2024-04-16 12:45:19.519153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.532 [2024-04-16 12:45:19.519169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.532 [2024-04-16 12:45:19.519183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.532 [2024-04-16 12:45:19.519198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.532 [2024-04-16 12:45:19.519212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.532 [2024-04-16 12:45:19.519228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 
lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.532 [2024-04-16 12:45:19.519245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.532 [2024-04-16 12:45:19.519261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.532 [2024-04-16 12:45:19.519275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.532 [2024-04-16 12:45:19.519291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.532 [2024-04-16 12:45:19.519304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.532 [2024-04-16 12:45:19.519320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.532 [2024-04-16 12:45:19.519334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.532 [2024-04-16 12:45:19.519350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.532 [2024-04-16 12:45:19.519364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.532 [2024-04-16 12:45:19.519380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.532 [2024-04-16 12:45:19.519393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.532 [2024-04-16 12:45:19.519409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.532 [2024-04-16 12:45:19.519423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.532 [2024-04-16 12:45:19.519438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.532 [2024-04-16 12:45:19.519452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.532 [2024-04-16 12:45:19.519468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.532 [2024-04-16 12:45:19.519483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.532 [2024-04-16 12:45:19.519498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.532 [2024-04-16 12:45:19.519512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.532 [2024-04-16 12:45:19.519528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.532 [2024-04-16 12:45:19.519542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.532 [2024-04-16 12:45:19.519558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.532 [2024-04-16 12:45:19.519580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.532 [2024-04-16 12:45:19.519597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.532 [2024-04-16 12:45:19.519612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.532 [2024-04-16 12:45:19.519632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.532 [2024-04-16 12:45:19.519647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.532 [2024-04-16 12:45:19.519663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.532 [2024-04-16 12:45:19.519677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.532 [2024-04-16 12:45:19.519693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.532 [2024-04-16 12:45:19.519707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.532 [2024-04-16 12:45:19.519723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.532 [2024-04-16 12:45:19.519737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.532 [2024-04-16 12:45:19.519752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.532 [2024-04-16 12:45:19.519766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.532 [2024-04-16 12:45:19.519782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.532 [2024-04-16 12:45:19.519796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.532 [2024-04-16 12:45:19.519812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.532 [2024-04-16 12:45:19.519826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.532 [2024-04-16 12:45:19.519842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:17:20.532 [2024-04-16 12:45:19.519856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.532 [2024-04-16 12:45:19.519872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.532 [2024-04-16 12:45:19.519886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.532 [2024-04-16 12:45:19.519902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.532 [2024-04-16 12:45:19.519916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.532 [2024-04-16 12:45:19.519932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.532 [2024-04-16 12:45:19.519946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.532 [2024-04-16 12:45:19.519962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.532 [2024-04-16 12:45:19.519975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.532 [2024-04-16 12:45:19.519990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.532 [2024-04-16 12:45:19.520007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.532 [2024-04-16 12:45:19.520023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.532 [2024-04-16 12:45:19.520037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.532 [2024-04-16 12:45:19.520052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.532 [2024-04-16 12:45:19.520066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.532 [2024-04-16 12:45:19.520081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.532 [2024-04-16 12:45:19.520095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.532 [2024-04-16 12:45:19.520110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.532 [2024-04-16 12:45:19.520124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.532 [2024-04-16 12:45:19.520139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:20.532 [2024-04-16 12:45:19.520152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.533 [2024-04-16 12:45:19.520168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.533 [2024-04-16 12:45:19.520181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.533 [2024-04-16 12:45:19.520197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.533 [2024-04-16 12:45:19.520210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.533 [2024-04-16 12:45:19.520225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.533 [2024-04-16 12:45:19.520239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.533 [2024-04-16 12:45:19.520254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.533 [2024-04-16 12:45:19.520268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.533 [2024-04-16 12:45:19.520283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.533 [2024-04-16 12:45:19.520296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.533 [2024-04-16 12:45:19.520312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.533 [2024-04-16 12:45:19.520325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.533 [2024-04-16 12:45:19.520341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.533 [2024-04-16 12:45:19.520354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.533 [2024-04-16 12:45:19.520373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.533 [2024-04-16 12:45:19.520387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.533 [2024-04-16 12:45:19.520402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.533 [2024-04-16 12:45:19.520416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.533 [2024-04-16 12:45:19.520432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.533 [2024-04-16 
12:45:19.520445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.533 [2024-04-16 12:45:19.520460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.533 [2024-04-16 12:45:19.520474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.533 [2024-04-16 12:45:19.520490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.533 [2024-04-16 12:45:19.520503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.533 [2024-04-16 12:45:19.520518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.533 [2024-04-16 12:45:19.520532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.533 [2024-04-16 12:45:19.520548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.533 [2024-04-16 12:45:19.520568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.533 [2024-04-16 12:45:19.520586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.533 [2024-04-16 12:45:19.520600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.533 [2024-04-16 12:45:19.520615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.533 [2024-04-16 12:45:19.520629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.533 [2024-04-16 12:45:19.520643] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde59b0 is same with the state(5) to be set 00:17:20.533 [2024-04-16 12:45:19.521884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.533 [2024-04-16 12:45:19.521907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.533 [2024-04-16 12:45:19.521928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.533 [2024-04-16 12:45:19.521943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.533 [2024-04-16 12:45:19.521959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.533 [2024-04-16 12:45:19.521973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.533 [2024-04-16 12:45:19.521993] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.533 [2024-04-16 12:45:19.522008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.533 [2024-04-16 12:45:19.522023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.533 [2024-04-16 12:45:19.522037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.533 [2024-04-16 12:45:19.522052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.533 [2024-04-16 12:45:19.522065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.533 [2024-04-16 12:45:19.522081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.533 [2024-04-16 12:45:19.522095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.533 [2024-04-16 12:45:19.522110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.533 [2024-04-16 12:45:19.522124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.533 [2024-04-16 12:45:19.522139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.533 [2024-04-16 12:45:19.522153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.533 [2024-04-16 12:45:19.522168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.533 [2024-04-16 12:45:19.522181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.533 [2024-04-16 12:45:19.522197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.533 [2024-04-16 12:45:19.522210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.533 [2024-04-16 12:45:19.522226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.533 [2024-04-16 12:45:19.522240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.533 [2024-04-16 12:45:19.522255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.533 [2024-04-16 12:45:19.522269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.533 [2024-04-16 12:45:19.522284] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.533 [2024-04-16 12:45:19.522298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.533 [2024-04-16 12:45:19.522314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.533 [2024-04-16 12:45:19.522327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.533 [2024-04-16 12:45:19.522343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.533 [2024-04-16 12:45:19.522360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.533 [2024-04-16 12:45:19.522376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.533 [2024-04-16 12:45:19.522390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.533 [2024-04-16 12:45:19.522405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.533 [2024-04-16 12:45:19.522418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.533 [2024-04-16 12:45:19.522433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.533 [2024-04-16 12:45:19.522447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.533 [2024-04-16 12:45:19.522462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.533 [2024-04-16 12:45:19.522476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.533 [2024-04-16 12:45:19.522491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.533 [2024-04-16 12:45:19.522505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.533 [2024-04-16 12:45:19.522520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.533 [2024-04-16 12:45:19.522534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.533 [2024-04-16 12:45:19.522549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.533 [2024-04-16 12:45:19.522575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.533 [2024-04-16 12:45:19.522593] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.533 [2024-04-16 12:45:19.522607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.533 [2024-04-16 12:45:19.522623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.533 [2024-04-16 12:45:19.522637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.533 [2024-04-16 12:45:19.522652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.533 [2024-04-16 12:45:19.522666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.533 [2024-04-16 12:45:19.522681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.533 [2024-04-16 12:45:19.522695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.533 [2024-04-16 12:45:19.522710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.533 [2024-04-16 12:45:19.522724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.533 [2024-04-16 12:45:19.522745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.533 [2024-04-16 12:45:19.522760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.533 [2024-04-16 12:45:19.522775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.533 [2024-04-16 12:45:19.522789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.533 [2024-04-16 12:45:19.522804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.533 [2024-04-16 12:45:19.522818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.533 [2024-04-16 12:45:19.522834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.533 [2024-04-16 12:45:19.522847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.533 [2024-04-16 12:45:19.522862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.533 [2024-04-16 12:45:19.522876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.533 [2024-04-16 12:45:19.522891] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.533 [2024-04-16 12:45:19.522905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.533 [2024-04-16 12:45:19.522920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.533 [2024-04-16 12:45:19.522934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.533 [2024-04-16 12:45:19.522950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.533 [2024-04-16 12:45:19.522963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.534 [2024-04-16 12:45:19.522980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.534 [2024-04-16 12:45:19.522994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.534 [2024-04-16 12:45:19.523010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.534 [2024-04-16 12:45:19.523023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.534 [2024-04-16 12:45:19.523039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.534 [2024-04-16 12:45:19.523052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.534 [2024-04-16 12:45:19.523068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.534 [2024-04-16 12:45:19.523082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.534 [2024-04-16 12:45:19.523097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.534 [2024-04-16 12:45:19.523114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.534 [2024-04-16 12:45:19.523130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.534 [2024-04-16 12:45:19.523144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.534 [2024-04-16 12:45:19.523160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.534 [2024-04-16 12:45:19.523174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.534 [2024-04-16 12:45:19.523189] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.534 [2024-04-16 12:45:19.523203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.534 [2024-04-16 12:45:19.523219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.534 [2024-04-16 12:45:19.523233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.534 [2024-04-16 12:45:19.523249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.534 [2024-04-16 12:45:19.523262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.534 [2024-04-16 12:45:19.523278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.534 [2024-04-16 12:45:19.523292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.534 [2024-04-16 12:45:19.523307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.534 [2024-04-16 12:45:19.523321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.534 [2024-04-16 12:45:19.523336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.534 [2024-04-16 12:45:19.523350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.534 [2024-04-16 12:45:19.523366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.534 [2024-04-16 12:45:19.523380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.534 [2024-04-16 12:45:19.523396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.534 [2024-04-16 12:45:19.523409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.534 [2024-04-16 12:45:19.523425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.534 [2024-04-16 12:45:19.523439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.534 [2024-04-16 12:45:19.523454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.534 [2024-04-16 12:45:19.523468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.534 [2024-04-16 12:45:19.523491] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[log condensed: this READ / "ABORTED - SQ DELETION (00/08)" message pair repeats for the rest of the sweep (cid 54-63) and then for two further full cid 0-63 sweeps, one per TCP qpair being torn down; the READs cover nsid:1, lba 16384-24448 in steps of 128, len:128, SGL TRANSPORT DATA BLOCK, and every completion reads qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:17:20.534 [2024-04-16 12:45:19.523826] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde6e20 is same with the state(5) to be set
00:17:20.535 [2024-04-16 12:45:19.526987] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8b480 is same with the state(5) to be set
00:17:20.537 [2024-04-16 12:45:19.531220] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde8770 is same with the state(5) to be set
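[editor's note] The "(00/08)" in the aborted completions above is the NVMe status code type / status code pair: SCT 0x0 (generic command status) and SC 0x08 (Command Aborted due to SQ Deletion), which is what a target returns for reads still queued when their submission queue is deleted during a controller reset. A minimal C sketch of how that 16-bit status field decodes, assuming the bit layout from the NVMe base spec (SPDK's own equivalent is struct spdk_nvme_status in nvme_spec.h); the struct here is illustrative, not SPDK's API:

```c
#include <stdint.h>
#include <stdio.h>

/* NVMe completion status field (CQE dword 3, bits 31:17, with the
 * phase tag in bit 0 of this 16-bit view). */
struct nvme_status {
    uint16_t p   : 1;  /* phase tag */
    uint16_t sc  : 8;  /* status code: 0x08 = Command Aborted due to SQ Deletion */
    uint16_t sct : 3;  /* status code type: 0x0 = generic command status */
    uint16_t crd : 2;  /* command retry delay */
    uint16_t m   : 1;  /* more status information available */
    uint16_t dnr : 1;  /* do not retry */
};

int main(void)
{
    /* Reconstruct the status printed as "(00/08) ... p:0 m:0 dnr:0". */
    struct nvme_status st = { .p = 0, .sc = 0x08, .sct = 0x0, .m = 0, .dnr = 0 };

    printf("sct:0x%02x sc:0x%02x p:%u m:%u dnr:%u\n",
           st.sct, st.sc, st.p, st.m, st.dnr);
    return 0;
}
```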
00:17:20.537 [2024-04-16 12:45:19.532789] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:17:20.537 [2024-04-16 12:45:19.532820] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:17:20.537 [2024-04-16 12:45:19.532838] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:17:20.537 [2024-04-16 12:45:19.532856] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
[log condensed: bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe logs "Unable to perform failover, already in progress." three times, 12:45:19.532968-533020]
00:17:20.537 [2024-04-16 12:45:19.533124] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:17:20.537 [2024-04-16 12:45:19.533148] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
task offset: 30720 on job bdev=Nvme3n1 fails

Latency(us)
All ten bdevperf jobs ran with Core Mask 0x1, workload: verify, depth: 64, IO size: 65536 over Verification LBA range start 0x0 length 0x400, and every job ended in error after the runtime shown.

Device      runtime(s)     IOPS    MiB/s   Fail/s   TO/s    Average        min        max
Nvme1n1           0.90   212.90    13.31    70.97   0.00  222891.24   11893.57  259425.47
Nvme2n1           0.92   144.81     9.05    69.68   0.00  289157.05   20291.89  257872.02
Nvme3n1           0.90   213.56    13.35    71.19   0.00  213051.35   23592.96  250104.79
Nvme4n1           0.90   213.30    13.33    71.10   0.00  208756.81   19126.80  256318.58
Nvme5n1           0.92   138.88     8.68    69.44   0.00  279501.43   23010.42  253211.69
Nvme6n1           0.92   138.41     8.65    69.20   0.00  274505.58   20097.71  260978.92
Nvme7n1           0.93   137.93     8.62    68.96   0.00  269479.63   21068.61  259425.47
Nvme8n1           0.93   137.46     8.59    68.73   0.00  264541.68   23204.60  285834.05
Nvme9n1           0.93   136.99     8.56    68.50   0.00  259672.75   19612.25  295154.73
Nvme10n1          0.94   136.37     8.52    68.19   0.00  255207.60   32234.00  281173.71
==========================================================================================
Total                   1610.61   100.66   695.96   0.00  250243.48   11893.57  295154.73

00:17:20.537 [2024-04-16 12:45:19.559598] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:17:20.537 [2024-04-16 12:45:19.559679] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
[log condensed: the first reconnect attempts fail for tqpairs 0xcd07d0, 0xe8c4c0, 0xd062e0 and 0xe04e60 (addr=10.0.0.2, port=4420) -- each attempt logs posix.c:1037 "connect() failed, errno = 111" twice, then nvme_tcp.c:2371 "sock connection error" and nvme_tcp.c: 322 "The recv state of tqpair=... is same with the state(5) to be set", 12:45:19.559993-561352]
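[editor's note] As a sanity check on the reconstructed table, the Total row is the column sum of the ten job rows (212.90 + 144.81 + ... + 136.37 = 1610.61 IOPS, and likewise 100.66 MiB/s); a few lines of C reproduce it:

```c
#include <stdio.h>

int main(void)
{
    /* Per-device IOPS and MiB/s copied from the bdevperf table above. */
    double iops[] = { 212.90, 144.81, 213.56, 213.30, 138.88,
                      138.41, 137.93, 137.46, 136.99, 136.37 };
    double mibs[] = { 13.31, 9.05, 13.35, 13.33, 8.68,
                      8.65, 8.62, 8.59, 8.56, 8.52 };
    double iops_sum = 0, mibs_sum = 0;

    for (int i = 0; i < 10; i++) {
        iops_sum += iops[i];
        mibs_sum += mibs[i];
    }
    /* Prints 1610.61 and 100.66, matching the Total row. */
    printf("IOPS total: %.2f, MiB/s total: %.2f\n", iops_sum, mibs_sum);
    return 0;
}
```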
00:17:20.537 [2024-04-16 12:45:19.563302] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:17:20.537 [2024-04-16 12:45:19.563333] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
[log condensed: the same connect()/recv-state failure sequence repeats for tqpairs 0xcd1130, 0xe06f20 and 0xe8d2d0 (addr=10.0.0.2, port=4420), 12:45:19.563578-564578]
[log condensed: nvme_tcp.c:2173:nvme_tcp_qpair_process_completions then reports "Failed to flush tqpair=... (9): Bad file descriptor" for 0xcd07d0, 0xe8c4c0, 0xd062e0 and 0xe04e60, and bdev_nvme.c:2877 logs "Unable to perform failover, already in progress." five more times, 12:45:19.564605-564803]
00:17:20.537 [2024-04-16 12:45:19.564891] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
[log condensed: connect attempts for tqpairs 0x8b08f0 and 0xcc6040 fail the same way, and flushing tqpairs 0xcd1130, 0xe06f20 and 0xe8d2d0 fails with "(9): Bad file descriptor", 12:45:19.565104-565749]
[log condensed: nvme_ctrlr.c then logs the triad "Ctrlr is in error state" (4040:nvme_ctrlr_process_init), "controller reinitialization failed" (1749:spdk_nvme_ctrlr_reconnect_poll_async) and "in failed state." (1041:nvme_ctrlr_fail) for cnode2, cnode5, cnode6 and cnode7, 12:45:19.565772-565932]
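[editor's note] The cnode2/5/6/7 triad above is the visible half of a reconnect state machine: each disconnected controller is polled for reinitialization, and when the transport cannot come back it is moved to a terminal failed state, after which bdev_nvme reports the reset as failed (see below). A hedged C sketch of that control flow; try_reconnect() and MAX_ATTEMPTS are hypothetical stand-ins, not SPDK's internals:

```c
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical transport probe; in this run every attempt fails
 * because the target side has already been shut down. */
static bool try_reconnect(const char *nqn)
{
    (void)nqn;
    return false;
}

enum { MAX_ATTEMPTS = 1 };

static void reconnect_poll(const char *nqn)
{
    for (int i = 0; i < MAX_ATTEMPTS; i++)
        if (try_reconnect(nqn))
            return;

    /* Mirrors the three messages seen in the log for each controller. */
    fprintf(stderr, "[%s] Ctrlr is in error state\n", nqn);
    fprintf(stderr, "[%s] controller reinitialization failed\n", nqn);
    fprintf(stderr, "[%s] in failed state.\n", nqn);
}

int main(void)
{
    const char *nqns[] = {
        "nqn.2016-06.io.spdk:cnode2", "nqn.2016-06.io.spdk:cnode5",
        "nqn.2016-06.io.spdk:cnode6", "nqn.2016-06.io.spdk:cnode7",
    };
    for (int i = 0; i < 4; i++)
        reconnect_poll(nqns[i]);
    return 0;
}
```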
00:17:20.537 [2024-04-16 12:45:19.566035] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:20.537 [2024-04-16 12:45:19.566055] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:20.537 [2024-04-16 12:45:19.566067] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:20.537 [2024-04-16 12:45:19.566079] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:20.537 [2024-04-16 12:45:19.566256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:20.537 [2024-04-16 12:45:19.566415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:20.795 [2024-04-16 12:45:19.566439] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcf20e0 with addr=10.0.0.2, port=4420 00:17:20.795 [2024-04-16 12:45:19.566454] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf20e0 is same with the state(5) to be set 00:17:20.795 [2024-04-16 12:45:19.566473] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8b08f0 (9): Bad file descriptor 00:17:20.795 [2024-04-16 12:45:19.566492] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcc6040 (9): Bad file descriptor 00:17:20.795 [2024-04-16 12:45:19.566508] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:17:20.795 [2024-04-16 12:45:19.566521] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:17:20.795 [2024-04-16 12:45:19.566533] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:17:20.795 [2024-04-16 12:45:19.566550] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:17:20.795 [2024-04-16 12:45:19.566570] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:17:20.795 [2024-04-16 12:45:19.566584] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:17:20.795 [2024-04-16 12:45:19.566600] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:17:20.795 [2024-04-16 12:45:19.566613] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:17:20.795 [2024-04-16 12:45:19.566631] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:17:20.795 [2024-04-16 12:45:19.566671] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:20.795 [2024-04-16 12:45:19.566689] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:20.795 [2024-04-16 12:45:19.566700] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:17:20.795 [2024-04-16 12:45:19.566716] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcf20e0 (9): Bad file descriptor 00:17:20.795 [2024-04-16 12:45:19.566733] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:20.795 [2024-04-16 12:45:19.566745] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:17:20.795 [2024-04-16 12:45:19.566758] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:20.795 [2024-04-16 12:45:19.566774] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:17:20.795 [2024-04-16 12:45:19.566788] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:17:20.796 [2024-04-16 12:45:19.566800] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:17:20.796 [2024-04-16 12:45:19.566836] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:20.796 [2024-04-16 12:45:19.566853] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:20.796 [2024-04-16 12:45:19.566865] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:17:20.796 [2024-04-16 12:45:19.566877] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:17:20.796 [2024-04-16 12:45:19.566890] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:17:20.796 [2024-04-16 12:45:19.566927] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
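errno = 111 in the connect() failures above is ECONNREFUSED on Linux: the target side has already been torn down at this point in shutdown_tc3, so every reconnect attempted by the controller resets is refused and all ten controllers settle into the failed state, which is what this test case is exercising. A quick way to decode a raw errno from a log like this, outside the harness:

    python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
    # prints: ECONNREFUSED - Connection refused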
00:17:21.054 12:45:20 -- target/shutdown.sh@136 -- # nvmfpid= 00:17:21.054 12:45:20 -- target/shutdown.sh@139 -- # sleep 1 00:17:21.999 12:45:21 -- target/shutdown.sh@142 -- # kill -9 1208828 00:17:21.999 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (1208828) - No such process 00:17:21.999 12:45:21 -- target/shutdown.sh@142 -- # true 00:17:22.000 12:45:21 -- target/shutdown.sh@144 -- # stoptarget 00:17:22.000 12:45:21 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:17:22.000 12:45:21 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:17:22.000 12:45:21 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:17:22.000 12:45:21 -- target/shutdown.sh@45 -- # nvmftestfini 00:17:22.000 12:45:21 -- nvmf/common.sh@477 -- # nvmfcleanup 00:17:22.000 12:45:21 -- nvmf/common.sh@117 -- # sync 00:17:22.000 12:45:21 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:22.000 12:45:21 -- nvmf/common.sh@120 -- # set +e 00:17:22.000 12:45:21 -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:22.000 12:45:21 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:22.000 rmmod nvme_tcp 00:17:22.000 rmmod nvme_fabrics 00:17:22.000 rmmod nvme_keyring 00:17:22.258 12:45:21 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:22.258 12:45:21 -- nvmf/common.sh@124 -- # set -e 00:17:22.258 12:45:21 -- nvmf/common.sh@125 -- # return 0 00:17:22.258 12:45:21 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:17:22.258 12:45:21 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:17:22.258 12:45:21 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:17:22.258 12:45:21 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:17:22.258 12:45:21 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:22.258 12:45:21 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:22.258 12:45:21 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:22.258 12:45:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:22.258 12:45:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:24.164 12:45:23 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:24.164 00:17:24.164 real 0m8.451s 00:17:24.164 user 0m22.653s 00:17:24.164 sys 0m1.534s 00:17:24.164 12:45:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:24.164 12:45:23 -- common/autotest_common.sh@10 -- # set +x 00:17:24.164 ************************************ 00:17:24.164 END TEST nvmf_shutdown_tc3 00:17:24.164 ************************************ 00:17:24.164 12:45:23 -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:17:24.164 00:17:24.164 real 0m30.054s 00:17:24.164 user 1m24.946s 00:17:24.164 sys 0m6.930s 00:17:24.164 12:45:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:24.164 12:45:23 -- common/autotest_common.sh@10 -- # set +x 00:17:24.164 ************************************ 00:17:24.164 END TEST nvmf_shutdown 00:17:24.164 ************************************ 00:17:24.164 12:45:23 -- nvmf/nvmf.sh@84 -- # timing_exit target 00:17:24.164 12:45:23 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:24.164 12:45:23 -- common/autotest_common.sh@10 -- # set +x 00:17:24.164 12:45:23 -- nvmf/nvmf.sh@86 -- # timing_enter host 00:17:24.164 12:45:23 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:24.164 12:45:23 -- common/autotest_common.sh@10 -- # set +x 00:17:24.164 
12:45:23 -- nvmf/nvmf.sh@88 -- # [[ 0 -eq 0 ]] 00:17:24.164 12:45:23 -- nvmf/nvmf.sh@89 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:17:24.164 12:45:23 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:24.164 12:45:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:24.164 12:45:23 -- common/autotest_common.sh@10 -- # set +x 00:17:24.423 ************************************ 00:17:24.423 START TEST nvmf_multicontroller 00:17:24.423 ************************************ 00:17:24.423 12:45:23 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:17:24.423 * Looking for test storage... 00:17:24.423 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:17:24.423 12:45:23 -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:24.423 12:45:23 -- nvmf/common.sh@7 -- # uname -s 00:17:24.423 12:45:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:24.423 12:45:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:24.423 12:45:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:24.423 12:45:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:24.423 12:45:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:24.423 12:45:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:24.423 12:45:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:24.423 12:45:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:24.423 12:45:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:24.423 12:45:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:24.423 12:45:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:17:24.423 12:45:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:17:24.423 12:45:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:24.423 12:45:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:24.423 12:45:23 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:24.423 12:45:23 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:24.423 12:45:23 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:24.423 12:45:23 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:24.423 12:45:23 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:24.423 12:45:23 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:24.423 12:45:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:24.423 12:45:23 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:24.423 12:45:23 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:24.423 12:45:23 -- paths/export.sh@5 -- # export PATH 00:17:24.423 12:45:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:24.423 12:45:23 -- nvmf/common.sh@47 -- # : 0 00:17:24.423 12:45:23 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:24.423 12:45:23 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:24.423 12:45:23 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:24.423 12:45:23 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:24.423 12:45:23 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:24.423 12:45:23 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:24.423 12:45:23 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:24.423 12:45:23 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:24.423 12:45:23 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:24.423 12:45:23 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:24.423 12:45:23 -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:17:24.423 12:45:23 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:17:24.423 12:45:23 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:24.423 12:45:23 -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:17:24.423 12:45:23 -- host/multicontroller.sh@23 -- # nvmftestinit 00:17:24.423 12:45:23 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:17:24.423 12:45:23 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:24.423 12:45:23 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:24.423 12:45:23 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:24.423 12:45:23 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:24.423 12:45:23 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:24.423 12:45:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:24.423 12:45:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
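The eval '_remove_spdk_ns 14> /dev/null' pattern traced above is the harness's per-command xtrace suppression. A minimal sketch of the idiom, assuming (as the fd number in the trace suggests) that the harness keeps BASH_XTRACEFD on fd 14; the _remove_spdk_ns body below is an illustrative stub, not the real helper:

    #!/usr/bin/env bash
    exec 14>&2                     # assumption: trace output is routed through fd 14
    BASH_XTRACEFD=14
    set -x
    _remove_spdk_ns() {            # stub; the real helper tears down every spdk netns
        ip netns delete cvl_0_0_ns_spdk 2> /dev/null || true
    }
    _remove_spdk_ns                        # inner commands traced to fd 14 (stderr here)
    eval '_remove_spdk_ns 14> /dev/null'   # fd 14 points at /dev/null for this call only,
                                           # so the noisy teardown leaves no trace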
00:17:24.423 12:45:23 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:17:24.423 12:45:23 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:17:24.423 12:45:23 -- nvmf/common.sh@285 -- # xtrace_disable 00:17:24.423 12:45:23 -- common/autotest_common.sh@10 -- # set +x 00:17:26.959 12:45:25 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:26.959 12:45:25 -- nvmf/common.sh@291 -- # pci_devs=() 00:17:26.959 12:45:25 -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:26.959 12:45:25 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:26.959 12:45:25 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:26.959 12:45:25 -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:26.959 12:45:25 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:26.959 12:45:25 -- nvmf/common.sh@295 -- # net_devs=() 00:17:26.959 12:45:25 -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:26.959 12:45:25 -- nvmf/common.sh@296 -- # e810=() 00:17:26.959 12:45:25 -- nvmf/common.sh@296 -- # local -ga e810 00:17:26.959 12:45:25 -- nvmf/common.sh@297 -- # x722=() 00:17:26.959 12:45:25 -- nvmf/common.sh@297 -- # local -ga x722 00:17:26.959 12:45:25 -- nvmf/common.sh@298 -- # mlx=() 00:17:26.959 12:45:25 -- nvmf/common.sh@298 -- # local -ga mlx 00:17:26.959 12:45:25 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:26.959 12:45:25 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:26.959 12:45:25 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:26.959 12:45:25 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:26.959 12:45:25 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:26.959 12:45:25 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:26.959 12:45:25 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:26.959 12:45:25 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:26.959 12:45:25 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:26.959 12:45:25 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:26.959 12:45:25 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:26.959 12:45:25 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:26.959 12:45:25 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:26.959 12:45:25 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:26.959 12:45:25 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:26.959 12:45:25 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:26.959 12:45:25 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:26.959 12:45:25 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:26.959 12:45:25 -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:17:26.959 Found 0000:82:00.0 (0x8086 - 0x159b) 00:17:26.959 12:45:25 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:26.959 12:45:25 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:26.959 12:45:25 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:26.959 12:45:25 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:26.959 12:45:25 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:26.959 12:45:25 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:26.959 12:45:25 -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:17:26.959 Found 0000:82:00.1 (0x8086 - 0x159b) 00:17:26.959 12:45:25 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
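What gather_supported_nvmf_pci_devs is doing above is bucketing NICs by PCI vendor:device ID (0x1592/0x159b for Intel E810, 0x37d2 for X722, plus a list of Mellanox IDs) and, since this is an e810 phy run, keeping only the E810 set. A rough standalone equivalent of that probe; the IDs are copied from the trace, but the lspci loop itself is a sketch, not the harness's code:

    # list Intel E810 NICs (0x8086:0x159b, as matched above) and the driver bound to each
    for dev in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
        drv=$(basename "$(readlink -f "/sys/bus/pci/devices/$dev/driver")")
        echo "Found $dev (0x8086 - 0x159b), driver: $drv"   # expect ice, as in the trace
    done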
00:17:26.959 12:45:25 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:26.959 12:45:25 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:26.959 12:45:25 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:26.959 12:45:25 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:26.959 12:45:25 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:26.959 12:45:25 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:26.959 12:45:25 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:26.959 12:45:25 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:26.959 12:45:25 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:26.959 12:45:25 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:26.959 12:45:25 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:26.959 12:45:25 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:17:26.959 Found net devices under 0000:82:00.0: cvl_0_0 00:17:26.959 12:45:25 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:26.959 12:45:25 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:26.959 12:45:25 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:26.959 12:45:25 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:26.960 12:45:25 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:26.960 12:45:25 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:17:26.960 Found net devices under 0000:82:00.1: cvl_0_1 00:17:26.960 12:45:25 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:26.960 12:45:25 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:17:26.960 12:45:25 -- nvmf/common.sh@403 -- # is_hw=yes 00:17:26.960 12:45:25 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:17:26.960 12:45:25 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:17:26.960 12:45:25 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:17:26.960 12:45:25 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:26.960 12:45:25 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:26.960 12:45:25 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:26.960 12:45:25 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:26.960 12:45:25 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:26.960 12:45:25 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:26.960 12:45:25 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:26.960 12:45:25 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:26.960 12:45:25 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:26.960 12:45:25 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:26.960 12:45:25 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:26.960 12:45:25 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:26.960 12:45:25 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:26.960 12:45:25 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:26.960 12:45:26 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:26.960 12:45:26 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:26.960 12:45:26 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:27.233 12:45:26 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:27.233 12:45:26 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp 
--dport 4420 -j ACCEPT 00:17:27.233 12:45:26 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:27.233 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:27.233 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.192 ms 00:17:27.233 00:17:27.233 --- 10.0.0.2 ping statistics --- 00:17:27.233 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:27.233 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:17:27.233 12:45:26 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:27.233 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:27.233 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:17:27.233 00:17:27.233 --- 10.0.0.1 ping statistics --- 00:17:27.233 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:27.233 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:17:27.233 12:45:26 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:27.233 12:45:26 -- nvmf/common.sh@411 -- # return 0 00:17:27.233 12:45:26 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:27.233 12:45:26 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:27.233 12:45:26 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:17:27.233 12:45:26 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:17:27.233 12:45:26 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:27.233 12:45:26 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:17:27.233 12:45:26 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:17:27.233 12:45:26 -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:17:27.233 12:45:26 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:27.233 12:45:26 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:27.233 12:45:26 -- common/autotest_common.sh@10 -- # set +x 00:17:27.233 12:45:26 -- nvmf/common.sh@470 -- # nvmfpid=1211763 00:17:27.233 12:45:26 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:17:27.233 12:45:26 -- nvmf/common.sh@471 -- # waitforlisten 1211763 00:17:27.233 12:45:26 -- common/autotest_common.sh@817 -- # '[' -z 1211763 ']' 00:17:27.233 12:45:26 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:27.233 12:45:26 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:27.233 12:45:26 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:27.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:27.234 12:45:26 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:27.234 12:45:26 -- common/autotest_common.sh@10 -- # set +x 00:17:27.234 [2024-04-16 12:45:26.148397] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 00:17:27.234 [2024-04-16 12:45:26.148493] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:27.234 EAL: No free 2048 kB hugepages reported on node 1 00:17:27.234 [2024-04-16 12:45:26.231822] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:27.495 [2024-04-16 12:45:26.341522] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
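For reference, the network bring-up that nvmf_tcp_init traced above reduces to the sequence below, every command lifted verbatim from the trace: the target NIC (cvl_0_0) moves into a private namespace with 10.0.0.2, the initiator NIC (cvl_0_1) stays in the root namespace with 10.0.0.1, the NVMe/TCP port is opened, and reachability is verified in both directions:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target NIC into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                     # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target -> initiator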
00:17:27.495 [2024-04-16 12:45:26.341590] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:27.495 [2024-04-16 12:45:26.341621] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:27.495 [2024-04-16 12:45:26.341633] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:27.495 [2024-04-16 12:45:26.341651] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:27.495 [2024-04-16 12:45:26.344587] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:27.495 [2024-04-16 12:45:26.344632] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:27.495 [2024-04-16 12:45:26.344636] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:28.060 12:45:27 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:28.060 12:45:27 -- common/autotest_common.sh@850 -- # return 0 00:17:28.060 12:45:27 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:28.060 12:45:27 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:28.060 12:45:27 -- common/autotest_common.sh@10 -- # set +x 00:17:28.317 12:45:27 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:28.317 12:45:27 -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:28.317 12:45:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:28.317 12:45:27 -- common/autotest_common.sh@10 -- # set +x 00:17:28.317 [2024-04-16 12:45:27.145331] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:28.317 12:45:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:28.317 12:45:27 -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:28.317 12:45:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:28.317 12:45:27 -- common/autotest_common.sh@10 -- # set +x 00:17:28.317 Malloc0 00:17:28.317 12:45:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:28.317 12:45:27 -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:28.317 12:45:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:28.317 12:45:27 -- common/autotest_common.sh@10 -- # set +x 00:17:28.317 12:45:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:28.317 12:45:27 -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:28.317 12:45:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:28.317 12:45:27 -- common/autotest_common.sh@10 -- # set +x 00:17:28.317 12:45:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:28.317 12:45:27 -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:28.318 12:45:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:28.318 12:45:27 -- common/autotest_common.sh@10 -- # set +x 00:17:28.318 [2024-04-16 12:45:27.205591] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:28.318 12:45:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:28.318 12:45:27 -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:17:28.318 12:45:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:28.318 12:45:27 
-- common/autotest_common.sh@10 -- # set +x 00:17:28.318 [2024-04-16 12:45:27.213436] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:17:28.318 12:45:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:28.318 12:45:27 -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:17:28.318 12:45:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:28.318 12:45:27 -- common/autotest_common.sh@10 -- # set +x 00:17:28.318 Malloc1 00:17:28.318 12:45:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:28.318 12:45:27 -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:17:28.318 12:45:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:28.318 12:45:27 -- common/autotest_common.sh@10 -- # set +x 00:17:28.318 12:45:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:28.318 12:45:27 -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:17:28.318 12:45:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:28.318 12:45:27 -- common/autotest_common.sh@10 -- # set +x 00:17:28.318 12:45:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:28.318 12:45:27 -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:17:28.318 12:45:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:28.318 12:45:27 -- common/autotest_common.sh@10 -- # set +x 00:17:28.318 12:45:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:28.318 12:45:27 -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:17:28.318 12:45:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:28.318 12:45:27 -- common/autotest_common.sh@10 -- # set +x 00:17:28.318 12:45:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:28.318 12:45:27 -- host/multicontroller.sh@44 -- # bdevperf_pid=1211924 00:17:28.318 12:45:27 -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:28.318 12:45:27 -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:17:28.318 12:45:27 -- host/multicontroller.sh@47 -- # waitforlisten 1211924 /var/tmp/bdevperf.sock 00:17:28.318 12:45:27 -- common/autotest_common.sh@817 -- # '[' -z 1211924 ']' 00:17:28.318 12:45:27 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:28.318 12:45:27 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:28.318 12:45:27 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:28.318 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
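rpc_cmd in the trace is the harness's wrapper around SPDK's scripts/rpc.py, so the multicontroller setup just performed is equivalent to the calls below (arguments copied from the trace; only the cnode1 half is shown, cnode2 repeats it with Malloc1 and serial SPDK00000000000002):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421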
00:17:28.318 12:45:27 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:28.318 12:45:27 -- common/autotest_common.sh@10 -- # set +x 00:17:29.249 12:45:28 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:29.249 12:45:28 -- common/autotest_common.sh@850 -- # return 0 00:17:29.249 12:45:28 -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:17:29.249 12:45:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:29.249 12:45:28 -- common/autotest_common.sh@10 -- # set +x 00:17:29.508 NVMe0n1 00:17:29.508 12:45:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:29.508 12:45:28 -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:29.508 12:45:28 -- host/multicontroller.sh@54 -- # grep -c NVMe 00:17:29.508 12:45:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:29.508 12:45:28 -- common/autotest_common.sh@10 -- # set +x 00:17:29.508 12:45:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:29.508 1 00:17:29.508 12:45:28 -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:17:29.508 12:45:28 -- common/autotest_common.sh@638 -- # local es=0 00:17:29.508 12:45:28 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:17:29.508 12:45:28 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:17:29.508 12:45:28 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:29.508 12:45:28 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:17:29.508 12:45:28 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:29.508 12:45:28 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:17:29.508 12:45:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:29.508 12:45:28 -- common/autotest_common.sh@10 -- # set +x 00:17:29.508 request: 00:17:29.508 { 00:17:29.508 "name": "NVMe0", 00:17:29.508 "trtype": "tcp", 00:17:29.508 "traddr": "10.0.0.2", 00:17:29.508 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:17:29.508 "hostaddr": "10.0.0.2", 00:17:29.508 "hostsvcid": "60000", 00:17:29.508 "adrfam": "ipv4", 00:17:29.508 "trsvcid": "4420", 00:17:29.508 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:29.508 "method": "bdev_nvme_attach_controller", 00:17:29.508 "req_id": 1 00:17:29.508 } 00:17:29.508 Got JSON-RPC error response 00:17:29.508 response: 00:17:29.508 { 00:17:29.508 "code": -114, 00:17:29.508 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:17:29.508 } 00:17:29.508 12:45:28 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:17:29.508 12:45:28 -- common/autotest_common.sh@641 -- # es=1 00:17:29.508 12:45:28 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:17:29.508 12:45:28 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:17:29.508 12:45:28 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:17:29.508 12:45:28 -- host/multicontroller.sh@65 -- 
# NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:17:29.508 12:45:28 -- common/autotest_common.sh@638 -- # local es=0 00:17:29.508 12:45:28 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:17:29.508 12:45:28 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:17:29.508 12:45:28 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:29.508 12:45:28 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:17:29.508 12:45:28 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:29.508 12:45:28 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:17:29.508 12:45:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:29.508 12:45:28 -- common/autotest_common.sh@10 -- # set +x 00:17:29.508 request: 00:17:29.508 { 00:17:29.508 "name": "NVMe0", 00:17:29.508 "trtype": "tcp", 00:17:29.508 "traddr": "10.0.0.2", 00:17:29.508 "hostaddr": "10.0.0.2", 00:17:29.508 "hostsvcid": "60000", 00:17:29.508 "adrfam": "ipv4", 00:17:29.508 "trsvcid": "4420", 00:17:29.508 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:17:29.508 "method": "bdev_nvme_attach_controller", 00:17:29.508 "req_id": 1 00:17:29.508 } 00:17:29.508 Got JSON-RPC error response 00:17:29.508 response: 00:17:29.508 { 00:17:29.508 "code": -114, 00:17:29.508 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:17:29.508 } 00:17:29.508 12:45:28 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:17:29.508 12:45:28 -- common/autotest_common.sh@641 -- # es=1 00:17:29.508 12:45:28 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:17:29.508 12:45:28 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:17:29.508 12:45:28 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:17:29.508 12:45:28 -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:17:29.508 12:45:28 -- common/autotest_common.sh@638 -- # local es=0 00:17:29.508 12:45:28 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:17:29.508 12:45:28 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:17:29.508 12:45:28 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:29.508 12:45:28 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:17:29.508 12:45:28 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:29.508 12:45:28 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:17:29.508 12:45:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:29.508 12:45:28 -- common/autotest_common.sh@10 -- # set +x 00:17:29.508 request: 00:17:29.508 { 00:17:29.508 "name": "NVMe0", 00:17:29.508 "trtype": "tcp", 00:17:29.508 "traddr": "10.0.0.2", 00:17:29.508 "hostaddr": 
"10.0.0.2", 00:17:29.508 "hostsvcid": "60000", 00:17:29.508 "adrfam": "ipv4", 00:17:29.508 "trsvcid": "4420", 00:17:29.508 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:29.508 "multipath": "disable", 00:17:29.508 "method": "bdev_nvme_attach_controller", 00:17:29.508 "req_id": 1 00:17:29.508 } 00:17:29.508 Got JSON-RPC error response 00:17:29.508 response: 00:17:29.508 { 00:17:29.508 "code": -114, 00:17:29.508 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:17:29.508 } 00:17:29.508 12:45:28 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:17:29.508 12:45:28 -- common/autotest_common.sh@641 -- # es=1 00:17:29.508 12:45:28 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:17:29.508 12:45:28 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:17:29.508 12:45:28 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:17:29.508 12:45:28 -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:17:29.508 12:45:28 -- common/autotest_common.sh@638 -- # local es=0 00:17:29.508 12:45:28 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:17:29.508 12:45:28 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:17:29.508 12:45:28 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:29.508 12:45:28 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:17:29.509 12:45:28 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:29.509 12:45:28 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:17:29.509 12:45:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:29.509 12:45:28 -- common/autotest_common.sh@10 -- # set +x 00:17:29.509 request: 00:17:29.509 { 00:17:29.509 "name": "NVMe0", 00:17:29.509 "trtype": "tcp", 00:17:29.509 "traddr": "10.0.0.2", 00:17:29.509 "hostaddr": "10.0.0.2", 00:17:29.509 "hostsvcid": "60000", 00:17:29.509 "adrfam": "ipv4", 00:17:29.509 "trsvcid": "4420", 00:17:29.509 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:29.509 "multipath": "failover", 00:17:29.509 "method": "bdev_nvme_attach_controller", 00:17:29.509 "req_id": 1 00:17:29.509 } 00:17:29.509 Got JSON-RPC error response 00:17:29.509 response: 00:17:29.509 { 00:17:29.509 "code": -114, 00:17:29.509 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:17:29.509 } 00:17:29.509 12:45:28 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:17:29.509 12:45:28 -- common/autotest_common.sh@641 -- # es=1 00:17:29.509 12:45:28 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:17:29.509 12:45:28 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:17:29.509 12:45:28 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:17:29.509 12:45:28 -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:29.509 12:45:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:29.509 12:45:28 -- common/autotest_common.sh@10 -- # set +x 00:17:29.509 00:17:29.509 12:45:28 -- common/autotest_common.sh@577 -- # 
[[ 0 == 0 ]] 00:17:29.509 12:45:28 -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:29.509 12:45:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:29.509 12:45:28 -- common/autotest_common.sh@10 -- # set +x 00:17:29.509 12:45:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:29.509 12:45:28 -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:17:29.509 12:45:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:29.509 12:45:28 -- common/autotest_common.sh@10 -- # set +x 00:17:29.767 00:17:29.767 12:45:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:29.767 12:45:28 -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:29.767 12:45:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:29.767 12:45:28 -- host/multicontroller.sh@90 -- # grep -c NVMe 00:17:29.767 12:45:28 -- common/autotest_common.sh@10 -- # set +x 00:17:29.767 12:45:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:29.767 12:45:28 -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:17:29.767 12:45:28 -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:30.700 0 00:17:30.700 12:45:29 -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:17:30.700 12:45:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:30.700 12:45:29 -- common/autotest_common.sh@10 -- # set +x 00:17:30.958 12:45:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:30.958 12:45:29 -- host/multicontroller.sh@100 -- # killprocess 1211924 00:17:30.958 12:45:29 -- common/autotest_common.sh@936 -- # '[' -z 1211924 ']' 00:17:30.958 12:45:29 -- common/autotest_common.sh@940 -- # kill -0 1211924 00:17:30.958 12:45:29 -- common/autotest_common.sh@941 -- # uname 00:17:30.958 12:45:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:30.958 12:45:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1211924 00:17:30.958 12:45:29 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:30.958 12:45:29 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:30.958 12:45:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1211924' 00:17:30.958 killing process with pid 1211924 00:17:30.958 12:45:29 -- common/autotest_common.sh@955 -- # kill 1211924 00:17:30.958 12:45:29 -- common/autotest_common.sh@960 -- # wait 1211924 00:17:31.216 12:45:30 -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:31.217 12:45:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:31.217 12:45:30 -- common/autotest_common.sh@10 -- # set +x 00:17:31.217 12:45:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:31.217 12:45:30 -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:17:31.217 12:45:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:31.217 12:45:30 -- common/autotest_common.sh@10 -- # set +x 00:17:31.217 12:45:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:31.217 12:45:30 -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 
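The three rejected bdev_nvme_attach_controller calls above are negative tests: each expects JSON-RPC error -114 because a controller named NVMe0 already exists with a conflicting hostnqn, target subsystem, or multipath mode. The NOT wrapper turns that expected failure into a pass. A minimal sketch of the idea; the real helper in autotest_common.sh also refuses to invert crash-like statuses, which is what the (( es > 128 )) checks in the trace are about:

    NOT() {
        local es=0
        "$@" || es=$?
        (( es > 128 )) && return "$es"   # killed by a signal: propagate, do not invert
        (( es != 0 ))                    # ordinary failure -> success
    }
    # expected to fail with -114: same controller name, different subsystem
    NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000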
00:17:31.217 12:45:30 -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:17:31.217 12:45:30 -- common/autotest_common.sh@1598 -- # read -r file 00:17:31.217 12:45:30 -- common/autotest_common.sh@1597 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:17:31.217 12:45:30 -- common/autotest_common.sh@1597 -- # sort -u 00:17:31.217 12:45:30 -- common/autotest_common.sh@1599 -- # cat 00:17:31.217 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:17:31.217 [2024-04-16 12:45:27.316487] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 00:17:31.217 [2024-04-16 12:45:27.316588] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1211924 ] 00:17:31.217 EAL: No free 2048 kB hugepages reported on node 1 00:17:31.217 [2024-04-16 12:45:27.384743] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:31.217 [2024-04-16 12:45:27.491986] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:31.217 [2024-04-16 12:45:28.607731] bdev.c:4548:bdev_name_add: *ERROR*: Bdev name bc52a415-fa3a-4c55-88ae-8ea705c9d7cb already exists 00:17:31.217 [2024-04-16 12:45:28.607771] bdev.c:7651:bdev_register: *ERROR*: Unable to add uuid:bc52a415-fa3a-4c55-88ae-8ea705c9d7cb alias for bdev NVMe1n1 00:17:31.217 [2024-04-16 12:45:28.607806] bdev_nvme.c:4272:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:17:31.217 Running I/O for 1 seconds... 00:17:31.217 00:17:31.217 Latency(us) 00:17:31.217 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:31.217 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:17:31.217 NVMe0n1 : 1.01 17150.30 66.99 0.00 0.00 7432.57 7087.60 16214.09 00:17:31.217 =================================================================================================================== 00:17:31.217 Total : 17150.30 66.99 0.00 0.00 7432.57 7087.60 16214.09 00:17:31.217 Received shutdown signal, test time was about 1.000000 seconds 00:17:31.217 00:17:31.217 Latency(us) 00:17:31.217 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:31.217 =================================================================================================================== 00:17:31.217 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:31.217 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:17:31.217 12:45:30 -- common/autotest_common.sh@1604 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:17:31.217 12:45:30 -- common/autotest_common.sh@1598 -- # read -r file 00:17:31.217 12:45:30 -- host/multicontroller.sh@108 -- # nvmftestfini 00:17:31.217 12:45:30 -- nvmf/common.sh@477 -- # nvmfcleanup 00:17:31.217 12:45:30 -- nvmf/common.sh@117 -- # sync 00:17:31.217 12:45:30 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:31.217 12:45:30 -- nvmf/common.sh@120 -- # set +e 00:17:31.217 12:45:30 -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:31.217 12:45:30 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:31.217 rmmod nvme_tcp 00:17:31.217 rmmod nvme_fabrics 00:17:31.217 rmmod nvme_keyring 00:17:31.217 12:45:30 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:31.217 12:45:30 -- nvmf/common.sh@124 -- # set -e 
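Two things stand out in the try.txt dump above. First, the bdev name collision ("Bdev name bc52a415-... already exists") is the multicontroller scenario itself: NVMe1 apparently reaches a namespace whose UUID is already registered via NVMe0, so the duplicate registration is refused. Second, nvmfcleanup unloads the kernel modules inside a retry loop, because modprobe -r can fail transiently while TCP connections drain. A sketch of that unload idiom; the retry count and module names are from the trace, the sleep between attempts is an assumption:

    set +e                                  # tolerate failed attempts
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && break    # -v prints the rmmod lines seen in the log
        sleep 1                             # assumed pacing; not visible in the trace
    done
    modprobe -v -r nvme-fabrics
    set -e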
00:17:31.217 12:45:30 -- nvmf/common.sh@125 -- # return 0 00:17:31.217 12:45:30 -- nvmf/common.sh@478 -- # '[' -n 1211763 ']' 00:17:31.217 12:45:30 -- nvmf/common.sh@479 -- # killprocess 1211763 00:17:31.217 12:45:30 -- common/autotest_common.sh@936 -- # '[' -z 1211763 ']' 00:17:31.217 12:45:30 -- common/autotest_common.sh@940 -- # kill -0 1211763 00:17:31.217 12:45:30 -- common/autotest_common.sh@941 -- # uname 00:17:31.217 12:45:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:31.217 12:45:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1211763 00:17:31.217 12:45:30 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:31.217 12:45:30 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:31.217 12:45:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1211763' 00:17:31.217 killing process with pid 1211763 00:17:31.217 12:45:30 -- common/autotest_common.sh@955 -- # kill 1211763 00:17:31.217 12:45:30 -- common/autotest_common.sh@960 -- # wait 1211763 00:17:31.475 12:45:30 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:17:31.475 12:45:30 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:17:31.475 12:45:30 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:17:31.475 12:45:30 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:31.475 12:45:30 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:31.475 12:45:30 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:31.475 12:45:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:31.475 12:45:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:34.014 12:45:32 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:34.014 00:17:34.014 real 0m9.213s 00:17:34.014 user 0m16.299s 00:17:34.014 sys 0m2.780s 00:17:34.014 12:45:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:34.014 12:45:32 -- common/autotest_common.sh@10 -- # set +x 00:17:34.014 ************************************ 00:17:34.014 END TEST nvmf_multicontroller 00:17:34.014 ************************************ 00:17:34.014 12:45:32 -- nvmf/nvmf.sh@90 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:17:34.014 12:45:32 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:34.014 12:45:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:34.014 12:45:32 -- common/autotest_common.sh@10 -- # set +x 00:17:34.014 ************************************ 00:17:34.014 START TEST nvmf_aer 00:17:34.014 ************************************ 00:17:34.014 12:45:32 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:17:34.014 * Looking for test storage... 
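killprocess, traced here for the nvmf target (pid 1211763) and earlier for bdevperf (pid 1211924), follows a consistent pattern: check the pid is alive, check by command name that it is not the sudo wrapper, signal it, then wait so the process is reaped before the next test starts. A condensed sketch reconstructed from the trace; the real helper's internals may differ:

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return                                       # already gone
        [[ $(ps --no-headers -o comm= "$pid") == sudo ]] && return 1   # never kill sudo itself
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"    # reap the child; frees its RPC socket and listeners
    }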
00:17:34.014 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:17:34.014 12:45:32 -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:34.014 12:45:32 -- nvmf/common.sh@7 -- # uname -s 00:17:34.014 12:45:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:34.014 12:45:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:34.014 12:45:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:34.014 12:45:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:34.014 12:45:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:34.014 12:45:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:34.014 12:45:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:34.014 12:45:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:34.014 12:45:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:34.014 12:45:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:34.014 12:45:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:17:34.014 12:45:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:17:34.014 12:45:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:34.014 12:45:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:34.014 12:45:32 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:34.014 12:45:32 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:34.014 12:45:32 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:34.014 12:45:32 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:34.014 12:45:32 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:34.014 12:45:32 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:34.014 12:45:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:34.014 12:45:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:34.014 12:45:32 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:34.014 12:45:32 -- paths/export.sh@5 -- # export PATH 00:17:34.015 12:45:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:34.015 12:45:32 -- nvmf/common.sh@47 -- # : 0 00:17:34.015 12:45:32 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:34.015 12:45:32 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:34.015 12:45:32 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:34.015 12:45:32 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:34.015 12:45:32 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:34.015 12:45:32 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:34.015 12:45:32 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:34.015 12:45:32 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:34.015 12:45:32 -- host/aer.sh@11 -- # nvmftestinit 00:17:34.015 12:45:32 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:17:34.015 12:45:32 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:34.015 12:45:32 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:34.015 12:45:32 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:34.015 12:45:32 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:34.015 12:45:32 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:34.015 12:45:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:34.015 12:45:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:34.015 12:45:32 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:17:34.015 12:45:32 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:17:34.015 12:45:32 -- nvmf/common.sh@285 -- # xtrace_disable 00:17:34.015 12:45:32 -- common/autotest_common.sh@10 -- # set +x 00:17:36.542 12:45:35 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:36.542 12:45:35 -- nvmf/common.sh@291 -- # pci_devs=() 00:17:36.542 12:45:35 -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:36.542 12:45:35 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:36.542 12:45:35 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:36.542 12:45:35 -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:36.542 12:45:35 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:36.542 12:45:35 -- nvmf/common.sh@295 -- # net_devs=() 00:17:36.542 12:45:35 -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:36.542 12:45:35 -- nvmf/common.sh@296 -- # e810=() 00:17:36.542 12:45:35 -- nvmf/common.sh@296 -- # local -ga e810 00:17:36.542 12:45:35 -- nvmf/common.sh@297 -- # x722=() 00:17:36.542 
12:45:35 -- nvmf/common.sh@297 -- # local -ga x722 00:17:36.542 12:45:35 -- nvmf/common.sh@298 -- # mlx=() 00:17:36.542 12:45:35 -- nvmf/common.sh@298 -- # local -ga mlx 00:17:36.542 12:45:35 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:36.542 12:45:35 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:36.542 12:45:35 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:36.542 12:45:35 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:36.542 12:45:35 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:36.542 12:45:35 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:36.542 12:45:35 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:36.542 12:45:35 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:36.542 12:45:35 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:36.542 12:45:35 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:36.542 12:45:35 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:36.542 12:45:35 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:36.542 12:45:35 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:36.542 12:45:35 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:36.542 12:45:35 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:36.542 12:45:35 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:36.542 12:45:35 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:36.542 12:45:35 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:36.542 12:45:35 -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:17:36.542 Found 0000:82:00.0 (0x8086 - 0x159b) 00:17:36.542 12:45:35 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:36.542 12:45:35 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:36.542 12:45:35 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:36.542 12:45:35 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:36.542 12:45:35 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:36.542 12:45:35 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:36.542 12:45:35 -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:17:36.542 Found 0000:82:00.1 (0x8086 - 0x159b) 00:17:36.542 12:45:35 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:36.542 12:45:35 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:36.542 12:45:35 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:36.543 12:45:35 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:36.543 12:45:35 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:36.543 12:45:35 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:36.543 12:45:35 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:36.543 12:45:35 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:36.543 12:45:35 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:36.543 12:45:35 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:36.543 12:45:35 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:36.543 12:45:35 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:36.543 12:45:35 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:17:36.543 Found net devices under 0000:82:00.0: cvl_0_0 00:17:36.543 12:45:35 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:36.543 12:45:35 -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:36.543 12:45:35 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:36.543 12:45:35 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:36.543 12:45:35 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:36.543 12:45:35 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:17:36.543 Found net devices under 0000:82:00.1: cvl_0_1 00:17:36.543 12:45:35 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:36.543 12:45:35 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:17:36.543 12:45:35 -- nvmf/common.sh@403 -- # is_hw=yes 00:17:36.543 12:45:35 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:17:36.543 12:45:35 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:17:36.543 12:45:35 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:17:36.543 12:45:35 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:36.543 12:45:35 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:36.543 12:45:35 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:36.543 12:45:35 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:36.543 12:45:35 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:36.543 12:45:35 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:36.543 12:45:35 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:36.543 12:45:35 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:36.543 12:45:35 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:36.543 12:45:35 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:36.543 12:45:35 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:36.543 12:45:35 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:36.543 12:45:35 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:36.543 12:45:35 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:36.543 12:45:35 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:36.543 12:45:35 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:36.543 12:45:35 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:36.543 12:45:35 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:36.543 12:45:35 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:36.543 12:45:35 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:36.543 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:36.543 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.201 ms 00:17:36.543 00:17:36.543 --- 10.0.0.2 ping statistics --- 00:17:36.543 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:36.543 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:17:36.543 12:45:35 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:36.543 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:36.543 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:17:36.543 00:17:36.543 --- 10.0.0.1 ping statistics --- 00:17:36.543 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:36.543 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:17:36.543 12:45:35 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:36.543 12:45:35 -- nvmf/common.sh@411 -- # return 0 00:17:36.543 12:45:35 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:36.543 12:45:35 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:36.543 12:45:35 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:17:36.543 12:45:35 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:17:36.543 12:45:35 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:36.543 12:45:35 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:17:36.543 12:45:35 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:17:36.543 12:45:35 -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:17:36.543 12:45:35 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:36.543 12:45:35 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:36.543 12:45:35 -- common/autotest_common.sh@10 -- # set +x 00:17:36.543 12:45:35 -- nvmf/common.sh@470 -- # nvmfpid=1214567 00:17:36.543 12:45:35 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:36.543 12:45:35 -- nvmf/common.sh@471 -- # waitforlisten 1214567 00:17:36.543 12:45:35 -- common/autotest_common.sh@817 -- # '[' -z 1214567 ']' 00:17:36.543 12:45:35 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:36.543 12:45:35 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:36.543 12:45:35 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:36.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:36.543 12:45:35 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:36.543 12:45:35 -- common/autotest_common.sh@10 -- # set +x 00:17:36.543 [2024-04-16 12:45:35.352650] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 00:17:36.543 [2024-04-16 12:45:35.352717] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:36.543 EAL: No free 2048 kB hugepages reported on node 1 00:17:36.543 [2024-04-16 12:45:35.424980] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:36.543 [2024-04-16 12:45:35.533815] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:36.543 [2024-04-16 12:45:35.533883] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:36.543 [2024-04-16 12:45:35.533913] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:36.543 [2024-04-16 12:45:35.533925] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:36.543 [2024-04-16 12:45:35.533935] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:36.543 [2024-04-16 12:45:35.534016] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:36.543 [2024-04-16 12:45:35.534049] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:36.543 [2024-04-16 12:45:35.534109] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:36.543 [2024-04-16 12:45:35.534111] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:36.803 12:45:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:36.803 12:45:35 -- common/autotest_common.sh@850 -- # return 0 00:17:36.803 12:45:35 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:36.803 12:45:35 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:36.803 12:45:35 -- common/autotest_common.sh@10 -- # set +x 00:17:36.803 12:45:35 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:36.803 12:45:35 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:36.803 12:45:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:36.803 12:45:35 -- common/autotest_common.sh@10 -- # set +x 00:17:36.803 [2024-04-16 12:45:35.695430] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:36.803 12:45:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:36.803 12:45:35 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:17:36.803 12:45:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:36.803 12:45:35 -- common/autotest_common.sh@10 -- # set +x 00:17:36.803 Malloc0 00:17:36.803 12:45:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:36.803 12:45:35 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:17:36.803 12:45:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:36.803 12:45:35 -- common/autotest_common.sh@10 -- # set +x 00:17:36.803 12:45:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:36.803 12:45:35 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:36.803 12:45:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:36.803 12:45:35 -- common/autotest_common.sh@10 -- # set +x 00:17:36.803 12:45:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:36.803 12:45:35 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:36.803 12:45:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:36.803 12:45:35 -- common/autotest_common.sh@10 -- # set +x 00:17:36.803 [2024-04-16 12:45:35.749359] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:36.803 12:45:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:36.803 12:45:35 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:17:36.803 12:45:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:36.803 12:45:35 -- common/autotest_common.sh@10 -- # set +x 00:17:36.803 [2024-04-16 12:45:35.757058] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:17:36.803 [ 00:17:36.803 { 00:17:36.803 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:36.803 "subtype": "Discovery", 00:17:36.803 "listen_addresses": [], 00:17:36.803 "allow_any_host": true, 00:17:36.803 "hosts": [] 00:17:36.803 }, 00:17:36.803 { 00:17:36.803 "nqn": "nqn.2016-06.io.spdk:cnode1", 
00:17:36.803 "subtype": "NVMe", 00:17:36.803 "listen_addresses": [ 00:17:36.803 { 00:17:36.803 "transport": "TCP", 00:17:36.803 "trtype": "TCP", 00:17:36.803 "adrfam": "IPv4", 00:17:36.803 "traddr": "10.0.0.2", 00:17:36.803 "trsvcid": "4420" 00:17:36.803 } 00:17:36.803 ], 00:17:36.803 "allow_any_host": true, 00:17:36.803 "hosts": [], 00:17:36.803 "serial_number": "SPDK00000000000001", 00:17:36.803 "model_number": "SPDK bdev Controller", 00:17:36.803 "max_namespaces": 2, 00:17:36.803 "min_cntlid": 1, 00:17:36.803 "max_cntlid": 65519, 00:17:36.803 "namespaces": [ 00:17:36.803 { 00:17:36.803 "nsid": 1, 00:17:36.803 "bdev_name": "Malloc0", 00:17:36.803 "name": "Malloc0", 00:17:36.803 "nguid": "99FE96D96D654650BE9BA48AAC83323B", 00:17:36.803 "uuid": "99fe96d9-6d65-4650-be9b-a48aac83323b" 00:17:36.803 } 00:17:36.803 ] 00:17:36.803 } 00:17:36.803 ] 00:17:36.803 12:45:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:36.803 12:45:35 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:17:36.803 12:45:35 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:17:36.803 12:45:35 -- host/aer.sh@33 -- # aerpid=1214591 00:17:36.803 12:45:35 -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:17:36.803 12:45:35 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:17:36.803 12:45:35 -- common/autotest_common.sh@1251 -- # local i=0 00:17:36.803 12:45:35 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:36.803 12:45:35 -- common/autotest_common.sh@1253 -- # '[' 0 -lt 200 ']' 00:17:36.803 12:45:35 -- common/autotest_common.sh@1254 -- # i=1 00:17:36.803 12:45:35 -- common/autotest_common.sh@1255 -- # sleep 0.1 00:17:36.803 EAL: No free 2048 kB hugepages reported on node 1 00:17:37.068 12:45:35 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:37.068 12:45:35 -- common/autotest_common.sh@1253 -- # '[' 1 -lt 200 ']' 00:17:37.068 12:45:35 -- common/autotest_common.sh@1254 -- # i=2 00:17:37.068 12:45:35 -- common/autotest_common.sh@1255 -- # sleep 0.1 00:17:37.068 12:45:35 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:37.068 12:45:35 -- common/autotest_common.sh@1253 -- # '[' 2 -lt 200 ']' 00:17:37.068 12:45:35 -- common/autotest_common.sh@1254 -- # i=3 00:17:37.068 12:45:35 -- common/autotest_common.sh@1255 -- # sleep 0.1 00:17:37.068 12:45:36 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:37.068 12:45:36 -- common/autotest_common.sh@1258 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:17:37.068 12:45:36 -- common/autotest_common.sh@1262 -- # return 0 00:17:37.068 12:45:36 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:17:37.068 12:45:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:37.068 12:45:36 -- common/autotest_common.sh@10 -- # set +x 00:17:37.068 Malloc1 00:17:37.068 12:45:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:37.068 12:45:36 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:17:37.068 12:45:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:37.069 12:45:36 -- common/autotest_common.sh@10 -- # set +x 00:17:37.327 12:45:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:37.327 12:45:36 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:17:37.327 12:45:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:37.327 12:45:36 -- common/autotest_common.sh@10 -- # set +x 00:17:37.327 Asynchronous Event Request test 00:17:37.327 Attaching to 10.0.0.2 00:17:37.327 Attached to 10.0.0.2 00:17:37.327 Registering asynchronous event callbacks... 00:17:37.327 Starting namespace attribute notice tests for all controllers... 00:17:37.327 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:17:37.327 aer_cb - Changed Namespace 00:17:37.327 Cleaning up... 00:17:37.327 [ 00:17:37.327 { 00:17:37.327 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:37.327 "subtype": "Discovery", 00:17:37.327 "listen_addresses": [], 00:17:37.327 "allow_any_host": true, 00:17:37.327 "hosts": [] 00:17:37.327 }, 00:17:37.327 { 00:17:37.327 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:37.327 "subtype": "NVMe", 00:17:37.327 "listen_addresses": [ 00:17:37.327 { 00:17:37.327 "transport": "TCP", 00:17:37.327 "trtype": "TCP", 00:17:37.327 "adrfam": "IPv4", 00:17:37.327 "traddr": "10.0.0.2", 00:17:37.327 "trsvcid": "4420" 00:17:37.327 } 00:17:37.327 ], 00:17:37.327 "allow_any_host": true, 00:17:37.327 "hosts": [], 00:17:37.327 "serial_number": "SPDK00000000000001", 00:17:37.327 "model_number": "SPDK bdev Controller", 00:17:37.327 "max_namespaces": 2, 00:17:37.327 "min_cntlid": 1, 00:17:37.327 "max_cntlid": 65519, 00:17:37.327 "namespaces": [ 00:17:37.327 { 00:17:37.327 "nsid": 1, 00:17:37.327 "bdev_name": "Malloc0", 00:17:37.327 "name": "Malloc0", 00:17:37.327 "nguid": "99FE96D96D654650BE9BA48AAC83323B", 00:17:37.327 "uuid": "99fe96d9-6d65-4650-be9b-a48aac83323b" 00:17:37.327 }, 00:17:37.327 { 00:17:37.327 "nsid": 2, 00:17:37.327 "bdev_name": "Malloc1", 00:17:37.327 "name": "Malloc1", 00:17:37.327 "nguid": "767B8C7CF6204E91A11A16D67B54A4C6", 00:17:37.327 "uuid": "767b8c7c-f620-4e91-a11a-16d67b54a4c6" 00:17:37.327 } 00:17:37.327 ] 00:17:37.327 } 00:17:37.327 ] 00:17:37.327 12:45:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:37.327 12:45:36 -- host/aer.sh@43 -- # wait 1214591 00:17:37.327 12:45:36 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:17:37.327 12:45:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:37.327 12:45:36 -- common/autotest_common.sh@10 -- # set +x 00:17:37.327 12:45:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:37.327 12:45:36 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:17:37.327 12:45:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:37.327 12:45:36 -- common/autotest_common.sh@10 -- # set +x 00:17:37.327 12:45:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:37.327 12:45:36 -- host/aer.sh@47 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:37.327 12:45:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:37.327 12:45:36 -- common/autotest_common.sh@10 -- # set +x 00:17:37.327 12:45:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:37.327 12:45:36 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:17:37.327 12:45:36 -- host/aer.sh@51 -- # nvmftestfini 00:17:37.327 12:45:36 -- nvmf/common.sh@477 -- # nvmfcleanup 00:17:37.327 12:45:36 -- nvmf/common.sh@117 -- # sync 00:17:37.327 12:45:36 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:37.327 12:45:36 -- nvmf/common.sh@120 -- # set +e 00:17:37.327 12:45:36 -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:37.327 12:45:36 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:37.327 rmmod nvme_tcp 00:17:37.327 rmmod nvme_fabrics 00:17:37.327 rmmod nvme_keyring 00:17:37.327 12:45:36 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:37.327 12:45:36 -- nvmf/common.sh@124 -- # set -e 00:17:37.327 12:45:36 -- nvmf/common.sh@125 -- # return 0 00:17:37.327 12:45:36 -- nvmf/common.sh@478 -- # '[' -n 1214567 ']' 00:17:37.327 12:45:36 -- nvmf/common.sh@479 -- # killprocess 1214567 00:17:37.327 12:45:36 -- common/autotest_common.sh@936 -- # '[' -z 1214567 ']' 00:17:37.327 12:45:36 -- common/autotest_common.sh@940 -- # kill -0 1214567 00:17:37.327 12:45:36 -- common/autotest_common.sh@941 -- # uname 00:17:37.327 12:45:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:37.327 12:45:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1214567 00:17:37.327 12:45:36 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:37.327 12:45:36 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:37.327 12:45:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1214567' 00:17:37.327 killing process with pid 1214567 00:17:37.327 12:45:36 -- common/autotest_common.sh@955 -- # kill 1214567 00:17:37.327 [2024-04-16 12:45:36.314836] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:17:37.327 12:45:36 -- common/autotest_common.sh@960 -- # wait 1214567 00:17:37.585 12:45:36 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:17:37.585 12:45:36 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:17:37.585 12:45:36 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:17:37.585 12:45:36 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:37.585 12:45:36 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:37.585 12:45:36 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:37.585 12:45:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:37.585 12:45:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:40.116 12:45:38 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:40.116 00:17:40.116 real 0m5.994s 00:17:40.116 user 0m4.873s 00:17:40.116 sys 0m2.260s 00:17:40.116 12:45:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:40.116 12:45:38 -- common/autotest_common.sh@10 -- # set +x 00:17:40.116 ************************************ 00:17:40.116 END TEST nvmf_aer 00:17:40.116 ************************************ 00:17:40.116 12:45:38 -- nvmf/nvmf.sh@91 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:17:40.116 12:45:38 -- common/autotest_common.sh@1087 -- # 
'[' 3 -le 1 ']' 00:17:40.116 12:45:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:40.116 12:45:38 -- common/autotest_common.sh@10 -- # set +x 00:17:40.116 ************************************ 00:17:40.116 START TEST nvmf_async_init 00:17:40.116 ************************************ 00:17:40.116 12:45:38 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:17:40.116 * Looking for test storage... 00:17:40.116 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:17:40.116 12:45:38 -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:40.116 12:45:38 -- nvmf/common.sh@7 -- # uname -s 00:17:40.116 12:45:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:40.116 12:45:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:40.116 12:45:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:40.116 12:45:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:40.116 12:45:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:40.116 12:45:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:40.116 12:45:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:40.116 12:45:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:40.116 12:45:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:40.116 12:45:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:40.116 12:45:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:17:40.116 12:45:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:17:40.116 12:45:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:40.116 12:45:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:40.116 12:45:38 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:40.116 12:45:38 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:40.116 12:45:38 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:40.116 12:45:38 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:40.116 12:45:38 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:40.116 12:45:38 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:40.116 12:45:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:40.116 12:45:38 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:40.116 12:45:38 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:40.116 12:45:38 -- paths/export.sh@5 -- # export PATH 00:17:40.116 12:45:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:40.116 12:45:38 -- nvmf/common.sh@47 -- # : 0 00:17:40.116 12:45:38 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:40.116 12:45:38 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:40.116 12:45:38 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:40.116 12:45:38 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:40.116 12:45:38 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:40.116 12:45:38 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:40.116 12:45:38 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:40.116 12:45:38 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:40.116 12:45:38 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:17:40.116 12:45:38 -- host/async_init.sh@14 -- # null_block_size=512 00:17:40.116 12:45:38 -- host/async_init.sh@15 -- # null_bdev=null0 00:17:40.116 12:45:38 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:17:40.116 12:45:38 -- host/async_init.sh@20 -- # uuidgen 00:17:40.116 12:45:38 -- host/async_init.sh@20 -- # tr -d - 00:17:40.116 12:45:38 -- host/async_init.sh@20 -- # nguid=a36f9e341057483cabdbce6a174d7246 00:17:40.116 12:45:38 -- host/async_init.sh@22 -- # nvmftestinit 00:17:40.116 12:45:38 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:17:40.116 12:45:38 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:40.116 12:45:38 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:40.116 12:45:38 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:40.116 12:45:38 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:40.116 12:45:38 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:40.116 12:45:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:40.116 12:45:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:40.116 
12:45:38 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:17:40.116 12:45:38 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:17:40.116 12:45:38 -- nvmf/common.sh@285 -- # xtrace_disable 00:17:40.116 12:45:38 -- common/autotest_common.sh@10 -- # set +x 00:17:42.645 12:45:41 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:42.645 12:45:41 -- nvmf/common.sh@291 -- # pci_devs=() 00:17:42.645 12:45:41 -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:42.645 12:45:41 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:42.645 12:45:41 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:42.645 12:45:41 -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:42.645 12:45:41 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:42.645 12:45:41 -- nvmf/common.sh@295 -- # net_devs=() 00:17:42.645 12:45:41 -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:42.645 12:45:41 -- nvmf/common.sh@296 -- # e810=() 00:17:42.645 12:45:41 -- nvmf/common.sh@296 -- # local -ga e810 00:17:42.645 12:45:41 -- nvmf/common.sh@297 -- # x722=() 00:17:42.645 12:45:41 -- nvmf/common.sh@297 -- # local -ga x722 00:17:42.645 12:45:41 -- nvmf/common.sh@298 -- # mlx=() 00:17:42.645 12:45:41 -- nvmf/common.sh@298 -- # local -ga mlx 00:17:42.645 12:45:41 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:42.645 12:45:41 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:42.645 12:45:41 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:42.645 12:45:41 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:42.645 12:45:41 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:42.645 12:45:41 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:42.645 12:45:41 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:42.645 12:45:41 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:42.645 12:45:41 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:42.645 12:45:41 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:42.645 12:45:41 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:42.645 12:45:41 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:42.645 12:45:41 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:42.645 12:45:41 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:42.645 12:45:41 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:42.645 12:45:41 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:42.645 12:45:41 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:42.645 12:45:41 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:42.645 12:45:41 -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:17:42.645 Found 0000:82:00.0 (0x8086 - 0x159b) 00:17:42.645 12:45:41 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:42.645 12:45:41 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:42.645 12:45:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:42.645 12:45:41 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:42.645 12:45:41 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:42.645 12:45:41 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:42.645 12:45:41 -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:17:42.645 Found 0000:82:00.1 (0x8086 - 0x159b) 00:17:42.645 12:45:41 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:42.645 
12:45:41 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:42.645 12:45:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:42.645 12:45:41 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:42.645 12:45:41 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:42.645 12:45:41 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:42.645 12:45:41 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:42.645 12:45:41 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:42.645 12:45:41 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:42.645 12:45:41 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:42.645 12:45:41 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:42.645 12:45:41 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:42.645 12:45:41 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:17:42.645 Found net devices under 0000:82:00.0: cvl_0_0 00:17:42.645 12:45:41 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:42.645 12:45:41 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:42.645 12:45:41 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:42.645 12:45:41 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:42.645 12:45:41 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:42.645 12:45:41 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:17:42.645 Found net devices under 0000:82:00.1: cvl_0_1 00:17:42.645 12:45:41 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:42.645 12:45:41 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:17:42.645 12:45:41 -- nvmf/common.sh@403 -- # is_hw=yes 00:17:42.645 12:45:41 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:17:42.645 12:45:41 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:17:42.645 12:45:41 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:17:42.645 12:45:41 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:42.645 12:45:41 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:42.645 12:45:41 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:42.645 12:45:41 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:42.646 12:45:41 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:42.646 12:45:41 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:42.646 12:45:41 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:42.646 12:45:41 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:42.646 12:45:41 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:42.646 12:45:41 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:42.646 12:45:41 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:42.646 12:45:41 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:42.646 12:45:41 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:42.646 12:45:41 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:42.646 12:45:41 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:42.646 12:45:41 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:42.646 12:45:41 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:42.646 12:45:41 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:42.646 12:45:41 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:17:42.646 12:45:41 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:42.646 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:42.646 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.259 ms 00:17:42.646 00:17:42.646 --- 10.0.0.2 ping statistics --- 00:17:42.646 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:42.646 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:17:42.646 12:45:41 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:42.646 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:42.646 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.196 ms 00:17:42.646 00:17:42.646 --- 10.0.0.1 ping statistics --- 00:17:42.646 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:42.646 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:17:42.646 12:45:41 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:42.646 12:45:41 -- nvmf/common.sh@411 -- # return 0 00:17:42.646 12:45:41 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:42.646 12:45:41 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:42.646 12:45:41 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:17:42.646 12:45:41 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:17:42.646 12:45:41 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:42.646 12:45:41 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:17:42.646 12:45:41 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:17:42.646 12:45:41 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:17:42.646 12:45:41 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:42.646 12:45:41 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:42.646 12:45:41 -- common/autotest_common.sh@10 -- # set +x 00:17:42.646 12:45:41 -- nvmf/common.sh@470 -- # nvmfpid=1216951 00:17:42.646 12:45:41 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:42.646 12:45:41 -- nvmf/common.sh@471 -- # waitforlisten 1216951 00:17:42.646 12:45:41 -- common/autotest_common.sh@817 -- # '[' -z 1216951 ']' 00:17:42.646 12:45:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:42.646 12:45:41 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:42.646 12:45:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:42.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:42.646 12:45:41 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:42.646 12:45:41 -- common/autotest_common.sh@10 -- # set +x 00:17:42.646 [2024-04-16 12:45:41.432127] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 00:17:42.646 [2024-04-16 12:45:41.432204] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:42.646 EAL: No free 2048 kB hugepages reported on node 1 00:17:42.646 [2024-04-16 12:45:41.512554] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:42.646 [2024-04-16 12:45:41.629721] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:42.646 [2024-04-16 12:45:41.629775] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:42.646 [2024-04-16 12:45:41.629790] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:42.646 [2024-04-16 12:45:41.629803] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:42.646 [2024-04-16 12:45:41.629815] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:42.646 [2024-04-16 12:45:41.629860] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:43.581 12:45:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:43.581 12:45:42 -- common/autotest_common.sh@850 -- # return 0 00:17:43.581 12:45:42 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:43.581 12:45:42 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:43.581 12:45:42 -- common/autotest_common.sh@10 -- # set +x 00:17:43.581 12:45:42 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:43.581 12:45:42 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:17:43.581 12:45:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:43.581 12:45:42 -- common/autotest_common.sh@10 -- # set +x 00:17:43.581 [2024-04-16 12:45:42.398793] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:43.581 12:45:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:43.581 12:45:42 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:17:43.581 12:45:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:43.581 12:45:42 -- common/autotest_common.sh@10 -- # set +x 00:17:43.581 null0 00:17:43.581 12:45:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:43.581 12:45:42 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:17:43.581 12:45:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:43.581 12:45:42 -- common/autotest_common.sh@10 -- # set +x 00:17:43.581 12:45:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:43.581 12:45:42 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:17:43.581 12:45:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:43.581 12:45:42 -- common/autotest_common.sh@10 -- # set +x 00:17:43.581 12:45:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:43.581 12:45:42 -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g a36f9e341057483cabdbce6a174d7246 00:17:43.581 12:45:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:43.581 12:45:42 -- common/autotest_common.sh@10 -- # set +x 00:17:43.581 12:45:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:43.581 12:45:42 -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:43.581 12:45:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:43.581 12:45:42 -- common/autotest_common.sh@10 -- # set +x 00:17:43.581 [2024-04-16 12:45:42.439057] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:43.581 12:45:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:43.581 12:45:42 -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:17:43.581 12:45:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:43.581 12:45:42 -- common/autotest_common.sh@10 -- # set +x 00:17:43.840 nvme0n1 00:17:43.840 
12:45:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:43.840 12:45:42 -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:17:43.840 12:45:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:43.840 12:45:42 -- common/autotest_common.sh@10 -- # set +x 00:17:43.840 [ 00:17:43.840 { 00:17:43.840 "name": "nvme0n1", 00:17:43.840 "aliases": [ 00:17:43.840 "a36f9e34-1057-483c-abdb-ce6a174d7246" 00:17:43.840 ], 00:17:43.840 "product_name": "NVMe disk", 00:17:43.840 "block_size": 512, 00:17:43.840 "num_blocks": 2097152, 00:17:43.840 "uuid": "a36f9e34-1057-483c-abdb-ce6a174d7246", 00:17:43.840 "assigned_rate_limits": { 00:17:43.840 "rw_ios_per_sec": 0, 00:17:43.840 "rw_mbytes_per_sec": 0, 00:17:43.840 "r_mbytes_per_sec": 0, 00:17:43.840 "w_mbytes_per_sec": 0 00:17:43.840 }, 00:17:43.840 "claimed": false, 00:17:43.840 "zoned": false, 00:17:43.840 "supported_io_types": { 00:17:43.840 "read": true, 00:17:43.840 "write": true, 00:17:43.840 "unmap": false, 00:17:43.840 "write_zeroes": true, 00:17:43.840 "flush": true, 00:17:43.840 "reset": true, 00:17:43.840 "compare": true, 00:17:43.840 "compare_and_write": true, 00:17:43.840 "abort": true, 00:17:43.840 "nvme_admin": true, 00:17:43.840 "nvme_io": true 00:17:43.840 }, 00:17:43.840 "memory_domains": [ 00:17:43.840 { 00:17:43.840 "dma_device_id": "system", 00:17:43.840 "dma_device_type": 1 00:17:43.840 } 00:17:43.840 ], 00:17:43.840 "driver_specific": { 00:17:43.840 "nvme": [ 00:17:43.840 { 00:17:43.840 "trid": { 00:17:43.840 "trtype": "TCP", 00:17:43.840 "adrfam": "IPv4", 00:17:43.840 "traddr": "10.0.0.2", 00:17:43.840 "trsvcid": "4420", 00:17:43.840 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:43.840 }, 00:17:43.840 "ctrlr_data": { 00:17:43.840 "cntlid": 1, 00:17:43.840 "vendor_id": "0x8086", 00:17:43.840 "model_number": "SPDK bdev Controller", 00:17:43.840 "serial_number": "00000000000000000000", 00:17:43.840 "firmware_revision": "24.05", 00:17:43.840 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:43.840 "oacs": { 00:17:43.840 "security": 0, 00:17:43.840 "format": 0, 00:17:43.840 "firmware": 0, 00:17:43.840 "ns_manage": 0 00:17:43.840 }, 00:17:43.840 "multi_ctrlr": true, 00:17:43.840 "ana_reporting": false 00:17:43.840 }, 00:17:43.840 "vs": { 00:17:43.840 "nvme_version": "1.3" 00:17:43.840 }, 00:17:43.840 "ns_data": { 00:17:43.840 "id": 1, 00:17:43.840 "can_share": true 00:17:43.840 } 00:17:43.840 } 00:17:43.840 ], 00:17:43.840 "mp_policy": "active_passive" 00:17:43.840 } 00:17:43.840 } 00:17:43.840 ] 00:17:43.840 12:45:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:43.840 12:45:42 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:17:43.840 12:45:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:43.840 12:45:42 -- common/autotest_common.sh@10 -- # set +x 00:17:43.840 [2024-04-16 12:45:42.691687] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:43.840 [2024-04-16 12:45:42.691775] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadc890 (9): Bad file descriptor 00:17:43.840 [2024-04-16 12:45:42.833719] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:17:43.840 12:45:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:43.840 12:45:42 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:17:43.840 12:45:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:43.840 12:45:42 -- common/autotest_common.sh@10 -- # set +x 00:17:43.840 [ 00:17:43.840 { 00:17:43.840 "name": "nvme0n1", 00:17:43.840 "aliases": [ 00:17:43.840 "a36f9e34-1057-483c-abdb-ce6a174d7246" 00:17:43.840 ], 00:17:43.840 "product_name": "NVMe disk", 00:17:43.840 "block_size": 512, 00:17:43.840 "num_blocks": 2097152, 00:17:43.840 "uuid": "a36f9e34-1057-483c-abdb-ce6a174d7246", 00:17:43.840 "assigned_rate_limits": { 00:17:43.840 "rw_ios_per_sec": 0, 00:17:43.840 "rw_mbytes_per_sec": 0, 00:17:43.840 "r_mbytes_per_sec": 0, 00:17:43.840 "w_mbytes_per_sec": 0 00:17:43.840 }, 00:17:43.840 "claimed": false, 00:17:43.840 "zoned": false, 00:17:43.840 "supported_io_types": { 00:17:43.840 "read": true, 00:17:43.840 "write": true, 00:17:43.840 "unmap": false, 00:17:43.840 "write_zeroes": true, 00:17:43.840 "flush": true, 00:17:43.840 "reset": true, 00:17:43.840 "compare": true, 00:17:43.840 "compare_and_write": true, 00:17:43.840 "abort": true, 00:17:43.840 "nvme_admin": true, 00:17:43.840 "nvme_io": true 00:17:43.840 }, 00:17:43.840 "memory_domains": [ 00:17:43.840 { 00:17:43.840 "dma_device_id": "system", 00:17:43.840 "dma_device_type": 1 00:17:43.840 } 00:17:43.840 ], 00:17:43.840 "driver_specific": { 00:17:43.840 "nvme": [ 00:17:43.840 { 00:17:43.840 "trid": { 00:17:43.840 "trtype": "TCP", 00:17:43.840 "adrfam": "IPv4", 00:17:43.840 "traddr": "10.0.0.2", 00:17:43.840 "trsvcid": "4420", 00:17:43.840 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:43.840 }, 00:17:43.840 "ctrlr_data": { 00:17:43.840 "cntlid": 2, 00:17:43.840 "vendor_id": "0x8086", 00:17:43.840 "model_number": "SPDK bdev Controller", 00:17:43.840 "serial_number": "00000000000000000000", 00:17:43.840 "firmware_revision": "24.05", 00:17:43.840 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:43.840 "oacs": { 00:17:43.840 "security": 0, 00:17:43.840 "format": 0, 00:17:43.840 "firmware": 0, 00:17:43.840 "ns_manage": 0 00:17:43.840 }, 00:17:43.840 "multi_ctrlr": true, 00:17:43.840 "ana_reporting": false 00:17:43.840 }, 00:17:43.840 "vs": { 00:17:43.840 "nvme_version": "1.3" 00:17:43.840 }, 00:17:43.840 "ns_data": { 00:17:43.840 "id": 1, 00:17:43.840 "can_share": true 00:17:43.840 } 00:17:43.840 } 00:17:43.840 ], 00:17:43.840 "mp_policy": "active_passive" 00:17:43.840 } 00:17:43.840 } 00:17:43.840 ] 00:17:43.840 12:45:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:43.840 12:45:42 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:43.840 12:45:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:43.840 12:45:42 -- common/autotest_common.sh@10 -- # set +x 00:17:43.840 12:45:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:43.840 12:45:42 -- host/async_init.sh@53 -- # mktemp 00:17:43.840 12:45:42 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.5qTqXfZLXK 00:17:43.840 12:45:42 -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:43.840 12:45:42 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.5qTqXfZLXK 00:17:43.840 12:45:42 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:17:43.840 12:45:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:43.840 12:45:42 -- common/autotest_common.sh@10 -- # set +x 00:17:43.840 12:45:42 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:43.840 12:45:42 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:17:43.840 12:45:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:43.840 12:45:42 -- common/autotest_common.sh@10 -- # set +x 00:17:43.840 [2024-04-16 12:45:42.884324] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:43.840 [2024-04-16 12:45:42.884451] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:17:43.840 12:45:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:43.840 12:45:42 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.5qTqXfZLXK 00:17:43.840 12:45:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:43.840 12:45:42 -- common/autotest_common.sh@10 -- # set +x 00:17:43.840 [2024-04-16 12:45:42.892353] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:43.840 12:45:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:43.840 12:45:42 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.5qTqXfZLXK 00:17:43.840 12:45:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:43.840 12:45:42 -- common/autotest_common.sh@10 -- # set +x 00:17:43.840 [2024-04-16 12:45:42.900366] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:43.840 [2024-04-16 12:45:42.900426] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:44.098 nvme0n1 00:17:44.098 12:45:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:44.098 12:45:42 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:17:44.098 12:45:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:44.098 12:45:42 -- common/autotest_common.sh@10 -- # set +x 00:17:44.098 [ 00:17:44.098 { 00:17:44.098 "name": "nvme0n1", 00:17:44.098 "aliases": [ 00:17:44.098 "a36f9e34-1057-483c-abdb-ce6a174d7246" 00:17:44.098 ], 00:17:44.098 "product_name": "NVMe disk", 00:17:44.098 "block_size": 512, 00:17:44.098 "num_blocks": 2097152, 00:17:44.098 "uuid": "a36f9e34-1057-483c-abdb-ce6a174d7246", 00:17:44.098 "assigned_rate_limits": { 00:17:44.098 "rw_ios_per_sec": 0, 00:17:44.098 "rw_mbytes_per_sec": 0, 00:17:44.098 "r_mbytes_per_sec": 0, 00:17:44.099 "w_mbytes_per_sec": 0 00:17:44.099 }, 00:17:44.099 "claimed": false, 00:17:44.099 "zoned": false, 00:17:44.099 "supported_io_types": { 00:17:44.099 "read": true, 00:17:44.099 "write": true, 00:17:44.099 "unmap": false, 00:17:44.099 "write_zeroes": true, 00:17:44.099 "flush": true, 00:17:44.099 "reset": true, 00:17:44.099 "compare": true, 00:17:44.099 "compare_and_write": true, 00:17:44.099 "abort": true, 00:17:44.099 "nvme_admin": true, 00:17:44.099 "nvme_io": true 00:17:44.099 }, 00:17:44.099 "memory_domains": [ 00:17:44.099 { 00:17:44.099 "dma_device_id": "system", 00:17:44.099 "dma_device_type": 1 00:17:44.099 } 00:17:44.099 ], 00:17:44.099 "driver_specific": { 00:17:44.099 "nvme": [ 00:17:44.099 { 00:17:44.099 "trid": { 00:17:44.099 "trtype": "TCP", 00:17:44.099 "adrfam": "IPv4", 00:17:44.099 "traddr": "10.0.0.2", 
00:17:44.099 "trsvcid": "4421", 00:17:44.099 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:44.099 }, 00:17:44.099 "ctrlr_data": { 00:17:44.099 "cntlid": 3, 00:17:44.099 "vendor_id": "0x8086", 00:17:44.099 "model_number": "SPDK bdev Controller", 00:17:44.099 "serial_number": "00000000000000000000", 00:17:44.099 "firmware_revision": "24.05", 00:17:44.099 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:44.099 "oacs": { 00:17:44.099 "security": 0, 00:17:44.099 "format": 0, 00:17:44.099 "firmware": 0, 00:17:44.099 "ns_manage": 0 00:17:44.099 }, 00:17:44.099 "multi_ctrlr": true, 00:17:44.099 "ana_reporting": false 00:17:44.099 }, 00:17:44.099 "vs": { 00:17:44.099 "nvme_version": "1.3" 00:17:44.099 }, 00:17:44.099 "ns_data": { 00:17:44.099 "id": 1, 00:17:44.099 "can_share": true 00:17:44.099 } 00:17:44.099 } 00:17:44.099 ], 00:17:44.099 "mp_policy": "active_passive" 00:17:44.099 } 00:17:44.099 } 00:17:44.099 ] 00:17:44.099 12:45:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:44.099 12:45:42 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:44.099 12:45:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:44.099 12:45:42 -- common/autotest_common.sh@10 -- # set +x 00:17:44.099 12:45:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:44.099 12:45:42 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.5qTqXfZLXK 00:17:44.099 12:45:43 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:17:44.099 12:45:43 -- host/async_init.sh@78 -- # nvmftestfini 00:17:44.099 12:45:43 -- nvmf/common.sh@477 -- # nvmfcleanup 00:17:44.099 12:45:43 -- nvmf/common.sh@117 -- # sync 00:17:44.099 12:45:43 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:44.099 12:45:43 -- nvmf/common.sh@120 -- # set +e 00:17:44.099 12:45:43 -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:44.099 12:45:43 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:44.099 rmmod nvme_tcp 00:17:44.099 rmmod nvme_fabrics 00:17:44.099 rmmod nvme_keyring 00:17:44.099 12:45:43 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:44.099 12:45:43 -- nvmf/common.sh@124 -- # set -e 00:17:44.099 12:45:43 -- nvmf/common.sh@125 -- # return 0 00:17:44.099 12:45:43 -- nvmf/common.sh@478 -- # '[' -n 1216951 ']' 00:17:44.099 12:45:43 -- nvmf/common.sh@479 -- # killprocess 1216951 00:17:44.099 12:45:43 -- common/autotest_common.sh@936 -- # '[' -z 1216951 ']' 00:17:44.099 12:45:43 -- common/autotest_common.sh@940 -- # kill -0 1216951 00:17:44.099 12:45:43 -- common/autotest_common.sh@941 -- # uname 00:17:44.099 12:45:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:44.099 12:45:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1216951 00:17:44.099 12:45:43 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:44.099 12:45:43 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:44.099 12:45:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1216951' 00:17:44.099 killing process with pid 1216951 00:17:44.099 12:45:43 -- common/autotest_common.sh@955 -- # kill 1216951 00:17:44.099 [2024-04-16 12:45:43.077781] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:44.099 [2024-04-16 12:45:43.077830] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:17:44.099 12:45:43 -- common/autotest_common.sh@960 -- # wait 1216951 00:17:44.356 12:45:43 -- 
nvmf/common.sh@481 -- # '[' '' == iso ']' 00:17:44.356 12:45:43 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:17:44.356 12:45:43 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:17:44.356 12:45:43 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:44.356 12:45:43 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:44.356 12:45:43 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:44.356 12:45:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:44.356 12:45:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:46.886 12:45:45 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:46.886 00:17:46.886 real 0m6.618s 00:17:46.887 user 0m3.018s 00:17:46.887 sys 0m2.220s 00:17:46.887 12:45:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:46.887 12:45:45 -- common/autotest_common.sh@10 -- # set +x 00:17:46.887 ************************************ 00:17:46.887 END TEST nvmf_async_init 00:17:46.887 ************************************ 00:17:46.887 12:45:45 -- nvmf/nvmf.sh@92 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:17:46.887 12:45:45 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:46.887 12:45:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:46.887 12:45:45 -- common/autotest_common.sh@10 -- # set +x 00:17:46.887 ************************************ 00:17:46.887 START TEST dma 00:17:46.887 ************************************ 00:17:46.887 12:45:45 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:17:46.887 * Looking for test storage... 00:17:46.887 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:17:46.887 12:45:45 -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:46.887 12:45:45 -- nvmf/common.sh@7 -- # uname -s 00:17:46.887 12:45:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:46.887 12:45:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:46.887 12:45:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:46.887 12:45:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:46.887 12:45:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:46.887 12:45:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:46.887 12:45:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:46.887 12:45:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:46.887 12:45:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:46.887 12:45:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:46.887 12:45:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:17:46.887 12:45:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:17:46.887 12:45:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:46.887 12:45:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:46.887 12:45:45 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:46.887 12:45:45 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:46.887 12:45:45 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:46.887 12:45:45 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:46.887 12:45:45 -- scripts/common.sh@510 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:46.887 12:45:45 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:46.887 12:45:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:46.887 12:45:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:46.887 12:45:45 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:46.887 12:45:45 -- paths/export.sh@5 -- # export PATH 00:17:46.887 12:45:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:46.887 12:45:45 -- nvmf/common.sh@47 -- # : 0 00:17:46.887 12:45:45 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:46.887 12:45:45 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:46.887 12:45:45 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:46.887 12:45:45 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:46.887 12:45:45 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:46.887 12:45:45 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:46.887 12:45:45 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:46.887 12:45:45 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:46.887 12:45:45 -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:17:46.887 12:45:45 -- host/dma.sh@13 -- # exit 0 00:17:46.887 00:17:46.887 real 0m0.073s 00:17:46.887 user 0m0.030s 00:17:46.887 sys 0m0.048s 00:17:46.887 12:45:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:46.887 12:45:45 -- common/autotest_common.sh@10 -- # set +x 00:17:46.887 ************************************ 00:17:46.887 END TEST dma 00:17:46.887 
************************************ 00:17:46.887 12:45:45 -- nvmf/nvmf.sh@95 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:17:46.887 12:45:45 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:46.887 12:45:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:46.887 12:45:45 -- common/autotest_common.sh@10 -- # set +x 00:17:46.887 ************************************ 00:17:46.887 START TEST nvmf_identify 00:17:46.887 ************************************ 00:17:46.887 12:45:45 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:17:46.887 * Looking for test storage... 00:17:46.887 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:17:46.887 12:45:45 -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:46.887 12:45:45 -- nvmf/common.sh@7 -- # uname -s 00:17:46.887 12:45:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:46.887 12:45:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:46.887 12:45:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:46.887 12:45:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:46.887 12:45:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:46.887 12:45:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:46.887 12:45:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:46.887 12:45:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:46.887 12:45:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:46.887 12:45:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:46.887 12:45:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:17:46.887 12:45:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:17:46.887 12:45:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:46.887 12:45:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:46.887 12:45:45 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:46.887 12:45:45 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:46.887 12:45:45 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:46.887 12:45:45 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:46.887 12:45:45 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:46.887 12:45:45 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:46.887 12:45:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:46.887 12:45:45 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:46.887 12:45:45 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:46.887 12:45:45 -- paths/export.sh@5 -- # export PATH 00:17:46.887 12:45:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:46.887 12:45:45 -- nvmf/common.sh@47 -- # : 0 00:17:46.887 12:45:45 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:46.887 12:45:45 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:46.887 12:45:45 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:46.887 12:45:45 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:46.887 12:45:45 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:46.887 12:45:45 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:46.887 12:45:45 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:46.888 12:45:45 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:46.888 12:45:45 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:46.888 12:45:45 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:46.888 12:45:45 -- host/identify.sh@14 -- # nvmftestinit 00:17:46.888 12:45:45 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:17:46.888 12:45:45 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:46.888 12:45:45 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:46.888 12:45:45 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:46.888 12:45:45 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:46.888 12:45:45 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:46.888 12:45:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:46.888 12:45:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:46.888 12:45:45 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:17:46.888 12:45:45 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:17:46.888 12:45:45 -- nvmf/common.sh@285 -- # xtrace_disable 00:17:46.888 12:45:45 -- common/autotest_common.sh@10 -- # set +x 00:17:49.425 12:45:48 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 
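Note: the gather_supported_nvmf_pci_devs trace that follows builds per-family whitelists of PCI device IDs (e810, x722, mlx) and keeps only matching functions. A minimal sketch of that pattern, assuming lspci as a stand-in for the script's pre-built pci_bus_cache lookup:

# Sketch: collect E810 functions (0x1592 / 0x159b) the way the whitelist scan
# traced below does. lspci is an assumption here; the real common.sh resolves
# device IDs from a cached PCI map rather than shelling out.
intel=0x8086
e810=(0x1592 0x159b)
pci_devs=()
for dev in "${e810[@]}"; do
    while read -r addr _; do
        [ -n "$addr" ] && pci_devs+=("$addr")
    done < <(lspci -Dn -d "${intel#0x}:${dev#0x}")
done
for pci in "${pci_devs[@]}"; do
    echo "Found $pci"
done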
00:17:49.425 12:45:48 -- nvmf/common.sh@291 -- # pci_devs=() 00:17:49.425 12:45:48 -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:49.425 12:45:48 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:49.425 12:45:48 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:49.425 12:45:48 -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:49.425 12:45:48 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:49.426 12:45:48 -- nvmf/common.sh@295 -- # net_devs=() 00:17:49.426 12:45:48 -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:49.426 12:45:48 -- nvmf/common.sh@296 -- # e810=() 00:17:49.426 12:45:48 -- nvmf/common.sh@296 -- # local -ga e810 00:17:49.426 12:45:48 -- nvmf/common.sh@297 -- # x722=() 00:17:49.426 12:45:48 -- nvmf/common.sh@297 -- # local -ga x722 00:17:49.426 12:45:48 -- nvmf/common.sh@298 -- # mlx=() 00:17:49.426 12:45:48 -- nvmf/common.sh@298 -- # local -ga mlx 00:17:49.426 12:45:48 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:49.426 12:45:48 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:49.426 12:45:48 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:49.426 12:45:48 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:49.426 12:45:48 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:49.426 12:45:48 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:49.426 12:45:48 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:49.426 12:45:48 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:49.426 12:45:48 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:49.426 12:45:48 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:49.426 12:45:48 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:49.426 12:45:48 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:49.426 12:45:48 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:49.426 12:45:48 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:49.426 12:45:48 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:49.426 12:45:48 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:49.426 12:45:48 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:49.426 12:45:48 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:49.426 12:45:48 -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:17:49.426 Found 0000:82:00.0 (0x8086 - 0x159b) 00:17:49.426 12:45:48 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:49.426 12:45:48 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:49.426 12:45:48 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:49.426 12:45:48 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:49.426 12:45:48 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:49.426 12:45:48 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:49.426 12:45:48 -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:17:49.426 Found 0000:82:00.1 (0x8086 - 0x159b) 00:17:49.426 12:45:48 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:49.426 12:45:48 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:49.426 12:45:48 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:49.426 12:45:48 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:49.426 12:45:48 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:49.426 12:45:48 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 
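Note: each surviving PCI function is then mapped to its kernel interface through sysfs, exactly as the pci_net_devs expansion in the trace below shows. In sketch form:

# Map every whitelisted PCI function to its netdev via sysfs, mirroring the
# pci_net_devs=(".../net/"*) expansion traced below.
net_devs=()
for pci in "${pci_devs[@]}"; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    pci_net_devs=("${pci_net_devs[@]##*/}")   # strip the sysfs path, keep interface names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done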
00:17:49.426 12:45:48 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:49.426 12:45:48 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:49.426 12:45:48 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:49.426 12:45:48 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:49.426 12:45:48 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:49.426 12:45:48 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:49.426 12:45:48 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:17:49.426 Found net devices under 0000:82:00.0: cvl_0_0 00:17:49.426 12:45:48 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:49.426 12:45:48 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:49.426 12:45:48 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:49.426 12:45:48 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:49.426 12:45:48 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:49.426 12:45:48 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:17:49.426 Found net devices under 0000:82:00.1: cvl_0_1 00:17:49.426 12:45:48 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:49.426 12:45:48 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:17:49.426 12:45:48 -- nvmf/common.sh@403 -- # is_hw=yes 00:17:49.426 12:45:48 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:17:49.426 12:45:48 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:17:49.426 12:45:48 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:17:49.426 12:45:48 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:49.426 12:45:48 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:49.426 12:45:48 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:49.426 12:45:48 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:49.426 12:45:48 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:49.426 12:45:48 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:49.426 12:45:48 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:49.426 12:45:48 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:49.426 12:45:48 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:49.426 12:45:48 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:49.426 12:45:48 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:49.426 12:45:48 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:49.426 12:45:48 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:49.426 12:45:48 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:49.426 12:45:48 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:49.426 12:45:48 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:49.426 12:45:48 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:49.426 12:45:48 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:49.426 12:45:48 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:49.426 12:45:48 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:49.426 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:49.426 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.201 ms 00:17:49.426 00:17:49.426 --- 10.0.0.2 ping statistics --- 00:17:49.426 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:49.426 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:17:49.426 12:45:48 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:49.426 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:49.426 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.245 ms 00:17:49.426 00:17:49.426 --- 10.0.0.1 ping statistics --- 00:17:49.426 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:49.426 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:17:49.426 12:45:48 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:49.426 12:45:48 -- nvmf/common.sh@411 -- # return 0 00:17:49.426 12:45:48 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:49.426 12:45:48 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:49.426 12:45:48 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:17:49.426 12:45:48 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:17:49.426 12:45:48 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:49.426 12:45:48 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:17:49.426 12:45:48 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:17:49.426 12:45:48 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:17:49.426 12:45:48 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:49.426 12:45:48 -- common/autotest_common.sh@10 -- # set +x 00:17:49.426 12:45:48 -- host/identify.sh@19 -- # nvmfpid=1219515 00:17:49.426 12:45:48 -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:49.426 12:45:48 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:49.426 12:45:48 -- host/identify.sh@23 -- # waitforlisten 1219515 00:17:49.426 12:45:48 -- common/autotest_common.sh@817 -- # '[' -z 1219515 ']' 00:17:49.426 12:45:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:49.426 12:45:48 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:49.426 12:45:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:49.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:49.426 12:45:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:49.426 12:45:48 -- common/autotest_common.sh@10 -- # set +x 00:17:49.426 [2024-04-16 12:45:48.457433] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 00:17:49.426 [2024-04-16 12:45:48.457517] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:49.684 EAL: No free 2048 kB hugepages reported on node 1 00:17:49.684 [2024-04-16 12:45:48.543329] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:49.684 [2024-04-16 12:45:48.661257] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:49.684 [2024-04-16 12:45:48.661305] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
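Note: before the target output continues, the nvmf_tcp_init sequence above reduces to a short recipe: move the target-side port into a private namespace, address both ends on 10.0.0.0/24, open TCP/4420, and prove reachability in both directions before any NVMe traffic. A consolidated sketch, with interface names as in this run:

# Target NIC port (cvl_0_0) goes into a namespace; its peer (cvl_0_1) stays in the root ns.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # admit NVMe/TCP
ping -c 1 10.0.0.2                                  # root ns -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # namespace -> initiator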
00:17:49.684 [2024-04-16 12:45:48.661321] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:49.684 [2024-04-16 12:45:48.661333] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:49.684 [2024-04-16 12:45:48.661358] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:49.684 [2024-04-16 12:45:48.661446] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:49.684 [2024-04-16 12:45:48.661520] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:49.684 [2024-04-16 12:45:48.661613] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:49.684 [2024-04-16 12:45:48.661617] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:50.619 12:45:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:50.619 12:45:49 -- common/autotest_common.sh@850 -- # return 0 00:17:50.619 12:45:49 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:50.619 12:45:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:50.619 12:45:49 -- common/autotest_common.sh@10 -- # set +x 00:17:50.619 [2024-04-16 12:45:49.443361] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:50.619 12:45:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:50.619 12:45:49 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:17:50.619 12:45:49 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:50.619 12:45:49 -- common/autotest_common.sh@10 -- # set +x 00:17:50.619 12:45:49 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:50.619 12:45:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:50.619 12:45:49 -- common/autotest_common.sh@10 -- # set +x 00:17:50.619 Malloc0 00:17:50.619 12:45:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:50.619 12:45:49 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:50.619 12:45:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:50.619 12:45:49 -- common/autotest_common.sh@10 -- # set +x 00:17:50.619 12:45:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:50.619 12:45:49 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:17:50.619 12:45:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:50.619 12:45:49 -- common/autotest_common.sh@10 -- # set +x 00:17:50.619 12:45:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:50.619 12:45:49 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:50.619 12:45:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:50.619 12:45:49 -- common/autotest_common.sh@10 -- # set +x 00:17:50.619 [2024-04-16 12:45:49.524922] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:50.619 12:45:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:50.619 12:45:49 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:50.619 12:45:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:50.619 12:45:49 -- common/autotest_common.sh@10 -- # set +x 00:17:50.619 12:45:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:50.619 12:45:49 -- host/identify.sh@37 -- # 
rpc_cmd nvmf_get_subsystems 00:17:50.619 12:45:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:50.619 12:45:49 -- common/autotest_common.sh@10 -- # set +x 00:17:50.619 [2024-04-16 12:45:49.540634] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:17:50.619 [ 00:17:50.619 { 00:17:50.619 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:50.619 "subtype": "Discovery", 00:17:50.619 "listen_addresses": [ 00:17:50.619 { 00:17:50.619 "transport": "TCP", 00:17:50.619 "trtype": "TCP", 00:17:50.619 "adrfam": "IPv4", 00:17:50.619 "traddr": "10.0.0.2", 00:17:50.619 "trsvcid": "4420" 00:17:50.619 } 00:17:50.619 ], 00:17:50.619 "allow_any_host": true, 00:17:50.619 "hosts": [] 00:17:50.619 }, 00:17:50.619 { 00:17:50.619 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:50.619 "subtype": "NVMe", 00:17:50.619 "listen_addresses": [ 00:17:50.619 { 00:17:50.619 "transport": "TCP", 00:17:50.619 "trtype": "TCP", 00:17:50.619 "adrfam": "IPv4", 00:17:50.619 "traddr": "10.0.0.2", 00:17:50.619 "trsvcid": "4420" 00:17:50.619 } 00:17:50.619 ], 00:17:50.619 "allow_any_host": true, 00:17:50.619 "hosts": [], 00:17:50.619 "serial_number": "SPDK00000000000001", 00:17:50.619 "model_number": "SPDK bdev Controller", 00:17:50.619 "max_namespaces": 32, 00:17:50.619 "min_cntlid": 1, 00:17:50.619 "max_cntlid": 65519, 00:17:50.619 "namespaces": [ 00:17:50.619 { 00:17:50.619 "nsid": 1, 00:17:50.619 "bdev_name": "Malloc0", 00:17:50.619 "name": "Malloc0", 00:17:50.619 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:17:50.619 "eui64": "ABCDEF0123456789", 00:17:50.619 "uuid": "6bf55d89-2f4e-4bcc-8ae5-dce7496ce466" 00:17:50.619 } 00:17:50.619 ] 00:17:50.619 } 00:17:50.619 ] 00:17:50.619 12:45:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:50.619 12:45:49 -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:17:50.619 [2024-04-16 12:45:49.565795] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 
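Note: the subsystem dump above is the product of the rpc_cmd sequence traced before it. The same target can be built by hand with scripts/rpc.py against the test's RPC socket; a sketch under that assumption, with flags exactly as in the traces above:

rpc() { ./scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }
rpc nvmf_create_transport -t tcp -o -u 8192          # TCP transport, 8 KiB I/O unit size
rpc bdev_malloc_create 64 512 -b Malloc0             # 64 MiB RAM bdev, 512 B blocks
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
    --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
rpc nvmf_get_subsystems                              # emits the JSON shown above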
00:17:50.619 [2024-04-16 12:45:49.565837] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1219675 ] 00:17:50.619 EAL: No free 2048 kB hugepages reported on node 1 00:17:50.619 [2024-04-16 12:45:49.601893] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:17:50.619 [2024-04-16 12:45:49.601964] nvme_tcp.c:2326:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:17:50.619 [2024-04-16 12:45:49.601974] nvme_tcp.c:2330:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:17:50.619 [2024-04-16 12:45:49.601992] nvme_tcp.c:2348:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:17:50.619 [2024-04-16 12:45:49.602010] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:17:50.619 [2024-04-16 12:45:49.602392] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:17:50.619 [2024-04-16 12:45:49.602453] nvme_tcp.c:1543:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1bc35b0 0 00:17:50.619 [2024-04-16 12:45:49.608586] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:17:50.619 [2024-04-16 12:45:49.608606] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:17:50.619 [2024-04-16 12:45:49.608620] nvme_tcp.c:1589:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:17:50.619 [2024-04-16 12:45:49.608631] nvme_tcp.c:1590:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:17:50.619 [2024-04-16 12:45:49.608680] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.619 [2024-04-16 12:45:49.608692] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.619 [2024-04-16 12:45:49.608699] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1bc35b0) 00:17:50.619 [2024-04-16 12:45:49.608717] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:17:50.619 [2024-04-16 12:45:49.608742] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c23410, cid 0, qid 0 00:17:50.619 [2024-04-16 12:45:49.616583] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.619 [2024-04-16 12:45:49.616609] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.619 [2024-04-16 12:45:49.616617] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.619 [2024-04-16 12:45:49.616625] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c23410) on tqpair=0x1bc35b0 00:17:50.619 [2024-04-16 12:45:49.616645] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:17:50.619 [2024-04-16 12:45:49.616656] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:17:50.619 [2024-04-16 12:45:49.616665] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:17:50.619 [2024-04-16 12:45:49.616685] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.619 [2024-04-16 12:45:49.616693] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:17:50.619 [2024-04-16 12:45:49.616700] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1bc35b0) 00:17:50.619 [2024-04-16 12:45:49.616711] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.619 [2024-04-16 12:45:49.616735] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c23410, cid 0, qid 0 00:17:50.619 [2024-04-16 12:45:49.616949] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.619 [2024-04-16 12:45:49.616974] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.619 [2024-04-16 12:45:49.616980] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.619 [2024-04-16 12:45:49.616987] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c23410) on tqpair=0x1bc35b0 00:17:50.619 [2024-04-16 12:45:49.616998] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:17:50.620 [2024-04-16 12:45:49.617011] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:17:50.620 [2024-04-16 12:45:49.617024] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.620 [2024-04-16 12:45:49.617031] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.620 [2024-04-16 12:45:49.617037] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1bc35b0) 00:17:50.620 [2024-04-16 12:45:49.617047] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.620 [2024-04-16 12:45:49.617072] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c23410, cid 0, qid 0 00:17:50.620 [2024-04-16 12:45:49.617264] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.620 [2024-04-16 12:45:49.617279] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.620 [2024-04-16 12:45:49.617285] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.620 [2024-04-16 12:45:49.617292] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c23410) on tqpair=0x1bc35b0 00:17:50.620 [2024-04-16 12:45:49.617301] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:17:50.620 [2024-04-16 12:45:49.617315] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:17:50.620 [2024-04-16 12:45:49.617327] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.620 [2024-04-16 12:45:49.617334] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.620 [2024-04-16 12:45:49.617340] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1bc35b0) 00:17:50.620 [2024-04-16 12:45:49.617349] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.620 [2024-04-16 12:45:49.617370] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c23410, cid 0, qid 0 00:17:50.620 [2024-04-16 12:45:49.617500] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.620 [2024-04-16 
12:45:49.617515] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.620 [2024-04-16 12:45:49.617521] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.620 [2024-04-16 12:45:49.617528] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c23410) on tqpair=0x1bc35b0 00:17:50.620 [2024-04-16 12:45:49.617538] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:50.620 [2024-04-16 12:45:49.617576] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.620 [2024-04-16 12:45:49.617587] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.620 [2024-04-16 12:45:49.617593] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1bc35b0) 00:17:50.620 [2024-04-16 12:45:49.617604] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.620 [2024-04-16 12:45:49.617625] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c23410, cid 0, qid 0 00:17:50.620 [2024-04-16 12:45:49.617784] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.620 [2024-04-16 12:45:49.617796] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.620 [2024-04-16 12:45:49.617802] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.620 [2024-04-16 12:45:49.617809] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c23410) on tqpair=0x1bc35b0 00:17:50.620 [2024-04-16 12:45:49.617818] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:17:50.620 [2024-04-16 12:45:49.617827] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:17:50.620 [2024-04-16 12:45:49.617840] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:50.620 [2024-04-16 12:45:49.617950] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:17:50.620 [2024-04-16 12:45:49.617959] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:50.620 [2024-04-16 12:45:49.617972] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.620 [2024-04-16 12:45:49.617979] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.620 [2024-04-16 12:45:49.617989] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1bc35b0) 00:17:50.620 [2024-04-16 12:45:49.618000] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.620 [2024-04-16 12:45:49.618021] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c23410, cid 0, qid 0 00:17:50.620 [2024-04-16 12:45:49.618140] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.620 [2024-04-16 12:45:49.618155] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.620 [2024-04-16 12:45:49.618161] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:17:50.620 [2024-04-16 12:45:49.618168] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c23410) on tqpair=0x1bc35b0 00:17:50.620 [2024-04-16 12:45:49.618177] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:50.620 [2024-04-16 12:45:49.618193] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.620 [2024-04-16 12:45:49.618201] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.620 [2024-04-16 12:45:49.618208] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1bc35b0) 00:17:50.620 [2024-04-16 12:45:49.618218] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.620 [2024-04-16 12:45:49.618238] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c23410, cid 0, qid 0 00:17:50.620 [2024-04-16 12:45:49.618354] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.620 [2024-04-16 12:45:49.618369] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.620 [2024-04-16 12:45:49.618375] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.620 [2024-04-16 12:45:49.618381] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c23410) on tqpair=0x1bc35b0 00:17:50.620 [2024-04-16 12:45:49.618390] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:50.620 [2024-04-16 12:45:49.618398] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:17:50.620 [2024-04-16 12:45:49.618411] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:17:50.620 [2024-04-16 12:45:49.618429] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:17:50.620 [2024-04-16 12:45:49.618447] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.620 [2024-04-16 12:45:49.618456] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1bc35b0) 00:17:50.620 [2024-04-16 12:45:49.618466] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.620 [2024-04-16 12:45:49.618487] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c23410, cid 0, qid 0 00:17:50.620 [2024-04-16 12:45:49.618719] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:50.620 [2024-04-16 12:45:49.618735] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:50.620 [2024-04-16 12:45:49.618742] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:50.620 [2024-04-16 12:45:49.618748] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1bc35b0): datao=0, datal=4096, cccid=0 00:17:50.620 [2024-04-16 12:45:49.618756] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c23410) on tqpair(0x1bc35b0): expected_datao=0, payload_size=4096 00:17:50.620 [2024-04-16 12:45:49.618764] nvme_tcp.c: 
766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.620 [2024-04-16 12:45:49.618775] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:50.620 [2024-04-16 12:45:49.618782] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:50.620 [2024-04-16 12:45:49.618799] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.620 [2024-04-16 12:45:49.618809] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.620 [2024-04-16 12:45:49.618816] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.620 [2024-04-16 12:45:49.618822] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c23410) on tqpair=0x1bc35b0 00:17:50.620 [2024-04-16 12:45:49.618835] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:17:50.620 [2024-04-16 12:45:49.618844] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:17:50.620 [2024-04-16 12:45:49.618852] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:17:50.620 [2024-04-16 12:45:49.618860] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:17:50.620 [2024-04-16 12:45:49.618867] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:17:50.620 [2024-04-16 12:45:49.618875] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:17:50.620 [2024-04-16 12:45:49.618905] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:17:50.620 [2024-04-16 12:45:49.618917] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.620 [2024-04-16 12:45:49.618924] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.620 [2024-04-16 12:45:49.618930] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1bc35b0) 00:17:50.620 [2024-04-16 12:45:49.618941] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:50.620 [2024-04-16 12:45:49.618961] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c23410, cid 0, qid 0 00:17:50.620 [2024-04-16 12:45:49.619131] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.620 [2024-04-16 12:45:49.619142] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.620 [2024-04-16 12:45:49.619148] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.620 [2024-04-16 12:45:49.619155] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c23410) on tqpair=0x1bc35b0 00:17:50.620 [2024-04-16 12:45:49.619167] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.620 [2024-04-16 12:45:49.619174] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.621 [2024-04-16 12:45:49.619180] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1bc35b0) 00:17:50.621 [2024-04-16 12:45:49.619190] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 
cdw11:00000000 00:17:50.621 [2024-04-16 12:45:49.619200] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.621 [2024-04-16 12:45:49.619207] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.621 [2024-04-16 12:45:49.619212] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1bc35b0) 00:17:50.621 [2024-04-16 12:45:49.619221] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:50.621 [2024-04-16 12:45:49.619230] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.621 [2024-04-16 12:45:49.619237] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.621 [2024-04-16 12:45:49.619243] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1bc35b0) 00:17:50.621 [2024-04-16 12:45:49.619251] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:50.621 [2024-04-16 12:45:49.619260] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.621 [2024-04-16 12:45:49.619266] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.621 [2024-04-16 12:45:49.619276] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bc35b0) 00:17:50.621 [2024-04-16 12:45:49.619285] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:50.621 [2024-04-16 12:45:49.619294] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:17:50.621 [2024-04-16 12:45:49.619312] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:50.621 [2024-04-16 12:45:49.619324] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.621 [2024-04-16 12:45:49.619331] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1bc35b0) 00:17:50.621 [2024-04-16 12:45:49.619341] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.621 [2024-04-16 12:45:49.619363] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c23410, cid 0, qid 0 00:17:50.621 [2024-04-16 12:45:49.619373] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c23570, cid 1, qid 0 00:17:50.621 [2024-04-16 12:45:49.619381] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c236d0, cid 2, qid 0 00:17:50.621 [2024-04-16 12:45:49.619388] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c23830, cid 3, qid 0 00:17:50.621 [2024-04-16 12:45:49.619395] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c23990, cid 4, qid 0 00:17:50.621 [2024-04-16 12:45:49.619603] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.621 [2024-04-16 12:45:49.619619] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.621 [2024-04-16 12:45:49.619626] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.621 [2024-04-16 12:45:49.619633] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c23990) on tqpair=0x1bc35b0 
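Note: at this point the trace has completed the standard fabrics bring-up: FABRIC CONNECT, VS/CAP property reads, CC.EN cleared with CSTS.RDY polled to 0, CC.EN set with CSTS.RDY polled to 1, IDENTIFY, keep-alive timeout, and AER configuration. The GET LOG PAGE commands that follow (cdw10 low byte 0x70) fetch the discovery log page. The same exchange can be driven from the kernel initiator; for comparison, a hypothetical nvme-cli invocation that was not part of this run:

# As run in this log, via SPDK's identify tool:
./build/bin/spdk_nvme_identify \
    -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all
# Hypothetical kernel-initiator equivalent (nvme-cli; not executed here):
nvme discover -t tcp -a 10.0.0.2 -s 4420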
00:17:50.621 [2024-04-16 12:45:49.619643] nvme_ctrlr.c:2902:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:17:50.621 [2024-04-16 12:45:49.619652] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:17:50.621 [2024-04-16 12:45:49.619669] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.621 [2024-04-16 12:45:49.619679] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1bc35b0) 00:17:50.621 [2024-04-16 12:45:49.619690] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.621 [2024-04-16 12:45:49.619711] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c23990, cid 4, qid 0 00:17:50.621 [2024-04-16 12:45:49.619899] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:50.621 [2024-04-16 12:45:49.619913] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:50.621 [2024-04-16 12:45:49.619920] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:50.621 [2024-04-16 12:45:49.619926] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1bc35b0): datao=0, datal=4096, cccid=4 00:17:50.621 [2024-04-16 12:45:49.619933] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c23990) on tqpair(0x1bc35b0): expected_datao=0, payload_size=4096 00:17:50.621 [2024-04-16 12:45:49.619941] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.621 [2024-04-16 12:45:49.619950] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:50.621 [2024-04-16 12:45:49.619958] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:50.621 [2024-04-16 12:45:49.619987] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.621 [2024-04-16 12:45:49.619997] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.621 [2024-04-16 12:45:49.620003] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.621 [2024-04-16 12:45:49.620014] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c23990) on tqpair=0x1bc35b0 00:17:50.621 [2024-04-16 12:45:49.620033] nvme_ctrlr.c:4036:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:17:50.621 [2024-04-16 12:45:49.620063] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.621 [2024-04-16 12:45:49.620072] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1bc35b0) 00:17:50.621 [2024-04-16 12:45:49.620083] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.621 [2024-04-16 12:45:49.620094] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.621 [2024-04-16 12:45:49.620101] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.621 [2024-04-16 12:45:49.620107] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1bc35b0) 00:17:50.621 [2024-04-16 12:45:49.620115] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:17:50.621 [2024-04-16 12:45:49.620141] nvme_tcp.c: 
923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c23990, cid 4, qid 0 00:17:50.621 [2024-04-16 12:45:49.620152] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c23af0, cid 5, qid 0 00:17:50.621 [2024-04-16 12:45:49.620352] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:50.621 [2024-04-16 12:45:49.620363] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:50.621 [2024-04-16 12:45:49.620370] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:50.621 [2024-04-16 12:45:49.620376] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1bc35b0): datao=0, datal=1024, cccid=4 00:17:50.621 [2024-04-16 12:45:49.620383] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c23990) on tqpair(0x1bc35b0): expected_datao=0, payload_size=1024 00:17:50.621 [2024-04-16 12:45:49.620390] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.621 [2024-04-16 12:45:49.620399] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:50.621 [2024-04-16 12:45:49.620406] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:50.621 [2024-04-16 12:45:49.620414] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.621 [2024-04-16 12:45:49.620422] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.621 [2024-04-16 12:45:49.620428] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.621 [2024-04-16 12:45:49.620435] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c23af0) on tqpair=0x1bc35b0 00:17:50.621 [2024-04-16 12:45:49.663594] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.621 [2024-04-16 12:45:49.663613] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.621 [2024-04-16 12:45:49.663620] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.621 [2024-04-16 12:45:49.663627] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c23990) on tqpair=0x1bc35b0 00:17:50.621 [2024-04-16 12:45:49.663651] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.621 [2024-04-16 12:45:49.663661] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1bc35b0) 00:17:50.621 [2024-04-16 12:45:49.663672] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.621 [2024-04-16 12:45:49.663703] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c23990, cid 4, qid 0 00:17:50.621 [2024-04-16 12:45:49.663853] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:50.621 [2024-04-16 12:45:49.663869] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:50.621 [2024-04-16 12:45:49.663875] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:50.621 [2024-04-16 12:45:49.663882] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1bc35b0): datao=0, datal=3072, cccid=4 00:17:50.621 [2024-04-16 12:45:49.663904] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c23990) on tqpair(0x1bc35b0): expected_datao=0, payload_size=3072 00:17:50.621 [2024-04-16 12:45:49.663916] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.621 [2024-04-16 12:45:49.663926] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 
00:17:50.621 [2024-04-16 12:45:49.663933] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:50.621 [2024-04-16 12:45:49.663958] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.621 [2024-04-16 12:45:49.663968] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.621 [2024-04-16 12:45:49.663974] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.621 [2024-04-16 12:45:49.663981] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c23990) on tqpair=0x1bc35b0 00:17:50.621 [2024-04-16 12:45:49.663996] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.621 [2024-04-16 12:45:49.664004] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1bc35b0) 00:17:50.621 [2024-04-16 12:45:49.664015] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.621 [2024-04-16 12:45:49.664042] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c23990, cid 4, qid 0 00:17:50.621 [2024-04-16 12:45:49.664187] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:50.621 [2024-04-16 12:45:49.664198] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:50.621 [2024-04-16 12:45:49.664204] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:50.621 [2024-04-16 12:45:49.664210] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1bc35b0): datao=0, datal=8, cccid=4 00:17:50.621 [2024-04-16 12:45:49.664217] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c23990) on tqpair(0x1bc35b0): expected_datao=0, payload_size=8 00:17:50.621 [2024-04-16 12:45:49.664224] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.621 [2024-04-16 12:45:49.664234] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:50.622 [2024-04-16 12:45:49.664241] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:50.883 [2024-04-16 12:45:49.707579] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.883 [2024-04-16 12:45:49.707598] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.883 [2024-04-16 12:45:49.707605] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.883 [2024-04-16 12:45:49.707612] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c23990) on tqpair=0x1bc35b0 00:17:50.883 ===================================================== 00:17:50.883 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:17:50.883 ===================================================== 00:17:50.883 Controller Capabilities/Features 00:17:50.883 ================================ 00:17:50.883 Vendor ID: 0000 00:17:50.883 Subsystem Vendor ID: 0000 00:17:50.883 Serial Number: .................... 00:17:50.883 Model Number: ........................................ 
00:17:50.883 Firmware Version: 24.05 00:17:50.883 Recommended Arb Burst: 0 00:17:50.883 IEEE OUI Identifier: 00 00 00 00:17:50.883 Multi-path I/O 00:17:50.883 May have multiple subsystem ports: No 00:17:50.883 May have multiple controllers: No 00:17:50.883 Associated with SR-IOV VF: No 00:17:50.883 Max Data Transfer Size: 131072 00:17:50.883 Max Number of Namespaces: 0 00:17:50.883 Max Number of I/O Queues: 1024 00:17:50.883 NVMe Specification Version (VS): 1.3 00:17:50.883 NVMe Specification Version (Identify): 1.3 00:17:50.883 Maximum Queue Entries: 128 00:17:50.883 Contiguous Queues Required: Yes 00:17:50.883 Arbitration Mechanisms Supported 00:17:50.883 Weighted Round Robin: Not Supported 00:17:50.883 Vendor Specific: Not Supported 00:17:50.883 Reset Timeout: 15000 ms 00:17:50.883 Doorbell Stride: 4 bytes 00:17:50.883 NVM Subsystem Reset: Not Supported 00:17:50.883 Command Sets Supported 00:17:50.883 NVM Command Set: Supported 00:17:50.883 Boot Partition: Not Supported 00:17:50.883 Memory Page Size Minimum: 4096 bytes 00:17:50.883 Memory Page Size Maximum: 4096 bytes 00:17:50.883 Persistent Memory Region: Not Supported 00:17:50.883 Optional Asynchronous Events Supported 00:17:50.883 Namespace Attribute Notices: Not Supported 00:17:50.883 Firmware Activation Notices: Not Supported 00:17:50.883 ANA Change Notices: Not Supported 00:17:50.883 PLE Aggregate Log Change Notices: Not Supported 00:17:50.883 LBA Status Info Alert Notices: Not Supported 00:17:50.883 EGE Aggregate Log Change Notices: Not Supported 00:17:50.883 Normal NVM Subsystem Shutdown event: Not Supported 00:17:50.883 Zone Descriptor Change Notices: Not Supported 00:17:50.883 Discovery Log Change Notices: Supported 00:17:50.883 Controller Attributes 00:17:50.883 128-bit Host Identifier: Not Supported 00:17:50.883 Non-Operational Permissive Mode: Not Supported 00:17:50.883 NVM Sets: Not Supported 00:17:50.883 Read Recovery Levels: Not Supported 00:17:50.883 Endurance Groups: Not Supported 00:17:50.883 Predictable Latency Mode: Not Supported 00:17:50.883 Traffic Based Keep ALive: Not Supported 00:17:50.883 Namespace Granularity: Not Supported 00:17:50.883 SQ Associations: Not Supported 00:17:50.883 UUID List: Not Supported 00:17:50.883 Multi-Domain Subsystem: Not Supported 00:17:50.883 Fixed Capacity Management: Not Supported 00:17:50.883 Variable Capacity Management: Not Supported 00:17:50.883 Delete Endurance Group: Not Supported 00:17:50.883 Delete NVM Set: Not Supported 00:17:50.883 Extended LBA Formats Supported: Not Supported 00:17:50.883 Flexible Data Placement Supported: Not Supported 00:17:50.883 00:17:50.883 Controller Memory Buffer Support 00:17:50.883 ================================ 00:17:50.883 Supported: No 00:17:50.883 00:17:50.883 Persistent Memory Region Support 00:17:50.883 ================================ 00:17:50.883 Supported: No 00:17:50.883 00:17:50.883 Admin Command Set Attributes 00:17:50.883 ============================ 00:17:50.883 Security Send/Receive: Not Supported 00:17:50.883 Format NVM: Not Supported 00:17:50.883 Firmware Activate/Download: Not Supported 00:17:50.883 Namespace Management: Not Supported 00:17:50.883 Device Self-Test: Not Supported 00:17:50.883 Directives: Not Supported 00:17:50.883 NVMe-MI: Not Supported 00:17:50.883 Virtualization Management: Not Supported 00:17:50.883 Doorbell Buffer Config: Not Supported 00:17:50.883 Get LBA Status Capability: Not Supported 00:17:50.883 Command & Feature Lockdown Capability: Not Supported 00:17:50.883 Abort Command Limit: 1 00:17:50.883 Async 
Event Request Limit: 4 00:17:50.883 Number of Firmware Slots: N/A 00:17:50.883 Firmware Slot 1 Read-Only: N/A 00:17:50.883 Firmware Activation Without Reset: N/A 00:17:50.883 Multiple Update Detection Support: N/A 00:17:50.883 Firmware Update Granularity: No Information Provided 00:17:50.883 Per-Namespace SMART Log: No 00:17:50.883 Asymmetric Namespace Access Log Page: Not Supported 00:17:50.883 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:17:50.883 Command Effects Log Page: Not Supported 00:17:50.883 Get Log Page Extended Data: Supported 00:17:50.883 Telemetry Log Pages: Not Supported 00:17:50.883 Persistent Event Log Pages: Not Supported 00:17:50.883 Supported Log Pages Log Page: May Support 00:17:50.883 Commands Supported & Effects Log Page: Not Supported 00:17:50.883 Feature Identifiers & Effects Log Page:May Support 00:17:50.883 NVMe-MI Commands & Effects Log Page: May Support 00:17:50.883 Data Area 4 for Telemetry Log: Not Supported 00:17:50.883 Error Log Page Entries Supported: 128 00:17:50.883 Keep Alive: Not Supported 00:17:50.883 00:17:50.883 NVM Command Set Attributes 00:17:50.883 ========================== 00:17:50.883 Submission Queue Entry Size 00:17:50.883 Max: 1 00:17:50.883 Min: 1 00:17:50.883 Completion Queue Entry Size 00:17:50.883 Max: 1 00:17:50.883 Min: 1 00:17:50.883 Number of Namespaces: 0 00:17:50.883 Compare Command: Not Supported 00:17:50.883 Write Uncorrectable Command: Not Supported 00:17:50.883 Dataset Management Command: Not Supported 00:17:50.883 Write Zeroes Command: Not Supported 00:17:50.883 Set Features Save Field: Not Supported 00:17:50.883 Reservations: Not Supported 00:17:50.883 Timestamp: Not Supported 00:17:50.883 Copy: Not Supported 00:17:50.883 Volatile Write Cache: Not Present 00:17:50.883 Atomic Write Unit (Normal): 1 00:17:50.883 Atomic Write Unit (PFail): 1 00:17:50.883 Atomic Compare & Write Unit: 1 00:17:50.883 Fused Compare & Write: Supported 00:17:50.883 Scatter-Gather List 00:17:50.883 SGL Command Set: Supported 00:17:50.883 SGL Keyed: Supported 00:17:50.883 SGL Bit Bucket Descriptor: Not Supported 00:17:50.883 SGL Metadata Pointer: Not Supported 00:17:50.883 Oversized SGL: Not Supported 00:17:50.883 SGL Metadata Address: Not Supported 00:17:50.883 SGL Offset: Supported 00:17:50.883 Transport SGL Data Block: Not Supported 00:17:50.883 Replay Protected Memory Block: Not Supported 00:17:50.883 00:17:50.883 Firmware Slot Information 00:17:50.883 ========================= 00:17:50.883 Active slot: 0 00:17:50.883 00:17:50.883 00:17:50.883 Error Log 00:17:50.883 ========= 00:17:50.883 00:17:50.883 Active Namespaces 00:17:50.883 ================= 00:17:50.883 Discovery Log Page 00:17:50.883 ================== 00:17:50.883 Generation Counter: 2 00:17:50.883 Number of Records: 2 00:17:50.883 Record Format: 0 00:17:50.883 00:17:50.883 Discovery Log Entry 0 00:17:50.883 ---------------------- 00:17:50.883 Transport Type: 3 (TCP) 00:17:50.884 Address Family: 1 (IPv4) 00:17:50.884 Subsystem Type: 3 (Current Discovery Subsystem) 00:17:50.884 Entry Flags: 00:17:50.884 Duplicate Returned Information: 1 00:17:50.884 Explicit Persistent Connection Support for Discovery: 1 00:17:50.884 Transport Requirements: 00:17:50.884 Secure Channel: Not Required 00:17:50.884 Port ID: 0 (0x0000) 00:17:50.884 Controller ID: 65535 (0xffff) 00:17:50.884 Admin Max SQ Size: 128 00:17:50.884 Transport Service Identifier: 4420 00:17:50.884 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:17:50.884 Transport Address: 10.0.0.2 00:17:50.884 
Discovery Log Entry 1 00:17:50.884 ---------------------- 00:17:50.884 Transport Type: 3 (TCP) 00:17:50.884 Address Family: 1 (IPv4) 00:17:50.884 Subsystem Type: 2 (NVM Subsystem) 00:17:50.884 Entry Flags: 00:17:50.884 Duplicate Returned Information: 0 00:17:50.884 Explicit Persistent Connection Support for Discovery: 0 00:17:50.884 Transport Requirements: 00:17:50.884 Secure Channel: Not Required 00:17:50.884 Port ID: 0 (0x0000) 00:17:50.884 Controller ID: 65535 (0xffff) 00:17:50.884 Admin Max SQ Size: 128 00:17:50.884 Transport Service Identifier: 4420 00:17:50.884 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:17:50.884 Transport Address: 10.0.0.2 [2024-04-16 12:45:49.707741] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:17:50.884 [2024-04-16 12:45:49.707765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.884 [2024-04-16 12:45:49.707783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.884 [2024-04-16 12:45:49.707793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.884 [2024-04-16 12:45:49.707802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.884 [2024-04-16 12:45:49.707815] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.884 [2024-04-16 12:45:49.707823] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.884 [2024-04-16 12:45:49.707829] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bc35b0) 00:17:50.884 [2024-04-16 12:45:49.707840] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.884 [2024-04-16 12:45:49.707866] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c23830, cid 3, qid 0 00:17:50.884 [2024-04-16 12:45:49.708058] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.884 [2024-04-16 12:45:49.708073] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.884 [2024-04-16 12:45:49.708083] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.884 [2024-04-16 12:45:49.708090] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c23830) on tqpair=0x1bc35b0 00:17:50.884 [2024-04-16 12:45:49.708103] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.884 [2024-04-16 12:45:49.708110] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.884 [2024-04-16 12:45:49.708116] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bc35b0) 00:17:50.884 [2024-04-16 12:45:49.708126] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.884 [2024-04-16 12:45:49.708152] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c23830, cid 3, qid 0 00:17:50.884 [2024-04-16 12:45:49.708371] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.884 [2024-04-16 12:45:49.708385] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.884 [2024-04-16 12:45:49.708392] 
nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.884 [2024-04-16 12:45:49.708398] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c23830) on tqpair=0x1bc35b0 00:17:50.884 [2024-04-16 12:45:49.708407] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:17:50.884 [2024-04-16 12:45:49.708415] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:17:50.884 [2024-04-16 12:45:49.708431] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.884 [2024-04-16 12:45:49.708439] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.884 [2024-04-16 12:45:49.708445] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bc35b0) 00:17:50.884 [2024-04-16 12:45:49.708455] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.884 [2024-04-16 12:45:49.708475] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c23830, cid 3, qid 0 00:17:50.884 [2024-04-16 12:45:49.708708] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.884 [2024-04-16 12:45:49.708722] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.884 [2024-04-16 12:45:49.708729] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.884 [2024-04-16 12:45:49.708735] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c23830) on tqpair=0x1bc35b0 00:17:50.884 [2024-04-16 12:45:49.708754] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.884 [2024-04-16 12:45:49.708762] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.884 [2024-04-16 12:45:49.708769] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bc35b0) 00:17:50.884 [2024-04-16 12:45:49.708779] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.884 [2024-04-16 12:45:49.708811] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c23830, cid 3, qid 0 00:17:50.884 [2024-04-16 12:45:49.709012] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.884 [2024-04-16 12:45:49.709024] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.884 [2024-04-16 12:45:49.709031] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.884 [2024-04-16 12:45:49.709037] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c23830) on tqpair=0x1bc35b0 00:17:50.884 [2024-04-16 12:45:49.709053] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.884 [2024-04-16 12:45:49.709062] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.884 [2024-04-16 12:45:49.709068] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bc35b0) 00:17:50.884 [2024-04-16 12:45:49.709078] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.884 [2024-04-16 12:45:49.709097] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c23830, cid 3, qid 0 00:17:50.884 [2024-04-16 12:45:49.709220] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.884 [2024-04-16 
12:45:49.709232] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.884 [2024-04-16 12:45:49.709238] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.884 [2024-04-16 12:45:49.709245] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c23830) on tqpair=0x1bc35b0 00:17:50.884 [2024-04-16 12:45:49.709261] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.884 [2024-04-16 12:45:49.709269] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.884 [2024-04-16 12:45:49.709276] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bc35b0) 00:17:50.884 [2024-04-16 12:45:49.709285] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.884 [2024-04-16 12:45:49.709305] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c23830, cid 3, qid 0 00:17:50.884 [2024-04-16 12:45:49.709430] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.884 [2024-04-16 12:45:49.709441] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.884 [2024-04-16 12:45:49.709448] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.884 [2024-04-16 12:45:49.709454] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c23830) on tqpair=0x1bc35b0 00:17:50.884 [2024-04-16 12:45:49.709470] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.884 [2024-04-16 12:45:49.709479] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.884 [2024-04-16 12:45:49.709485] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bc35b0) 00:17:50.884 [2024-04-16 12:45:49.709494] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.884 [2024-04-16 12:45:49.709514] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c23830, cid 3, qid 0 00:17:50.884 [2024-04-16 12:45:49.709656] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.884 [2024-04-16 12:45:49.709672] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.884 [2024-04-16 12:45:49.709679] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.884 [2024-04-16 12:45:49.709685] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c23830) on tqpair=0x1bc35b0 00:17:50.884 [2024-04-16 12:45:49.709703] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.884 [2024-04-16 12:45:49.709712] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.884 [2024-04-16 12:45:49.709718] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bc35b0) 00:17:50.884 [2024-04-16 12:45:49.709728] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.884 [2024-04-16 12:45:49.709749] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c23830, cid 3, qid 0 00:17:50.884 [2024-04-16 12:45:49.709938] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.884 [2024-04-16 12:45:49.709953] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.884 [2024-04-16 12:45:49.709960] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:17:50.884 [2024-04-16 12:45:49.709966] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c23830) on tqpair=0x1bc35b0 00:17:50.884 [2024-04-16 12:45:49.709984] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.884 [2024-04-16 12:45:49.709993] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.884 [2024-04-16 12:45:49.709999] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bc35b0) 00:17:50.884 [2024-04-16 12:45:49.710009] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.884 [2024-04-16 12:45:49.710029] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c23830, cid 3, qid 0 00:17:50.884 [2024-04-16 12:45:49.710153] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.884 [2024-04-16 12:45:49.710165] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.885 [2024-04-16 12:45:49.710172] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.885 [2024-04-16 12:45:49.710178] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c23830) on tqpair=0x1bc35b0 00:17:50.885 [2024-04-16 12:45:49.710210] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.885 [2024-04-16 12:45:49.710219] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.885 [2024-04-16 12:45:49.710225] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bc35b0) 00:17:50.885 [2024-04-16 12:45:49.710235] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.885 [2024-04-16 12:45:49.710255] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c23830, cid 3, qid 0 00:17:50.885 [2024-04-16 12:45:49.710457] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.885 [2024-04-16 12:45:49.710468] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.885 [2024-04-16 12:45:49.710475] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.885 [2024-04-16 12:45:49.710481] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c23830) on tqpair=0x1bc35b0 00:17:50.885 [2024-04-16 12:45:49.710497] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.885 [2024-04-16 12:45:49.710506] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.885 [2024-04-16 12:45:49.710512] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bc35b0) 00:17:50.885 [2024-04-16 12:45:49.710521] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.885 [2024-04-16 12:45:49.710541] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c23830, cid 3, qid 0 00:17:50.885 [2024-04-16 12:45:49.710678] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.885 [2024-04-16 12:45:49.710693] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.885 [2024-04-16 12:45:49.710700] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.885 [2024-04-16 12:45:49.710706] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c23830) on tqpair=0x1bc35b0 00:17:50.885 [2024-04-16 12:45:49.710724] 
nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.885 [2024-04-16 12:45:49.710733] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.885 [2024-04-16 12:45:49.710739] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bc35b0) 00:17:50.885 [2024-04-16 12:45:49.710750] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.885 [2024-04-16 12:45:49.710771] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c23830, cid 3, qid 0 00:17:50.885 [2024-04-16 12:45:49.710923] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.885 [2024-04-16 12:45:49.710935] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.885 [2024-04-16 12:45:49.710941] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.885 [2024-04-16 12:45:49.710948] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c23830) on tqpair=0x1bc35b0 00:17:50.885 [2024-04-16 12:45:49.710965] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.885 [2024-04-16 12:45:49.710973] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.885 [2024-04-16 12:45:49.710979] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bc35b0) 00:17:50.885 [2024-04-16 12:45:49.710989] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.885 [2024-04-16 12:45:49.711009] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c23830, cid 3, qid 0 00:17:50.885 [2024-04-16 12:45:49.711172] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.885 [2024-04-16 12:45:49.711186] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.885 [2024-04-16 12:45:49.711193] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.885 [2024-04-16 12:45:49.711199] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c23830) on tqpair=0x1bc35b0 00:17:50.885 [2024-04-16 12:45:49.711216] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.885 [2024-04-16 12:45:49.711224] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.885 [2024-04-16 12:45:49.711230] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bc35b0) 00:17:50.885 [2024-04-16 12:45:49.711240] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.885 [2024-04-16 12:45:49.711260] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c23830, cid 3, qid 0 00:17:50.885 [2024-04-16 12:45:49.711373] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.885 [2024-04-16 12:45:49.711387] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.885 [2024-04-16 12:45:49.711393] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.885 [2024-04-16 12:45:49.711400] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c23830) on tqpair=0x1bc35b0 00:17:50.885 [2024-04-16 12:45:49.711417] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.885 [2024-04-16 12:45:49.711426] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.885 [2024-04-16 
12:45:49.711432] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bc35b0) 00:17:50.885 [2024-04-16 12:45:49.711441] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.885 [2024-04-16 12:45:49.711461] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c23830, cid 3, qid 0 00:17:50.885 [2024-04-16 12:45:49.711593] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.885 [2024-04-16 12:45:49.711606] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.885 [2024-04-16 12:45:49.711613] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.885 [2024-04-16 12:45:49.711620] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c23830) on tqpair=0x1bc35b0 00:17:50.885 [2024-04-16 12:45:49.711637] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.885 [2024-04-16 12:45:49.711646] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.885 [2024-04-16 12:45:49.711652] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bc35b0) 00:17:50.885 [2024-04-16 12:45:49.711662] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.885 [2024-04-16 12:45:49.711684] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c23830, cid 3, qid 0 00:17:50.885 [2024-04-16 12:45:49.711810] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.885 [2024-04-16 12:45:49.711824] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.885 [2024-04-16 12:45:49.711831] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.885 [2024-04-16 12:45:49.711837] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c23830) on tqpair=0x1bc35b0 00:17:50.885 [2024-04-16 12:45:49.711871] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.885 [2024-04-16 12:45:49.711879] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.885 [2024-04-16 12:45:49.711885] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bc35b0) 00:17:50.885 [2024-04-16 12:45:49.711895] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.885 [2024-04-16 12:45:49.711915] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c23830, cid 3, qid 0 00:17:50.885 [2024-04-16 12:45:49.712050] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.885 [2024-04-16 12:45:49.712061] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.885 [2024-04-16 12:45:49.712071] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.885 [2024-04-16 12:45:49.712077] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c23830) on tqpair=0x1bc35b0 00:17:50.885 [2024-04-16 12:45:49.712094] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.885 [2024-04-16 12:45:49.712102] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.885 [2024-04-16 12:45:49.712109] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bc35b0) 00:17:50.885 [2024-04-16 12:45:49.712118] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.885 [2024-04-16 12:45:49.712138] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c23830, cid 3, qid 0 00:17:50.885 [2024-04-16 12:45:49.712251] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.885 [2024-04-16 12:45:49.712265] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.885 [2024-04-16 12:45:49.712272] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.885 [2024-04-16 12:45:49.712278] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c23830) on tqpair=0x1bc35b0 00:17:50.885 [2024-04-16 12:45:49.712295] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.885 [2024-04-16 12:45:49.712303] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.885 [2024-04-16 12:45:49.712309] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bc35b0) 00:17:50.885 [2024-04-16 12:45:49.712319] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.885 [2024-04-16 12:45:49.712339] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c23830, cid 3, qid 0 00:17:50.885 [2024-04-16 12:45:49.712449] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.885 [2024-04-16 12:45:49.712463] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.885 [2024-04-16 12:45:49.712470] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.885 [2024-04-16 12:45:49.712476] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c23830) on tqpair=0x1bc35b0 00:17:50.885 [2024-04-16 12:45:49.712493] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.885 [2024-04-16 12:45:49.712501] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.885 [2024-04-16 12:45:49.712507] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bc35b0) 00:17:50.885 [2024-04-16 12:45:49.712517] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.885 [2024-04-16 12:45:49.712537] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c23830, cid 3, qid 0 00:17:50.885 [2024-04-16 12:45:49.716577] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.885 [2024-04-16 12:45:49.716594] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.885 [2024-04-16 12:45:49.716601] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.885 [2024-04-16 12:45:49.716608] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c23830) on tqpair=0x1bc35b0 00:17:50.885 [2024-04-16 12:45:49.716629] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.885 [2024-04-16 12:45:49.716639] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.885 [2024-04-16 12:45:49.716645] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bc35b0) 00:17:50.885 [2024-04-16 12:45:49.716657] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.885 [2024-04-16 12:45:49.716680] nvme_tcp.c: 
923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c23830, cid 3, qid 0 00:17:50.886 [2024-04-16 12:45:49.716863] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.886 [2024-04-16 12:45:49.716875] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.886 [2024-04-16 12:45:49.716882] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.886 [2024-04-16 12:45:49.716892] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c23830) on tqpair=0x1bc35b0 00:17:50.886 [2024-04-16 12:45:49.716908] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 8 milliseconds 00:17:50.886 00:17:50.886 12:45:49 -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:17:50.886 [2024-04-16 12:45:49.750210] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 00:17:50.886 [2024-04-16 12:45:49.750257] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1219787 ] 00:17:50.886 EAL: No free 2048 kB hugepages reported on node 1 00:17:50.886 [2024-04-16 12:45:49.785415] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:17:50.886 [2024-04-16 12:45:49.785464] nvme_tcp.c:2326:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:17:50.886 [2024-04-16 12:45:49.785473] nvme_tcp.c:2330:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:17:50.886 [2024-04-16 12:45:49.785486] nvme_tcp.c:2348:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:17:50.886 [2024-04-16 12:45:49.785498] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:17:50.886 [2024-04-16 12:45:49.785809] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:17:50.886 [2024-04-16 12:45:49.785853] nvme_tcp.c:1543:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x8e65b0 0 00:17:50.886 [2024-04-16 12:45:49.792576] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:17:50.886 [2024-04-16 12:45:49.792596] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:17:50.886 [2024-04-16 12:45:49.792603] nvme_tcp.c:1589:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:17:50.886 [2024-04-16 12:45:49.792610] nvme_tcp.c:1590:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:17:50.886 [2024-04-16 12:45:49.792653] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.886 [2024-04-16 12:45:49.792664] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.886 [2024-04-16 12:45:49.792670] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8e65b0) 00:17:50.886 [2024-04-16 12:45:49.792684] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:17:50.886 [2024-04-16 12:45:49.792715] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x946410, cid 0, qid 0 00:17:50.886 [2024-04-16 12:45:49.800577] 
nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.886 [2024-04-16 12:45:49.800604] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.886 [2024-04-16 12:45:49.800612] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.886 [2024-04-16 12:45:49.800618] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x946410) on tqpair=0x8e65b0 00:17:50.886 [2024-04-16 12:45:49.800637] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:17:50.886 [2024-04-16 12:45:49.800649] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:17:50.886 [2024-04-16 12:45:49.800658] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:17:50.886 [2024-04-16 12:45:49.800677] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.886 [2024-04-16 12:45:49.800686] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.886 [2024-04-16 12:45:49.800696] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8e65b0) 00:17:50.886 [2024-04-16 12:45:49.800708] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.886 [2024-04-16 12:45:49.800732] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x946410, cid 0, qid 0 00:17:50.886 [2024-04-16 12:45:49.800891] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.886 [2024-04-16 12:45:49.800922] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.886 [2024-04-16 12:45:49.800928] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.886 [2024-04-16 12:45:49.800934] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x946410) on tqpair=0x8e65b0 00:17:50.886 [2024-04-16 12:45:49.800942] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:17:50.886 [2024-04-16 12:45:49.800956] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:17:50.886 [2024-04-16 12:45:49.800968] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.886 [2024-04-16 12:45:49.800975] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.886 [2024-04-16 12:45:49.800981] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8e65b0) 00:17:50.886 [2024-04-16 12:45:49.800991] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.886 [2024-04-16 12:45:49.801012] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x946410, cid 0, qid 0 00:17:50.886 [2024-04-16 12:45:49.801157] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.886 [2024-04-16 12:45:49.801172] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.886 [2024-04-16 12:45:49.801178] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.886 [2024-04-16 12:45:49.801184] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x946410) on tqpair=0x8e65b0 00:17:50.886 [2024-04-16 12:45:49.801192] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] 
setting state to check en (no timeout) 00:17:50.886 [2024-04-16 12:45:49.801206] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:17:50.886 [2024-04-16 12:45:49.801217] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.886 [2024-04-16 12:45:49.801224] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.886 [2024-04-16 12:45:49.801230] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8e65b0) 00:17:50.886 [2024-04-16 12:45:49.801240] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.886 [2024-04-16 12:45:49.801260] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x946410, cid 0, qid 0 00:17:50.886 [2024-04-16 12:45:49.801383] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.886 [2024-04-16 12:45:49.801395] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.886 [2024-04-16 12:45:49.801401] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.886 [2024-04-16 12:45:49.801407] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x946410) on tqpair=0x8e65b0 00:17:50.886 [2024-04-16 12:45:49.801416] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:50.886 [2024-04-16 12:45:49.801431] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.886 [2024-04-16 12:45:49.801440] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.886 [2024-04-16 12:45:49.801446] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8e65b0) 00:17:50.886 [2024-04-16 12:45:49.801456] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.886 [2024-04-16 12:45:49.801476] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x946410, cid 0, qid 0 00:17:50.886 [2024-04-16 12:45:49.801614] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.886 [2024-04-16 12:45:49.801630] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.886 [2024-04-16 12:45:49.801637] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.886 [2024-04-16 12:45:49.801644] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x946410) on tqpair=0x8e65b0 00:17:50.886 [2024-04-16 12:45:49.801651] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:17:50.886 [2024-04-16 12:45:49.801660] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:17:50.886 [2024-04-16 12:45:49.801673] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:50.886 [2024-04-16 12:45:49.801782] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:17:50.886 [2024-04-16 12:45:49.801790] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:50.886 [2024-04-16 
12:45:49.801801] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.886 [2024-04-16 12:45:49.801808] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.886 [2024-04-16 12:45:49.801814] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8e65b0) 00:17:50.886 [2024-04-16 12:45:49.801825] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.886 [2024-04-16 12:45:49.801846] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x946410, cid 0, qid 0 00:17:50.886 [2024-04-16 12:45:49.802037] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.886 [2024-04-16 12:45:49.802049] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.886 [2024-04-16 12:45:49.802057] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.886 [2024-04-16 12:45:49.802065] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x946410) on tqpair=0x8e65b0 00:17:50.886 [2024-04-16 12:45:49.802073] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:50.886 [2024-04-16 12:45:49.802089] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.886 [2024-04-16 12:45:49.802098] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.886 [2024-04-16 12:45:49.802104] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8e65b0) 00:17:50.886 [2024-04-16 12:45:49.802115] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.886 [2024-04-16 12:45:49.802135] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x946410, cid 0, qid 0 00:17:50.886 [2024-04-16 12:45:49.802291] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.886 [2024-04-16 12:45:49.802305] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.886 [2024-04-16 12:45:49.802312] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.886 [2024-04-16 12:45:49.802318] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x946410) on tqpair=0x8e65b0 00:17:50.886 [2024-04-16 12:45:49.802325] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:50.887 [2024-04-16 12:45:49.802333] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:17:50.887 [2024-04-16 12:45:49.802346] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:17:50.887 [2024-04-16 12:45:49.802360] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:17:50.887 [2024-04-16 12:45:49.802377] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.887 [2024-04-16 12:45:49.802386] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8e65b0) 00:17:50.887 [2024-04-16 12:45:49.802397] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:17:50.887 [2024-04-16 12:45:49.802417] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x946410, cid 0, qid 0 00:17:50.887 [2024-04-16 12:45:49.802637] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:50.887 [2024-04-16 12:45:49.802652] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:50.887 [2024-04-16 12:45:49.802659] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:50.887 [2024-04-16 12:45:49.802665] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8e65b0): datao=0, datal=4096, cccid=0 00:17:50.887 [2024-04-16 12:45:49.802673] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x946410) on tqpair(0x8e65b0): expected_datao=0, payload_size=4096 00:17:50.887 [2024-04-16 12:45:49.802680] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.887 [2024-04-16 12:45:49.802704] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:50.887 [2024-04-16 12:45:49.802713] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:50.887 [2024-04-16 12:45:49.843693] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.887 [2024-04-16 12:45:49.843711] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.887 [2024-04-16 12:45:49.843718] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.887 [2024-04-16 12:45:49.843725] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x946410) on tqpair=0x8e65b0 00:17:50.887 [2024-04-16 12:45:49.843736] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:17:50.887 [2024-04-16 12:45:49.843745] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:17:50.887 [2024-04-16 12:45:49.843753] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:17:50.887 [2024-04-16 12:45:49.843759] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:17:50.887 [2024-04-16 12:45:49.843766] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:17:50.887 [2024-04-16 12:45:49.843774] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:17:50.887 [2024-04-16 12:45:49.843789] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:17:50.887 [2024-04-16 12:45:49.843801] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.887 [2024-04-16 12:45:49.843809] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.887 [2024-04-16 12:45:49.843815] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8e65b0) 00:17:50.887 [2024-04-16 12:45:49.843828] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:50.887 [2024-04-16 12:45:49.843850] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x946410, cid 0, qid 0 00:17:50.887 [2024-04-16 12:45:49.843975] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.887 [2024-04-16 12:45:49.843990] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu 
type =5 00:17:50.887 [2024-04-16 12:45:49.843996] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.887 [2024-04-16 12:45:49.844003] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x946410) on tqpair=0x8e65b0 00:17:50.887 [2024-04-16 12:45:49.844013] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.887 [2024-04-16 12:45:49.844020] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.887 [2024-04-16 12:45:49.844026] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8e65b0) 00:17:50.887 [2024-04-16 12:45:49.844039] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:50.887 [2024-04-16 12:45:49.844050] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.887 [2024-04-16 12:45:49.844056] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.887 [2024-04-16 12:45:49.844062] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x8e65b0) 00:17:50.887 [2024-04-16 12:45:49.844070] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:50.887 [2024-04-16 12:45:49.844079] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.887 [2024-04-16 12:45:49.844086] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.887 [2024-04-16 12:45:49.844092] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x8e65b0) 00:17:50.887 [2024-04-16 12:45:49.844100] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:50.887 [2024-04-16 12:45:49.844109] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.887 [2024-04-16 12:45:49.844115] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.887 [2024-04-16 12:45:49.844121] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8e65b0) 00:17:50.887 [2024-04-16 12:45:49.844129] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:50.887 [2024-04-16 12:45:49.844138] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:17:50.887 [2024-04-16 12:45:49.844156] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:50.887 [2024-04-16 12:45:49.844168] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.887 [2024-04-16 12:45:49.844176] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8e65b0) 00:17:50.887 [2024-04-16 12:45:49.844186] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.887 [2024-04-16 12:45:49.844209] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x946410, cid 0, qid 0 00:17:50.887 [2024-04-16 12:45:49.844220] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x946570, cid 1, qid 0 00:17:50.887 [2024-04-16 12:45:49.844227] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9466d0, cid 2, qid 0 
00:17:50.887 [2024-04-16 12:45:49.844234] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x946830, cid 3, qid 0 00:17:50.887 [2024-04-16 12:45:49.844243] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x946990, cid 4, qid 0 00:17:50.887 [2024-04-16 12:45:49.844429] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.887 [2024-04-16 12:45:49.844444] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.887 [2024-04-16 12:45:49.844451] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.887 [2024-04-16 12:45:49.844458] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x946990) on tqpair=0x8e65b0 00:17:50.887 [2024-04-16 12:45:49.844467] nvme_ctrlr.c:2902:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:17:50.887 [2024-04-16 12:45:49.844476] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:17:50.887 [2024-04-16 12:45:49.844494] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:17:50.887 [2024-04-16 12:45:49.844507] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:17:50.887 [2024-04-16 12:45:49.844520] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.887 [2024-04-16 12:45:49.844528] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.887 [2024-04-16 12:45:49.844534] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8e65b0) 00:17:50.887 [2024-04-16 12:45:49.844560] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:50.887 [2024-04-16 12:45:49.848605] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x946990, cid 4, qid 0 00:17:50.887 [2024-04-16 12:45:49.848769] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.887 [2024-04-16 12:45:49.848784] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.887 [2024-04-16 12:45:49.848791] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.887 [2024-04-16 12:45:49.848798] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x946990) on tqpair=0x8e65b0 00:17:50.888 [2024-04-16 12:45:49.848865] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:17:50.888 [2024-04-16 12:45:49.848894] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:17:50.888 [2024-04-16 12:45:49.848907] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.888 [2024-04-16 12:45:49.848914] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8e65b0) 00:17:50.888 [2024-04-16 12:45:49.848925] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.888 [2024-04-16 12:45:49.848945] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x946990, cid 4, qid 0 00:17:50.888 [2024-04-16 
12:45:49.849182] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:50.888 [2024-04-16 12:45:49.849196] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:50.888 [2024-04-16 12:45:49.849203] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:50.888 [2024-04-16 12:45:49.849209] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8e65b0): datao=0, datal=4096, cccid=4 00:17:50.888 [2024-04-16 12:45:49.849216] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x946990) on tqpair(0x8e65b0): expected_datao=0, payload_size=4096 00:17:50.888 [2024-04-16 12:45:49.849223] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.888 [2024-04-16 12:45:49.849267] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:50.888 [2024-04-16 12:45:49.849275] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:50.888 [2024-04-16 12:45:49.891575] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.888 [2024-04-16 12:45:49.891593] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.888 [2024-04-16 12:45:49.891601] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.888 [2024-04-16 12:45:49.891608] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x946990) on tqpair=0x8e65b0 00:17:50.888 [2024-04-16 12:45:49.891623] nvme_ctrlr.c:4557:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:17:50.888 [2024-04-16 12:45:49.891644] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:17:50.888 [2024-04-16 12:45:49.891663] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:17:50.888 [2024-04-16 12:45:49.891680] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.888 [2024-04-16 12:45:49.891688] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8e65b0) 00:17:50.888 [2024-04-16 12:45:49.891699] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.888 [2024-04-16 12:45:49.891722] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x946990, cid 4, qid 0 00:17:50.888 [2024-04-16 12:45:49.891917] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:50.888 [2024-04-16 12:45:49.891932] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:50.888 [2024-04-16 12:45:49.891939] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:50.888 [2024-04-16 12:45:49.891945] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8e65b0): datao=0, datal=4096, cccid=4 00:17:50.888 [2024-04-16 12:45:49.891953] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x946990) on tqpair(0x8e65b0): expected_datao=0, payload_size=4096 00:17:50.888 [2024-04-16 12:45:49.891960] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.888 [2024-04-16 12:45:49.892000] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:50.888 [2024-04-16 12:45:49.892009] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:50.888 [2024-04-16 12:45:49.932719] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.888 
[2024-04-16 12:45:49.932737] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.888 [2024-04-16 12:45:49.932744] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.888 [2024-04-16 12:45:49.932751] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x946990) on tqpair=0x8e65b0 00:17:50.888 [2024-04-16 12:45:49.932772] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:17:50.888 [2024-04-16 12:45:49.932791] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:17:50.888 [2024-04-16 12:45:49.932805] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.888 [2024-04-16 12:45:49.932812] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8e65b0) 00:17:50.888 [2024-04-16 12:45:49.932824] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.888 [2024-04-16 12:45:49.932861] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x946990, cid 4, qid 0 00:17:50.888 [2024-04-16 12:45:49.932983] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:50.888 [2024-04-16 12:45:49.932998] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:50.888 [2024-04-16 12:45:49.933004] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:50.888 [2024-04-16 12:45:49.933010] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8e65b0): datao=0, datal=4096, cccid=4 00:17:50.888 [2024-04-16 12:45:49.933018] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x946990) on tqpair(0x8e65b0): expected_datao=0, payload_size=4096 00:17:50.888 [2024-04-16 12:45:49.933025] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.888 [2024-04-16 12:45:49.933088] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:50.888 [2024-04-16 12:45:49.933097] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:51.148 [2024-04-16 12:45:49.973795] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.148 [2024-04-16 12:45:49.973823] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.148 [2024-04-16 12:45:49.973831] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.148 [2024-04-16 12:45:49.973837] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x946990) on tqpair=0x8e65b0 00:17:51.148 [2024-04-16 12:45:49.973866] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:17:51.148 [2024-04-16 12:45:49.973881] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:17:51.148 [2024-04-16 12:45:49.973896] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:17:51.148 [2024-04-16 12:45:49.973906] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:17:51.148 [2024-04-16 12:45:49.973920] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: 
*DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:17:51.148 [2024-04-16 12:45:49.973929] nvme_ctrlr.c:2990:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:17:51.148 [2024-04-16 12:45:49.973937] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:17:51.148 [2024-04-16 12:45:49.973945] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:17:51.148 [2024-04-16 12:45:49.973970] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.148 [2024-04-16 12:45:49.973978] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8e65b0) 00:17:51.148 [2024-04-16 12:45:49.973989] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.148 [2024-04-16 12:45:49.973999] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.148 [2024-04-16 12:45:49.974006] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.148 [2024-04-16 12:45:49.974012] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x8e65b0) 00:17:51.148 [2024-04-16 12:45:49.974021] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:17:51.148 [2024-04-16 12:45:49.974046] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x946990, cid 4, qid 0 00:17:51.148 [2024-04-16 12:45:49.974058] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x946af0, cid 5, qid 0 00:17:51.148 [2024-04-16 12:45:49.974217] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.148 [2024-04-16 12:45:49.974232] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.148 [2024-04-16 12:45:49.974238] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.148 [2024-04-16 12:45:49.974244] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x946990) on tqpair=0x8e65b0 00:17:51.148 [2024-04-16 12:45:49.974254] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.148 [2024-04-16 12:45:49.974262] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.148 [2024-04-16 12:45:49.974268] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.148 [2024-04-16 12:45:49.974275] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x946af0) on tqpair=0x8e65b0 00:17:51.148 [2024-04-16 12:45:49.974290] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.148 [2024-04-16 12:45:49.974298] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x8e65b0) 00:17:51.148 [2024-04-16 12:45:49.974308] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.148 [2024-04-16 12:45:49.974339] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x946af0, cid 5, qid 0 00:17:51.148 [2024-04-16 12:45:49.974537] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.148 [2024-04-16 12:45:49.974575] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.148 [2024-04-16 12:45:49.974583] 
nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.148 [2024-04-16 12:45:49.974590] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x946af0) on tqpair=0x8e65b0 00:17:51.148 [2024-04-16 12:45:49.974607] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.148 [2024-04-16 12:45:49.974616] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x8e65b0) 00:17:51.148 [2024-04-16 12:45:49.974627] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.148 [2024-04-16 12:45:49.974649] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x946af0, cid 5, qid 0 00:17:51.148 [2024-04-16 12:45:49.974827] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.148 [2024-04-16 12:45:49.974861] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.148 [2024-04-16 12:45:49.974869] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.148 [2024-04-16 12:45:49.974875] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x946af0) on tqpair=0x8e65b0 00:17:51.148 [2024-04-16 12:45:49.974891] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.148 [2024-04-16 12:45:49.974899] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x8e65b0) 00:17:51.148 [2024-04-16 12:45:49.974909] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.148 [2024-04-16 12:45:49.974935] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x946af0, cid 5, qid 0 00:17:51.148 [2024-04-16 12:45:49.975162] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.148 [2024-04-16 12:45:49.975173] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.148 [2024-04-16 12:45:49.975180] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.148 [2024-04-16 12:45:49.975186] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x946af0) on tqpair=0x8e65b0 00:17:51.148 [2024-04-16 12:45:49.975204] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.148 [2024-04-16 12:45:49.975213] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x8e65b0) 00:17:51.148 [2024-04-16 12:45:49.975224] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.148 [2024-04-16 12:45:49.975244] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.149 [2024-04-16 12:45:49.975251] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8e65b0) 00:17:51.149 [2024-04-16 12:45:49.975260] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.149 [2024-04-16 12:45:49.975271] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.149 [2024-04-16 12:45:49.975278] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x8e65b0) 00:17:51.149 [2024-04-16 12:45:49.975287] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) 
qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.149 [2024-04-16 12:45:49.975298] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.149 [2024-04-16 12:45:49.975305] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x8e65b0) 00:17:51.149 [2024-04-16 12:45:49.975314] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.149 [2024-04-16 12:45:49.975334] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x946af0, cid 5, qid 0 00:17:51.149 [2024-04-16 12:45:49.975344] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x946990, cid 4, qid 0 00:17:51.149 [2024-04-16 12:45:49.975352] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x946c50, cid 6, qid 0 00:17:51.149 [2024-04-16 12:45:49.975359] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x946db0, cid 7, qid 0 00:17:51.149 [2024-04-16 12:45:49.979577] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:51.149 [2024-04-16 12:45:49.979594] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:51.149 [2024-04-16 12:45:49.979602] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:51.149 [2024-04-16 12:45:49.979608] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8e65b0): datao=0, datal=8192, cccid=5 00:17:51.149 [2024-04-16 12:45:49.979616] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x946af0) on tqpair(0x8e65b0): expected_datao=0, payload_size=8192 00:17:51.149 [2024-04-16 12:45:49.979624] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.149 [2024-04-16 12:45:49.979638] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:51.149 [2024-04-16 12:45:49.979646] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:51.149 [2024-04-16 12:45:49.979654] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:51.149 [2024-04-16 12:45:49.979663] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:51.149 [2024-04-16 12:45:49.979669] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:51.149 [2024-04-16 12:45:49.979676] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8e65b0): datao=0, datal=512, cccid=4 00:17:51.149 [2024-04-16 12:45:49.979683] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x946990) on tqpair(0x8e65b0): expected_datao=0, payload_size=512 00:17:51.149 [2024-04-16 12:45:49.979690] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.149 [2024-04-16 12:45:49.979699] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:51.149 [2024-04-16 12:45:49.979706] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:51.149 [2024-04-16 12:45:49.979714] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:51.149 [2024-04-16 12:45:49.979723] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:51.149 [2024-04-16 12:45:49.979729] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:51.149 [2024-04-16 12:45:49.979735] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8e65b0): datao=0, datal=512, cccid=6 00:17:51.149 [2024-04-16 12:45:49.979743] 
nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x946c50) on tqpair(0x8e65b0): expected_datao=0, payload_size=512 00:17:51.149 [2024-04-16 12:45:49.979750] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.149 [2024-04-16 12:45:49.979759] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:51.149 [2024-04-16 12:45:49.979765] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:51.149 [2024-04-16 12:45:49.979773] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:51.149 [2024-04-16 12:45:49.979782] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:51.149 [2024-04-16 12:45:49.979788] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:51.149 [2024-04-16 12:45:49.979794] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8e65b0): datao=0, datal=4096, cccid=7 00:17:51.149 [2024-04-16 12:45:49.979802] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x946db0) on tqpair(0x8e65b0): expected_datao=0, payload_size=4096 00:17:51.149 [2024-04-16 12:45:49.979809] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.149 [2024-04-16 12:45:49.979818] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:51.149 [2024-04-16 12:45:49.979825] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:51.149 [2024-04-16 12:45:49.979833] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.149 [2024-04-16 12:45:49.979842] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.149 [2024-04-16 12:45:49.979862] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.149 [2024-04-16 12:45:49.979869] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x946af0) on tqpair=0x8e65b0 00:17:51.149 [2024-04-16 12:45:49.979888] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.149 [2024-04-16 12:45:49.979899] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.149 [2024-04-16 12:45:49.979906] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.149 [2024-04-16 12:45:49.979912] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x946990) on tqpair=0x8e65b0 00:17:51.149 [2024-04-16 12:45:49.979925] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.149 [2024-04-16 12:45:49.979936] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.149 [2024-04-16 12:45:49.979942] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.149 [2024-04-16 12:45:49.979948] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x946c50) on tqpair=0x8e65b0 00:17:51.149 [2024-04-16 12:45:49.979958] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.149 [2024-04-16 12:45:49.979969] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.149 [2024-04-16 12:45:49.979976] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.149 [2024-04-16 12:45:49.979982] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x946db0) on tqpair=0x8e65b0 00:17:51.149 ===================================================== 00:17:51.149 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:51.149 ===================================================== 00:17:51.149 Controller Capabilities/Features 00:17:51.149 
================================ 00:17:51.149 Vendor ID: 8086 00:17:51.149 Subsystem Vendor ID: 8086 00:17:51.149 Serial Number: SPDK00000000000001 00:17:51.149 Model Number: SPDK bdev Controller 00:17:51.149 Firmware Version: 24.05 00:17:51.149 Recommended Arb Burst: 6 00:17:51.149 IEEE OUI Identifier: e4 d2 5c 00:17:51.149 Multi-path I/O 00:17:51.149 May have multiple subsystem ports: Yes 00:17:51.149 May have multiple controllers: Yes 00:17:51.149 Associated with SR-IOV VF: No 00:17:51.149 Max Data Transfer Size: 131072 00:17:51.149 Max Number of Namespaces: 32 00:17:51.149 Max Number of I/O Queues: 127 00:17:51.149 NVMe Specification Version (VS): 1.3 00:17:51.149 NVMe Specification Version (Identify): 1.3 00:17:51.149 Maximum Queue Entries: 128 00:17:51.149 Contiguous Queues Required: Yes 00:17:51.149 Arbitration Mechanisms Supported 00:17:51.149 Weighted Round Robin: Not Supported 00:17:51.149 Vendor Specific: Not Supported 00:17:51.149 Reset Timeout: 15000 ms 00:17:51.149 Doorbell Stride: 4 bytes 00:17:51.149 NVM Subsystem Reset: Not Supported 00:17:51.149 Command Sets Supported 00:17:51.149 NVM Command Set: Supported 00:17:51.149 Boot Partition: Not Supported 00:17:51.149 Memory Page Size Minimum: 4096 bytes 00:17:51.149 Memory Page Size Maximum: 4096 bytes 00:17:51.149 Persistent Memory Region: Not Supported 00:17:51.149 Optional Asynchronous Events Supported 00:17:51.149 Namespace Attribute Notices: Supported 00:17:51.149 Firmware Activation Notices: Not Supported 00:17:51.149 ANA Change Notices: Not Supported 00:17:51.149 PLE Aggregate Log Change Notices: Not Supported 00:17:51.149 LBA Status Info Alert Notices: Not Supported 00:17:51.149 EGE Aggregate Log Change Notices: Not Supported 00:17:51.149 Normal NVM Subsystem Shutdown event: Not Supported 00:17:51.149 Zone Descriptor Change Notices: Not Supported 00:17:51.149 Discovery Log Change Notices: Not Supported 00:17:51.149 Controller Attributes 00:17:51.149 128-bit Host Identifier: Supported 00:17:51.149 Non-Operational Permissive Mode: Not Supported 00:17:51.149 NVM Sets: Not Supported 00:17:51.149 Read Recovery Levels: Not Supported 00:17:51.149 Endurance Groups: Not Supported 00:17:51.149 Predictable Latency Mode: Not Supported 00:17:51.149 Traffic Based Keep ALive: Not Supported 00:17:51.149 Namespace Granularity: Not Supported 00:17:51.149 SQ Associations: Not Supported 00:17:51.149 UUID List: Not Supported 00:17:51.149 Multi-Domain Subsystem: Not Supported 00:17:51.149 Fixed Capacity Management: Not Supported 00:17:51.149 Variable Capacity Management: Not Supported 00:17:51.149 Delete Endurance Group: Not Supported 00:17:51.149 Delete NVM Set: Not Supported 00:17:51.150 Extended LBA Formats Supported: Not Supported 00:17:51.150 Flexible Data Placement Supported: Not Supported 00:17:51.150 00:17:51.150 Controller Memory Buffer Support 00:17:51.150 ================================ 00:17:51.150 Supported: No 00:17:51.150 00:17:51.150 Persistent Memory Region Support 00:17:51.150 ================================ 00:17:51.150 Supported: No 00:17:51.150 00:17:51.150 Admin Command Set Attributes 00:17:51.150 ============================ 00:17:51.150 Security Send/Receive: Not Supported 00:17:51.150 Format NVM: Not Supported 00:17:51.150 Firmware Activate/Download: Not Supported 00:17:51.150 Namespace Management: Not Supported 00:17:51.150 Device Self-Test: Not Supported 00:17:51.150 Directives: Not Supported 00:17:51.150 NVMe-MI: Not Supported 00:17:51.150 Virtualization Management: Not Supported 00:17:51.150 Doorbell Buffer 
Config: Not Supported 00:17:51.150 Get LBA Status Capability: Not Supported 00:17:51.150 Command & Feature Lockdown Capability: Not Supported 00:17:51.150 Abort Command Limit: 4 00:17:51.150 Async Event Request Limit: 4 00:17:51.150 Number of Firmware Slots: N/A 00:17:51.150 Firmware Slot 1 Read-Only: N/A 00:17:51.150 Firmware Activation Without Reset: N/A 00:17:51.150 Multiple Update Detection Support: N/A 00:17:51.150 Firmware Update Granularity: No Information Provided 00:17:51.150 Per-Namespace SMART Log: No 00:17:51.150 Asymmetric Namespace Access Log Page: Not Supported 00:17:51.150 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:17:51.150 Command Effects Log Page: Supported 00:17:51.150 Get Log Page Extended Data: Supported 00:17:51.150 Telemetry Log Pages: Not Supported 00:17:51.150 Persistent Event Log Pages: Not Supported 00:17:51.150 Supported Log Pages Log Page: May Support 00:17:51.150 Commands Supported & Effects Log Page: Not Supported 00:17:51.150 Feature Identifiers & Effects Log Page:May Support 00:17:51.150 NVMe-MI Commands & Effects Log Page: May Support 00:17:51.150 Data Area 4 for Telemetry Log: Not Supported 00:17:51.150 Error Log Page Entries Supported: 128 00:17:51.150 Keep Alive: Supported 00:17:51.150 Keep Alive Granularity: 10000 ms 00:17:51.150 00:17:51.150 NVM Command Set Attributes 00:17:51.150 ========================== 00:17:51.150 Submission Queue Entry Size 00:17:51.150 Max: 64 00:17:51.150 Min: 64 00:17:51.150 Completion Queue Entry Size 00:17:51.150 Max: 16 00:17:51.150 Min: 16 00:17:51.150 Number of Namespaces: 32 00:17:51.150 Compare Command: Supported 00:17:51.150 Write Uncorrectable Command: Not Supported 00:17:51.150 Dataset Management Command: Supported 00:17:51.150 Write Zeroes Command: Supported 00:17:51.150 Set Features Save Field: Not Supported 00:17:51.150 Reservations: Supported 00:17:51.150 Timestamp: Not Supported 00:17:51.150 Copy: Supported 00:17:51.150 Volatile Write Cache: Present 00:17:51.150 Atomic Write Unit (Normal): 1 00:17:51.150 Atomic Write Unit (PFail): 1 00:17:51.150 Atomic Compare & Write Unit: 1 00:17:51.150 Fused Compare & Write: Supported 00:17:51.150 Scatter-Gather List 00:17:51.150 SGL Command Set: Supported 00:17:51.150 SGL Keyed: Supported 00:17:51.150 SGL Bit Bucket Descriptor: Not Supported 00:17:51.150 SGL Metadata Pointer: Not Supported 00:17:51.150 Oversized SGL: Not Supported 00:17:51.150 SGL Metadata Address: Not Supported 00:17:51.150 SGL Offset: Supported 00:17:51.150 Transport SGL Data Block: Not Supported 00:17:51.150 Replay Protected Memory Block: Not Supported 00:17:51.150 00:17:51.150 Firmware Slot Information 00:17:51.150 ========================= 00:17:51.150 Active slot: 1 00:17:51.150 Slot 1 Firmware Revision: 24.05 00:17:51.150 00:17:51.150 00:17:51.150 Commands Supported and Effects 00:17:51.150 ============================== 00:17:51.150 Admin Commands 00:17:51.150 -------------- 00:17:51.150 Get Log Page (02h): Supported 00:17:51.150 Identify (06h): Supported 00:17:51.150 Abort (08h): Supported 00:17:51.150 Set Features (09h): Supported 00:17:51.150 Get Features (0Ah): Supported 00:17:51.150 Asynchronous Event Request (0Ch): Supported 00:17:51.150 Keep Alive (18h): Supported 00:17:51.150 I/O Commands 00:17:51.150 ------------ 00:17:51.150 Flush (00h): Supported LBA-Change 00:17:51.150 Write (01h): Supported LBA-Change 00:17:51.150 Read (02h): Supported 00:17:51.150 Compare (05h): Supported 00:17:51.150 Write Zeroes (08h): Supported LBA-Change 00:17:51.150 Dataset Management (09h): Supported 
LBA-Change 00:17:51.150 Copy (19h): Supported LBA-Change 00:17:51.150 Unknown (79h): Supported LBA-Change 00:17:51.150 Unknown (7Ah): Supported 00:17:51.150 00:17:51.150 Error Log 00:17:51.150 ========= 00:17:51.150 00:17:51.150 Arbitration 00:17:51.150 =========== 00:17:51.150 Arbitration Burst: 1 00:17:51.150 00:17:51.150 Power Management 00:17:51.150 ================ 00:17:51.150 Number of Power States: 1 00:17:51.150 Current Power State: Power State #0 00:17:51.150 Power State #0: 00:17:51.150 Max Power: 0.00 W 00:17:51.150 Non-Operational State: Operational 00:17:51.150 Entry Latency: Not Reported 00:17:51.150 Exit Latency: Not Reported 00:17:51.150 Relative Read Throughput: 0 00:17:51.150 Relative Read Latency: 0 00:17:51.150 Relative Write Throughput: 0 00:17:51.150 Relative Write Latency: 0 00:17:51.150 Idle Power: Not Reported 00:17:51.150 Active Power: Not Reported 00:17:51.150 Non-Operational Permissive Mode: Not Supported 00:17:51.150 00:17:51.150 Health Information 00:17:51.150 ================== 00:17:51.150 Critical Warnings: 00:17:51.150 Available Spare Space: OK 00:17:51.150 Temperature: OK 00:17:51.150 Device Reliability: OK 00:17:51.150 Read Only: No 00:17:51.150 Volatile Memory Backup: OK 00:17:51.150 Current Temperature: 0 Kelvin (-273 Celsius) 00:17:51.150 Temperature Threshold: [2024-04-16 12:45:49.980110] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.150 [2024-04-16 12:45:49.980122] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x8e65b0) 00:17:51.150 [2024-04-16 12:45:49.980133] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.150 [2024-04-16 12:45:49.980155] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x946db0, cid 7, qid 0 00:17:51.150 [2024-04-16 12:45:49.980333] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.150 [2024-04-16 12:45:49.980348] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.150 [2024-04-16 12:45:49.980354] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.150 [2024-04-16 12:45:49.980361] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x946db0) on tqpair=0x8e65b0 00:17:51.150 [2024-04-16 12:45:49.980399] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:17:51.150 [2024-04-16 12:45:49.980419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:51.150 [2024-04-16 12:45:49.980430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:51.150 [2024-04-16 12:45:49.980440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:51.150 [2024-04-16 12:45:49.980448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:51.150 [2024-04-16 12:45:49.980460] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.150 [2024-04-16 12:45:49.980467] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.150 [2024-04-16 12:45:49.980473] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8e65b0) 00:17:51.150 [2024-04-16 
12:45:49.980484] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.150 [2024-04-16 12:45:49.980505] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x946830, cid 3, qid 0 00:17:51.150 [2024-04-16 12:45:49.980717] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.150 [2024-04-16 12:45:49.980733] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.150 [2024-04-16 12:45:49.980739] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.150 [2024-04-16 12:45:49.980746] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x946830) on tqpair=0x8e65b0 00:17:51.150 [2024-04-16 12:45:49.980757] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.150 [2024-04-16 12:45:49.980764] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.150 [2024-04-16 12:45:49.980770] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8e65b0) 00:17:51.150 [2024-04-16 12:45:49.980781] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.151 [2024-04-16 12:45:49.980806] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x946830, cid 3, qid 0 00:17:51.151 [2024-04-16 12:45:49.980974] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.151 [2024-04-16 12:45:49.980989] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.151 [2024-04-16 12:45:49.980995] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.151 [2024-04-16 12:45:49.981002] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x946830) on tqpair=0x8e65b0 00:17:51.151 [2024-04-16 12:45:49.981009] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:17:51.151 [2024-04-16 12:45:49.981020] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:17:51.151 [2024-04-16 12:45:49.981036] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.151 [2024-04-16 12:45:49.981045] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.151 [2024-04-16 12:45:49.981051] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8e65b0) 00:17:51.151 [2024-04-16 12:45:49.981061] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.151 [2024-04-16 12:45:49.981080] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x946830, cid 3, qid 0 00:17:51.151 [2024-04-16 12:45:49.981242] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.151 [2024-04-16 12:45:49.981256] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.151 [2024-04-16 12:45:49.981263] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.151 [2024-04-16 12:45:49.981269] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x946830) on tqpair=0x8e65b0 00:17:51.151 [2024-04-16 12:45:49.981285] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.151 [2024-04-16 12:45:49.981293] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.151 [2024-04-16 12:45:49.981300] nvme_tcp.c: 
958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8e65b0) 00:17:51.151 [2024-04-16 12:45:49.981309] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.151 [2024-04-16 12:45:49.981329] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x946830, cid 3, qid 0 00:17:51.151 [2024-04-16 12:45:49.981450] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.151 [2024-04-16 12:45:49.981461] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.151 [2024-04-16 12:45:49.981467] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.151 [2024-04-16 12:45:49.981473] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x946830) on tqpair=0x8e65b0 00:17:51.151 [2024-04-16 12:45:49.981488] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.151 [2024-04-16 12:45:49.981497] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.151 [2024-04-16 12:45:49.981503] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8e65b0) 00:17:51.151 [2024-04-16 12:45:49.981513] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.151 [2024-04-16 12:45:49.981532] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x946830, cid 3, qid 0 00:17:51.151 [2024-04-16 12:45:49.981721] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.151 [2024-04-16 12:45:49.981737] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.151 [2024-04-16 12:45:49.981744] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.151 [2024-04-16 12:45:49.981750] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x946830) on tqpair=0x8e65b0 00:17:51.151 [2024-04-16 12:45:49.981767] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.151 [2024-04-16 12:45:49.981775] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.151 [2024-04-16 12:45:49.981782] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8e65b0) 00:17:51.151 [2024-04-16 12:45:49.981792] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.151 [2024-04-16 12:45:49.981819] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x946830, cid 3, qid 0 00:17:51.151 [2024-04-16 12:45:49.981962] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.151 [2024-04-16 12:45:49.981977] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.151 [2024-04-16 12:45:49.981983] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.151 [2024-04-16 12:45:49.981990] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x946830) on tqpair=0x8e65b0 00:17:51.151 [2024-04-16 12:45:49.982009] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.151 [2024-04-16 12:45:49.982019] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.151 [2024-04-16 12:45:49.982025] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8e65b0) 00:17:51.151 [2024-04-16 12:45:49.982035] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET 
qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.151 [2024-04-16 12:45:49.982055] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x946830, cid 3, qid 0 00:17:51.151 [2024-04-16 12:45:49.982178] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.151 [2024-04-16 12:45:49.982189] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.151 [2024-04-16 12:45:49.982196] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.151 [2024-04-16 12:45:49.982202] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x946830) on tqpair=0x8e65b0 00:17:51.151 [2024-04-16 12:45:49.982217] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.151 [2024-04-16 12:45:49.982226] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.151 [2024-04-16 12:45:49.982232] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8e65b0) 00:17:51.151 [2024-04-16 12:45:49.982241] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.151 [2024-04-16 12:45:49.982260] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x946830, cid 3, qid 0 00:17:51.151 [2024-04-16 12:45:49.982420] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.151 [2024-04-16 12:45:49.982432] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.151 [2024-04-16 12:45:49.982438] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.151 [2024-04-16 12:45:49.982444] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x946830) on tqpair=0x8e65b0 00:17:51.151 [2024-04-16 12:45:49.982460] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.151 [2024-04-16 12:45:49.982468] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.151 [2024-04-16 12:45:49.982474] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8e65b0) 00:17:51.151 [2024-04-16 12:45:49.982484] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.151 [2024-04-16 12:45:49.982502] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x946830, cid 3, qid 0 00:17:51.151 [2024-04-16 12:45:49.982677] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.151 [2024-04-16 12:45:49.982691] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.151 [2024-04-16 12:45:49.982698] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.151 [2024-04-16 12:45:49.982704] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x946830) on tqpair=0x8e65b0 00:17:51.151 [2024-04-16 12:45:49.982720] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.151 [2024-04-16 12:45:49.982729] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.151 [2024-04-16 12:45:49.982735] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8e65b0) 00:17:51.151 [2024-04-16 12:45:49.982745] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.151 [2024-04-16 12:45:49.982766] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x946830, cid 3, qid 0 00:17:51.151 
[2024-04-16 12:45:49.982917] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.151 [2024-04-16 12:45:49.982932] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.151 [2024-04-16 12:45:49.982938] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.151 [2024-04-16 12:45:49.982944] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x946830) on tqpair=0x8e65b0 00:17:51.151 [2024-04-16 12:45:49.982964] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.151 [2024-04-16 12:45:49.982973] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.151 [2024-04-16 12:45:49.982980] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8e65b0) 00:17:51.151 [2024-04-16 12:45:49.982990] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.151 [2024-04-16 12:45:49.983010] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x946830, cid 3, qid 0 00:17:51.151 [2024-04-16 12:45:49.983124] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.151 [2024-04-16 12:45:49.983135] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.151 [2024-04-16 12:45:49.983142] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.151 [2024-04-16 12:45:49.983148] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x946830) on tqpair=0x8e65b0 00:17:51.151 [2024-04-16 12:45:49.983163] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.151 [2024-04-16 12:45:49.983171] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.151 [2024-04-16 12:45:49.983177] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8e65b0) 00:17:51.151 [2024-04-16 12:45:49.983187] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.151 [2024-04-16 12:45:49.983206] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x946830, cid 3, qid 0 00:17:51.151 [2024-04-16 12:45:49.983365] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.151 [2024-04-16 12:45:49.983379] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.151 [2024-04-16 12:45:49.983386] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.151 [2024-04-16 12:45:49.983392] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x946830) on tqpair=0x8e65b0 00:17:51.151 [2024-04-16 12:45:49.983407] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.151 [2024-04-16 12:45:49.983416] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.151 [2024-04-16 12:45:49.983422] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8e65b0) 00:17:51.151 [2024-04-16 12:45:49.983432] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.152 [2024-04-16 12:45:49.983451] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x946830, cid 3, qid 0 00:17:51.152 [2024-04-16 12:45:49.987588] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.152 [2024-04-16 12:45:49.987606] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:17:51.152 [2024-04-16 12:45:49.987613] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.152 [2024-04-16 12:45:49.987620] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x946830) on tqpair=0x8e65b0 00:17:51.152 [2024-04-16 12:45:49.987638] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.152 [2024-04-16 12:45:49.987647] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.152 [2024-04-16 12:45:49.987653] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8e65b0) 00:17:51.152 [2024-04-16 12:45:49.987664] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.152 [2024-04-16 12:45:49.987685] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x946830, cid 3, qid 0 00:17:51.152 [2024-04-16 12:45:49.987848] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.152 [2024-04-16 12:45:49.987863] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.152 [2024-04-16 12:45:49.987885] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.152 [2024-04-16 12:45:49.987892] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x946830) on tqpair=0x8e65b0 00:17:51.152 [2024-04-16 12:45:49.987905] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 6 milliseconds 00:17:51.152 0 Kelvin (-273 Celsius) 00:17:51.152 Available Spare: 0% 00:17:51.152 Available Spare Threshold: 0% 00:17:51.152 Life Percentage Used: 0% 00:17:51.152 Data Units Read: 0 00:17:51.152 Data Units Written: 0 00:17:51.152 Host Read Commands: 0 00:17:51.152 Host Write Commands: 0 00:17:51.152 Controller Busy Time: 0 minutes 00:17:51.152 Power Cycles: 0 00:17:51.152 Power On Hours: 0 hours 00:17:51.152 Unsafe Shutdowns: 0 00:17:51.152 Unrecoverable Media Errors: 0 00:17:51.152 Lifetime Error Log Entries: 0 00:17:51.152 Warning Temperature Time: 0 minutes 00:17:51.152 Critical Temperature Time: 0 minutes 00:17:51.152 00:17:51.152 Number of Queues 00:17:51.152 ================ 00:17:51.152 Number of I/O Submission Queues: 127 00:17:51.152 Number of I/O Completion Queues: 127 00:17:51.152 00:17:51.152 Active Namespaces 00:17:51.152 ================= 00:17:51.152 Namespace ID:1 00:17:51.152 Error Recovery Timeout: Unlimited 00:17:51.152 Command Set Identifier: NVM (00h) 00:17:51.152 Deallocate: Supported 00:17:51.152 Deallocated/Unwritten Error: Not Supported 00:17:51.152 Deallocated Read Value: Unknown 00:17:51.152 Deallocate in Write Zeroes: Not Supported 00:17:51.152 Deallocated Guard Field: 0xFFFF 00:17:51.152 Flush: Supported 00:17:51.152 Reservation: Supported 00:17:51.152 Namespace Sharing Capabilities: Multiple Controllers 00:17:51.152 Size (in LBAs): 131072 (0GiB) 00:17:51.152 Capacity (in LBAs): 131072 (0GiB) 00:17:51.152 Utilization (in LBAs): 131072 (0GiB) 00:17:51.152 NGUID: ABCDEF0123456789ABCDEF0123456789 00:17:51.152 EUI64: ABCDEF0123456789 00:17:51.152 UUID: 6bf55d89-2f4e-4bcc-8ae5-dce7496ce466 00:17:51.152 Thin Provisioning: Not Supported 00:17:51.152 Per-NS Atomic Units: Yes 00:17:51.152 Atomic Boundary Size (Normal): 0 00:17:51.152 Atomic Boundary Size (PFail): 0 00:17:51.152 Atomic Boundary Offset: 0 00:17:51.152 Maximum Single Source Range Length: 65535 00:17:51.152 Maximum Copy Length: 65535 00:17:51.152 Maximum Source Range Count: 1 00:17:51.152 NGUID/EUI64 Never 
Reused: No 00:17:51.152 Namespace Write Protected: No 00:17:51.152 Number of LBA Formats: 1 00:17:51.152 Current LBA Format: LBA Format #00 00:17:51.152 LBA Format #00: Data Size: 512 Metadata Size: 0 00:17:51.152 00:17:51.152 12:45:49 -- host/identify.sh@51 -- # sync 00:17:51.152 12:45:50 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:51.152 12:45:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:51.152 12:45:50 -- common/autotest_common.sh@10 -- # set +x 00:17:51.152 12:45:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:51.152 12:45:50 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:17:51.152 12:45:50 -- host/identify.sh@56 -- # nvmftestfini 00:17:51.152 12:45:50 -- nvmf/common.sh@477 -- # nvmfcleanup 00:17:51.152 12:45:50 -- nvmf/common.sh@117 -- # sync 00:17:51.152 12:45:50 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:51.152 12:45:50 -- nvmf/common.sh@120 -- # set +e 00:17:51.152 12:45:50 -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:51.152 12:45:50 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:51.152 rmmod nvme_tcp 00:17:51.152 rmmod nvme_fabrics 00:17:51.152 rmmod nvme_keyring 00:17:51.152 12:45:50 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:51.152 12:45:50 -- nvmf/common.sh@124 -- # set -e 00:17:51.152 12:45:50 -- nvmf/common.sh@125 -- # return 0 00:17:51.152 12:45:50 -- nvmf/common.sh@478 -- # '[' -n 1219515 ']' 00:17:51.152 12:45:50 -- nvmf/common.sh@479 -- # killprocess 1219515 00:17:51.152 12:45:50 -- common/autotest_common.sh@936 -- # '[' -z 1219515 ']' 00:17:51.152 12:45:50 -- common/autotest_common.sh@940 -- # kill -0 1219515 00:17:51.152 12:45:50 -- common/autotest_common.sh@941 -- # uname 00:17:51.152 12:45:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:51.152 12:45:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1219515 00:17:51.152 12:45:50 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:51.152 12:45:50 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:51.152 12:45:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1219515' 00:17:51.152 killing process with pid 1219515 00:17:51.152 12:45:50 -- common/autotest_common.sh@955 -- # kill 1219515 00:17:51.152 [2024-04-16 12:45:50.083065] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:17:51.152 12:45:50 -- common/autotest_common.sh@960 -- # wait 1219515 00:17:51.412 12:45:50 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:17:51.412 12:45:50 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:17:51.412 12:45:50 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:17:51.412 12:45:50 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:51.412 12:45:50 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:51.412 12:45:50 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:51.412 12:45:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:51.412 12:45:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:53.944 12:45:52 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:53.944 00:17:53.944 real 0m6.731s 00:17:53.944 user 0m7.740s 00:17:53.944 sys 0m2.310s 00:17:53.944 12:45:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:53.944 12:45:52 -- common/autotest_common.sh@10 -- # set +x 00:17:53.944 
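(For reference: the controller dump above is produced by SPDK's identify example attaching to the test subsystem over TCP. A minimal manual reproduction, using the address, port, and NQN printed earlier in this log — the binary path and transport-ID string below are assumptions about a recent SPDK build, not values taken from identify.sh itself:

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # attach over TCP and print the same identify data; path may differ per SPDK version
    ./build/examples/identify \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

Note that the teardown traced above has already deleted nqn.2016-06.io.spdk:cnode1 via rpc.py nvmf_delete_subsystem, so this only works against a freshly configured target.)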
************************************ 00:17:53.944 END TEST nvmf_identify 00:17:53.944 ************************************ 00:17:53.944 12:45:52 -- nvmf/nvmf.sh@96 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:17:53.944 12:45:52 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:53.944 12:45:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:53.944 12:45:52 -- common/autotest_common.sh@10 -- # set +x 00:17:53.944 ************************************ 00:17:53.944 START TEST nvmf_perf 00:17:53.944 ************************************ 00:17:53.944 12:45:52 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:17:53.944 * Looking for test storage... 00:17:53.944 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:17:53.944 12:45:52 -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:53.944 12:45:52 -- nvmf/common.sh@7 -- # uname -s 00:17:53.944 12:45:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:53.944 12:45:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:53.944 12:45:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:53.944 12:45:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:53.944 12:45:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:53.944 12:45:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:53.944 12:45:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:53.944 12:45:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:53.944 12:45:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:53.944 12:45:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:53.944 12:45:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:17:53.944 12:45:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:17:53.944 12:45:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:53.944 12:45:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:53.944 12:45:52 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:53.944 12:45:52 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:53.944 12:45:52 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:53.944 12:45:52 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:53.944 12:45:52 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:53.944 12:45:52 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:53.944 12:45:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:53.945 12:45:52 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:53.945 12:45:52 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:53.945 12:45:52 -- paths/export.sh@5 -- # export PATH 00:17:53.945 12:45:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:53.945 12:45:52 -- nvmf/common.sh@47 -- # : 0 00:17:53.945 12:45:52 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:53.945 12:45:52 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:53.945 12:45:52 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:53.945 12:45:52 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:53.945 12:45:52 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:53.945 12:45:52 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:53.945 12:45:52 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:53.945 12:45:52 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:53.945 12:45:52 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:17:53.945 12:45:52 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:17:53.945 12:45:52 -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:53.945 12:45:52 -- host/perf.sh@17 -- # nvmftestinit 00:17:53.945 12:45:52 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:17:53.945 12:45:52 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:53.945 12:45:52 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:53.945 12:45:52 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:53.945 12:45:52 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:53.945 12:45:52 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:53.945 12:45:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:53.945 12:45:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:53.945 12:45:52 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:17:53.945 12:45:52 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:17:53.945 12:45:52 -- nvmf/common.sh@285 -- # xtrace_disable 00:17:53.945 12:45:52 -- 
common/autotest_common.sh@10 -- # set +x 00:17:56.473 12:45:54 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:56.473 12:45:54 -- nvmf/common.sh@291 -- # pci_devs=() 00:17:56.473 12:45:54 -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:56.473 12:45:54 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:56.473 12:45:54 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:56.473 12:45:54 -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:56.473 12:45:54 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:56.473 12:45:54 -- nvmf/common.sh@295 -- # net_devs=() 00:17:56.473 12:45:54 -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:56.473 12:45:54 -- nvmf/common.sh@296 -- # e810=() 00:17:56.473 12:45:54 -- nvmf/common.sh@296 -- # local -ga e810 00:17:56.473 12:45:54 -- nvmf/common.sh@297 -- # x722=() 00:17:56.473 12:45:54 -- nvmf/common.sh@297 -- # local -ga x722 00:17:56.473 12:45:54 -- nvmf/common.sh@298 -- # mlx=() 00:17:56.473 12:45:54 -- nvmf/common.sh@298 -- # local -ga mlx 00:17:56.473 12:45:54 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:56.473 12:45:54 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:56.473 12:45:54 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:56.473 12:45:54 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:56.473 12:45:54 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:56.473 12:45:54 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:56.473 12:45:54 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:56.473 12:45:54 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:56.473 12:45:54 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:56.473 12:45:54 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:56.473 12:45:54 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:56.474 12:45:54 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:56.474 12:45:54 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:56.474 12:45:54 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:56.474 12:45:54 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:56.474 12:45:54 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:56.474 12:45:54 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:56.474 12:45:54 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:56.474 12:45:54 -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:17:56.474 Found 0000:82:00.0 (0x8086 - 0x159b) 00:17:56.474 12:45:54 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:56.474 12:45:54 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:56.474 12:45:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:56.474 12:45:54 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:56.474 12:45:54 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:56.474 12:45:54 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:56.474 12:45:54 -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:17:56.474 Found 0000:82:00.1 (0x8086 - 0x159b) 00:17:56.474 12:45:54 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:56.474 12:45:54 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:56.474 12:45:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:56.474 12:45:54 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
00:17:56.474 12:45:54 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:56.474 12:45:54 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:56.474 12:45:54 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:56.474 12:45:54 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:56.474 12:45:54 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:56.474 12:45:54 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:56.474 12:45:54 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:56.474 12:45:54 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:56.474 12:45:54 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:17:56.474 Found net devices under 0000:82:00.0: cvl_0_0 00:17:56.474 12:45:54 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:56.474 12:45:54 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:56.474 12:45:54 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:56.474 12:45:55 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:56.474 12:45:55 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:56.474 12:45:55 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:17:56.474 Found net devices under 0000:82:00.1: cvl_0_1 00:17:56.474 12:45:55 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:56.474 12:45:55 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:17:56.474 12:45:55 -- nvmf/common.sh@403 -- # is_hw=yes 00:17:56.474 12:45:55 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:17:56.474 12:45:55 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:17:56.474 12:45:55 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:17:56.474 12:45:55 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:56.474 12:45:55 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:56.474 12:45:55 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:56.474 12:45:55 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:56.474 12:45:55 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:56.474 12:45:55 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:56.474 12:45:55 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:56.474 12:45:55 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:56.474 12:45:55 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:56.474 12:45:55 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:56.474 12:45:55 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:56.474 12:45:55 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:56.474 12:45:55 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:56.474 12:45:55 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:56.474 12:45:55 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:56.474 12:45:55 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:56.474 12:45:55 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:56.474 12:45:55 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:56.474 12:45:55 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:56.474 12:45:55 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:56.474 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:56.474 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms 00:17:56.474 00:17:56.474 --- 10.0.0.2 ping statistics --- 00:17:56.474 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:56.474 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:17:56.474 12:45:55 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:56.474 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:56.474 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:17:56.474 00:17:56.474 --- 10.0.0.1 ping statistics --- 00:17:56.474 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:56.474 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:17:56.474 12:45:55 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:56.474 12:45:55 -- nvmf/common.sh@411 -- # return 0 00:17:56.474 12:45:55 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:56.474 12:45:55 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:56.474 12:45:55 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:17:56.474 12:45:55 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:17:56.474 12:45:55 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:56.474 12:45:55 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:17:56.474 12:45:55 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:17:56.474 12:45:55 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:17:56.474 12:45:55 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:56.474 12:45:55 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:56.474 12:45:55 -- common/autotest_common.sh@10 -- # set +x 00:17:56.474 12:45:55 -- nvmf/common.sh@470 -- # nvmfpid=1222023 00:17:56.474 12:45:55 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:56.474 12:45:55 -- nvmf/common.sh@471 -- # waitforlisten 1222023 00:17:56.474 12:45:55 -- common/autotest_common.sh@817 -- # '[' -z 1222023 ']' 00:17:56.474 12:45:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:56.474 12:45:55 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:56.474 12:45:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:56.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:56.474 12:45:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:56.474 12:45:55 -- common/autotest_common.sh@10 -- # set +x 00:17:56.474 [2024-04-16 12:45:55.210300] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 00:17:56.474 [2024-04-16 12:45:55.210388] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:56.474 EAL: No free 2048 kB hugepages reported on node 1 00:17:56.474 [2024-04-16 12:45:55.284490] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:56.474 [2024-04-16 12:45:55.391257] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:56.474 [2024-04-16 12:45:55.391311] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
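The nvmf_tcp_init sequence above builds the back-to-back topology used for the rest of the run: the first E810 port (cvl_0_0) moves into a fresh network namespace and gets 10.0.0.2, the second (cvl_0_1) stays in the root namespace as the 10.0.0.1 initiator, TCP port 4420 is opened, and both directions are ping-verified before the target app starts. Condensed from the trace; the interface names are specific to this machine:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # root ns -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> initiator
  modprobe nvme-tcp                                    # kernel initiator driver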
00:17:56.474 [2024-04-16 12:45:55.391341] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:56.474 [2024-04-16 12:45:55.391353] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:56.474 [2024-04-16 12:45:55.391363] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:56.474 [2024-04-16 12:45:55.391493] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:56.474 [2024-04-16 12:45:55.391522] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:56.474 [2024-04-16 12:45:55.391605] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:56.474 [2024-04-16 12:45:55.391608] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:56.474 12:45:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:56.474 12:45:55 -- common/autotest_common.sh@850 -- # return 0 00:17:56.474 12:45:55 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:56.474 12:45:55 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:56.474 12:45:55 -- common/autotest_common.sh@10 -- # set +x 00:17:56.732 12:45:55 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:56.732 12:45:55 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:17:56.732 12:45:55 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:18:00.011 12:45:58 -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:18:00.011 12:45:58 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:18:00.011 12:45:58 -- host/perf.sh@30 -- # local_nvme_trid=0000:81:00.0 00:18:00.011 12:45:58 -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:00.268 12:45:59 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:18:00.268 12:45:59 -- host/perf.sh@33 -- # '[' -n 0000:81:00.0 ']' 00:18:00.268 12:45:59 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:18:00.268 12:45:59 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:18:00.268 12:45:59 -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:00.525 [2024-04-16 12:45:59.393891] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:00.525 12:45:59 -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:00.783 12:45:59 -- host/perf.sh@45 -- # for bdev in $bdevs 00:18:00.783 12:45:59 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:01.040 12:45:59 -- host/perf.sh@45 -- # for bdev in $bdevs 00:18:01.040 12:45:59 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:18:01.298 12:46:00 -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:01.555 [2024-04-16 12:46:00.373408] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:01.555 12:46:00 -- host/perf.sh@49 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:01.812 12:46:00 -- host/perf.sh@52 -- # '[' -n 0000:81:00.0 ']' 00:18:01.812 12:46:00 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:81:00.0' 00:18:01.812 12:46:00 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:18:01.812 12:46:00 -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:81:00.0' 00:18:03.184 Initializing NVMe Controllers 00:18:03.184 Attached to NVMe Controller at 0000:81:00.0 [8086:0a54] 00:18:03.184 Associating PCIE (0000:81:00.0) NSID 1 with lcore 0 00:18:03.184 Initialization complete. Launching workers. 00:18:03.184 ======================================================== 00:18:03.184 Latency(us) 00:18:03.184 Device Information : IOPS MiB/s Average min max 00:18:03.184 PCIE (0000:81:00.0) NSID 1 from core 0: 86100.34 336.33 371.06 37.14 5259.44 00:18:03.184 ======================================================== 00:18:03.184 Total : 86100.34 336.33 371.06 37.14 5259.44 00:18:03.184 00:18:03.184 12:46:01 -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:18:03.184 EAL: No free 2048 kB hugepages reported on node 1 00:18:04.555 Initializing NVMe Controllers 00:18:04.555 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:04.555 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:04.555 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:18:04.555 Initialization complete. Launching workers. 00:18:04.555 ======================================================== 00:18:04.555 Latency(us) 00:18:04.555 Device Information : IOPS MiB/s Average min max 00:18:04.555 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 82.71 0.32 12200.06 188.99 45644.46 00:18:04.555 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 56.80 0.22 18445.04 4996.73 47907.63 00:18:04.555 ======================================================== 00:18:04.556 Total : 139.51 0.54 14742.66 188.99 47907.63 00:18:04.556 00:18:04.556 12:46:03 -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:18:04.556 EAL: No free 2048 kB hugepages reported on node 1 00:18:05.928 Initializing NVMe Controllers 00:18:05.928 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:05.928 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:05.928 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:18:05.928 Initialization complete. Launching workers. 
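The provisioning traced just above is the standard rpc.py sequence for exposing bdevs over NVMe/TCP: create the transport, create a subsystem, attach the Malloc0 and Nvme0n1 bdevs as namespaces, then listen on 10.0.0.2:4420, plus the discovery service. Replayed as bare commands against a running target, with the long script path shortened to rpc.py but flags exactly as traced:

  rpc.py nvmf_create_transport -t tcp -o
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

The q=1 fabrics run whose workers were just launched resumes below.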
00:18:05.928 ======================================================== 00:18:05.928 Latency(us) 00:18:05.928 Device Information : IOPS MiB/s Average min max 00:18:05.928 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8076.98 31.55 3973.06 439.58 8526.02 00:18:05.928 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3853.99 15.05 8341.84 5069.77 18188.04 00:18:05.928 ======================================================== 00:18:05.928 Total : 11930.98 46.61 5384.28 439.58 18188.04 00:18:05.928 00:18:05.928 12:46:04 -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:18:05.928 12:46:04 -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:18:05.929 12:46:04 -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:18:05.929 EAL: No free 2048 kB hugepages reported on node 1 00:18:08.455 Initializing NVMe Controllers 00:18:08.455 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:08.455 Controller IO queue size 128, less than required. 00:18:08.455 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:08.455 Controller IO queue size 128, less than required. 00:18:08.455 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:08.455 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:08.455 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:18:08.455 Initialization complete. Launching workers. 00:18:08.455 ======================================================== 00:18:08.455 Latency(us) 00:18:08.455 Device Information : IOPS MiB/s Average min max 00:18:08.455 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1020.28 255.07 128548.35 69393.29 223565.92 00:18:08.455 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 582.87 145.72 229935.14 128789.16 362117.30 00:18:08.455 ======================================================== 00:18:08.455 Total : 1603.16 400.79 165410.53 69393.29 362117.30 00:18:08.455 00:18:08.455 12:46:07 -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:18:08.455 EAL: No free 2048 kB hugepages reported on node 1 00:18:08.455 No valid NVMe controllers or AIO or URING devices found 00:18:08.712 Initializing NVMe Controllers 00:18:08.712 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:08.712 Controller IO queue size 128, less than required. 00:18:08.712 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:08.712 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:18:08.712 Controller IO queue size 128, less than required. 00:18:08.713 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:08.713 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:18:08.713 WARNING: Some requested NVMe devices were skipped 00:18:08.713 12:46:07 -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:18:08.713 EAL: No free 2048 kB hugepages reported on node 1 00:18:11.273 Initializing NVMe Controllers 00:18:11.273 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:11.273 Controller IO queue size 128, less than required. 00:18:11.273 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:11.273 Controller IO queue size 128, less than required. 00:18:11.273 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:11.273 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:11.273 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:18:11.273 Initialization complete. Launching workers. 00:18:11.273 00:18:11.273 ==================== 00:18:11.273 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:18:11.273 TCP transport: 00:18:11.273 polls: 20737 00:18:11.273 idle_polls: 7707 00:18:11.273 sock_completions: 13030 00:18:11.273 nvme_completions: 3759 00:18:11.273 submitted_requests: 5686 00:18:11.273 queued_requests: 1 00:18:11.273 00:18:11.273 ==================== 00:18:11.273 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:18:11.273 TCP transport: 00:18:11.273 polls: 22909 00:18:11.273 idle_polls: 7974 00:18:11.273 sock_completions: 14935 00:18:11.273 nvme_completions: 4355 00:18:11.273 submitted_requests: 6546 00:18:11.273 queued_requests: 1 00:18:11.273 ======================================================== 00:18:11.273 Latency(us) 00:18:11.273 Device Information : IOPS MiB/s Average min max 00:18:11.273 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 939.43 234.86 139419.92 78026.72 268302.20 00:18:11.273 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1088.42 272.10 121319.33 64568.15 198851.96 00:18:11.273 ======================================================== 00:18:11.273 Total : 2027.85 506.96 129704.69 64568.15 268302.20 00:18:11.273 00:18:11.273 12:46:10 -- host/perf.sh@66 -- # sync 00:18:11.273 12:46:10 -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:11.531 12:46:10 -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:18:11.531 12:46:10 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:18:11.531 12:46:10 -- host/perf.sh@114 -- # nvmftestfini 00:18:11.531 12:46:10 -- nvmf/common.sh@477 -- # nvmfcleanup 00:18:11.531 12:46:10 -- nvmf/common.sh@117 -- # sync 00:18:11.531 12:46:10 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:11.531 12:46:10 -- nvmf/common.sh@120 -- # set +e 00:18:11.531 12:46:10 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:11.531 12:46:10 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:11.531 rmmod nvme_tcp 00:18:11.531 rmmod nvme_fabrics 00:18:11.531 rmmod nvme_keyring 00:18:11.531 12:46:10 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:11.531 12:46:10 -- nvmf/common.sh@124 -- # set -e 00:18:11.531 12:46:10 -- nvmf/common.sh@125 -- # return 0 00:18:11.531 12:46:10 -- 
nvmf/common.sh@478 -- # '[' -n 1222023 ']' 00:18:11.531 12:46:10 -- nvmf/common.sh@479 -- # killprocess 1222023 00:18:11.531 12:46:10 -- common/autotest_common.sh@936 -- # '[' -z 1222023 ']' 00:18:11.531 12:46:10 -- common/autotest_common.sh@940 -- # kill -0 1222023 00:18:11.531 12:46:10 -- common/autotest_common.sh@941 -- # uname 00:18:11.531 12:46:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:11.531 12:46:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1222023 00:18:11.531 12:46:10 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:11.531 12:46:10 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:11.531 12:46:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1222023' 00:18:11.531 killing process with pid 1222023 00:18:11.531 12:46:10 -- common/autotest_common.sh@955 -- # kill 1222023 00:18:11.531 12:46:10 -- common/autotest_common.sh@960 -- # wait 1222023 00:18:14.058 12:46:12 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:14.058 12:46:12 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:18:14.058 12:46:12 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:18:14.058 12:46:12 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:14.058 12:46:12 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:14.058 12:46:12 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:14.058 12:46:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:14.058 12:46:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:15.960 12:46:15 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:16.219 00:18:16.219 real 0m22.496s 00:18:16.219 user 1m9.139s 00:18:16.219 sys 0m5.672s 00:18:16.219 12:46:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:16.219 12:46:15 -- common/autotest_common.sh@10 -- # set +x 00:18:16.219 ************************************ 00:18:16.219 END TEST nvmf_perf 00:18:16.219 ************************************ 00:18:16.219 12:46:15 -- nvmf/nvmf.sh@97 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:18:16.219 12:46:15 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:16.219 12:46:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:16.219 12:46:15 -- common/autotest_common.sh@10 -- # set +x 00:18:16.219 ************************************ 00:18:16.219 START TEST nvmf_fio_host 00:18:16.219 ************************************ 00:18:16.219 12:46:15 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:18:16.219 * Looking for test storage... 
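One set of numbers from the nvmf_perf test worth flagging before the fio output scrolls past: in the --transport-stat run above, NSID 1 saw 20737 polls with 7707 idle, so roughly 63% of polls did useful work, reaping 13030 socket completions for 3759 NVMe completions (several socket events per busy poll). Recomputing that busy fraction from the printed counters:

  # Busy-poll fraction for NSID 1; counters copied from the statistics above.
  echo "scale=3; (20737 - 7707) / 20737" | bc    # -> .628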
00:18:16.219 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:18:16.219 12:46:15 -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:16.219 12:46:15 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:16.219 12:46:15 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:16.219 12:46:15 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:16.219 12:46:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:16.219 12:46:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:16.220 12:46:15 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:16.220 12:46:15 -- paths/export.sh@5 -- # export PATH 00:18:16.220 12:46:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:16.220 12:46:15 -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:16.220 12:46:15 -- nvmf/common.sh@7 -- # uname -s 00:18:16.220 12:46:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:16.220 12:46:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:16.220 12:46:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:16.220 12:46:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:16.220 12:46:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:16.220 12:46:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:16.220 12:46:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:16.220 12:46:15 -- 
nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:16.220 12:46:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:16.220 12:46:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:16.220 12:46:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:18:16.220 12:46:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:18:16.220 12:46:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:16.220 12:46:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:16.220 12:46:15 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:16.220 12:46:15 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:16.220 12:46:15 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:16.220 12:46:15 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:16.220 12:46:15 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:16.220 12:46:15 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:16.220 12:46:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:16.220 12:46:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:16.220 12:46:15 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:16.220 12:46:15 -- paths/export.sh@5 -- # export PATH 00:18:16.220 12:46:15 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:16.220 12:46:15 -- nvmf/common.sh@47 -- # : 0 00:18:16.220 12:46:15 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:16.220 12:46:15 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:16.220 12:46:15 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:16.220 12:46:15 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:16.220 12:46:15 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:16.220 12:46:15 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:16.220 12:46:15 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:16.220 12:46:15 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:16.220 12:46:15 -- host/fio.sh@12 -- # nvmftestinit 00:18:16.220 12:46:15 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:18:16.220 12:46:15 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:16.220 12:46:15 -- nvmf/common.sh@437 -- # prepare_net_devs 00:18:16.220 12:46:15 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:18:16.220 12:46:15 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:18:16.220 12:46:15 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:16.220 12:46:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:16.220 12:46:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:16.220 12:46:15 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:18:16.220 12:46:15 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:18:16.220 12:46:15 -- nvmf/common.sh@285 -- # xtrace_disable 00:18:16.220 12:46:15 -- common/autotest_common.sh@10 -- # set +x 00:18:18.751 12:46:17 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:18.751 12:46:17 -- nvmf/common.sh@291 -- # pci_devs=() 00:18:18.751 12:46:17 -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:18.751 12:46:17 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:18.751 12:46:17 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:18.751 12:46:17 -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:18.751 12:46:17 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:18.751 12:46:17 -- nvmf/common.sh@295 -- # net_devs=() 00:18:18.751 12:46:17 -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:18.751 12:46:17 -- nvmf/common.sh@296 -- # e810=() 00:18:18.751 12:46:17 -- nvmf/common.sh@296 -- # local -ga e810 00:18:18.751 12:46:17 -- nvmf/common.sh@297 -- # x722=() 00:18:18.751 12:46:17 -- nvmf/common.sh@297 -- # local -ga x722 00:18:18.751 12:46:17 -- nvmf/common.sh@298 -- # mlx=() 00:18:18.751 12:46:17 -- nvmf/common.sh@298 -- # local -ga mlx 00:18:18.751 12:46:17 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:18.751 12:46:17 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:18.751 12:46:17 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:18.751 12:46:17 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:18.751 12:46:17 -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:18.751 12:46:17 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:18.751 12:46:17 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:18.751 12:46:17 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:18.751 12:46:17 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:18.751 12:46:17 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:18.751 12:46:17 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:18.751 12:46:17 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:18.751 12:46:17 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:18.751 12:46:17 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:18.751 12:46:17 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:18.751 12:46:17 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:18.751 12:46:17 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:18.751 12:46:17 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:18.751 12:46:17 -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:18:18.751 Found 0000:82:00.0 (0x8086 - 0x159b) 00:18:18.751 12:46:17 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:18.751 12:46:17 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:18.751 12:46:17 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:18.751 12:46:17 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:18.751 12:46:17 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:18.751 12:46:17 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:18.751 12:46:17 -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:18:18.751 Found 0000:82:00.1 (0x8086 - 0x159b) 00:18:18.751 12:46:17 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:18.751 12:46:17 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:18.751 12:46:17 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:18.751 12:46:17 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:18.751 12:46:17 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:18.751 12:46:17 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:18.751 12:46:17 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:18.751 12:46:17 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:18.751 12:46:17 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:18.751 12:46:17 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:18.751 12:46:17 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:18.751 12:46:17 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:18.751 12:46:17 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:18:18.751 Found net devices under 0000:82:00.0: cvl_0_0 00:18:18.751 12:46:17 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:18.751 12:46:17 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:18.751 12:46:17 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:18.751 12:46:17 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:18.751 12:46:17 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:18.751 12:46:17 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:18:18.751 Found net devices under 0000:82:00.1: cvl_0_1 00:18:18.751 12:46:17 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:18.751 12:46:17 -- 
nvmf/common.sh@393 -- # (( 2 == 0 )) 00:18:18.751 12:46:17 -- nvmf/common.sh@403 -- # is_hw=yes 00:18:18.751 12:46:17 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:18:18.751 12:46:17 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:18:18.751 12:46:17 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:18:18.751 12:46:17 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:18.751 12:46:17 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:18.751 12:46:17 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:18.751 12:46:17 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:18.751 12:46:17 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:18.751 12:46:17 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:18.751 12:46:17 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:18.751 12:46:17 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:18.751 12:46:17 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:18.751 12:46:17 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:18.751 12:46:17 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:18.751 12:46:17 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:18.751 12:46:17 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:18.751 12:46:17 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:18.751 12:46:17 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:18.751 12:46:17 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:18.751 12:46:17 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:18.751 12:46:17 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:18.751 12:46:17 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:18.752 12:46:17 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:18.752 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:18.752 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.235 ms 00:18:18.752 00:18:18.752 --- 10.0.0.2 ping statistics --- 00:18:18.752 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:18.752 rtt min/avg/max/mdev = 0.235/0.235/0.235/0.000 ms 00:18:18.752 12:46:17 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:18.752 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:18.752 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.160 ms 00:18:18.752 00:18:18.752 --- 10.0.0.1 ping statistics --- 00:18:18.752 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:18.752 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:18:18.752 12:46:17 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:18.752 12:46:17 -- nvmf/common.sh@411 -- # return 0 00:18:18.752 12:46:17 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:18:18.752 12:46:17 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:18.752 12:46:17 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:18:18.752 12:46:17 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:18:18.752 12:46:17 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:18.752 12:46:17 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:18:18.752 12:46:17 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:18:19.010 12:46:17 -- host/fio.sh@14 -- # [[ y != y ]] 00:18:19.010 12:46:17 -- host/fio.sh@19 -- # timing_enter start_nvmf_tgt 00:18:19.010 12:46:17 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:19.010 12:46:17 -- common/autotest_common.sh@10 -- # set +x 00:18:19.010 12:46:17 -- host/fio.sh@22 -- # nvmfpid=1226418 00:18:19.010 12:46:17 -- host/fio.sh@21 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:19.010 12:46:17 -- host/fio.sh@24 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:19.010 12:46:17 -- host/fio.sh@26 -- # waitforlisten 1226418 00:18:19.010 12:46:17 -- common/autotest_common.sh@817 -- # '[' -z 1226418 ']' 00:18:19.010 12:46:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:19.010 12:46:17 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:19.010 12:46:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:19.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:19.010 12:46:17 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:19.010 12:46:17 -- common/autotest_common.sh@10 -- # set +x 00:18:19.010 [2024-04-16 12:46:17.869996] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 00:18:19.010 [2024-04-16 12:46:17.870075] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:19.010 EAL: No free 2048 kB hugepages reported on node 1 00:18:19.010 [2024-04-16 12:46:17.949317] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:19.010 [2024-04-16 12:46:18.065265] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:19.010 [2024-04-16 12:46:18.065331] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:19.010 [2024-04-16 12:46:18.065349] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:19.010 [2024-04-16 12:46:18.065363] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:19.010 [2024-04-16 12:46:18.065375] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
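The fio host test reuses the target launch pattern seen earlier: nvmf_tgt runs inside the namespace with instance id 0, all tracepoint groups enabled (-e 0xFFFF) and four cores (-m 0xF), and the harness blocks in waitforlisten until the RPC socket answers. A simplified equivalent of that start-and-wait step, assuming the default /var/tmp/spdk.sock control socket and relative paths into the SPDK tree:

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  pid=$!
  # waitforlisten, reduced to its core: poll the RPC socket until it answers.
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$pid" 2>/dev/null || { echo "target exited early" >&2; exit 1; }
    sleep 0.5
  done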
00:18:19.010 [2024-04-16 12:46:18.065462] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:19.010 [2024-04-16 12:46:18.065498] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:19.010 [2024-04-16 12:46:18.065613] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:19.010 [2024-04-16 12:46:18.065616] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:19.943 12:46:18 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:19.943 12:46:18 -- common/autotest_common.sh@850 -- # return 0 00:18:19.943 12:46:18 -- host/fio.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:19.943 12:46:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:19.943 12:46:18 -- common/autotest_common.sh@10 -- # set +x 00:18:19.943 [2024-04-16 12:46:18.821418] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:19.943 12:46:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:19.943 12:46:18 -- host/fio.sh@28 -- # timing_exit start_nvmf_tgt 00:18:19.943 12:46:18 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:19.943 12:46:18 -- common/autotest_common.sh@10 -- # set +x 00:18:19.943 12:46:18 -- host/fio.sh@30 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:19.943 12:46:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:19.943 12:46:18 -- common/autotest_common.sh@10 -- # set +x 00:18:19.943 Malloc1 00:18:19.943 12:46:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:19.943 12:46:18 -- host/fio.sh@31 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:19.943 12:46:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:19.943 12:46:18 -- common/autotest_common.sh@10 -- # set +x 00:18:19.943 12:46:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:19.943 12:46:18 -- host/fio.sh@32 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:19.943 12:46:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:19.943 12:46:18 -- common/autotest_common.sh@10 -- # set +x 00:18:19.943 12:46:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:19.943 12:46:18 -- host/fio.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:19.943 12:46:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:19.943 12:46:18 -- common/autotest_common.sh@10 -- # set +x 00:18:19.943 [2024-04-16 12:46:18.892359] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:19.943 12:46:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:19.943 12:46:18 -- host/fio.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:19.943 12:46:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:19.943 12:46:18 -- common/autotest_common.sh@10 -- # set +x 00:18:19.943 12:46:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:19.943 12:46:18 -- host/fio.sh@36 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:18:19.943 12:46:18 -- host/fio.sh@39 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:18:19.943 12:46:18 -- common/autotest_common.sh@1346 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:18:19.943 12:46:18 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:18:19.943 12:46:18 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:19.943 12:46:18 -- common/autotest_common.sh@1325 -- # local sanitizers 00:18:19.943 12:46:18 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:18:19.943 12:46:18 -- common/autotest_common.sh@1327 -- # shift 00:18:19.943 12:46:18 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:18:19.943 12:46:18 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:18:19.943 12:46:18 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:18:19.943 12:46:18 -- common/autotest_common.sh@1331 -- # grep libasan 00:18:19.943 12:46:18 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:18:19.943 12:46:18 -- common/autotest_common.sh@1331 -- # asan_lib= 00:18:19.943 12:46:18 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:18:19.943 12:46:18 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:18:19.943 12:46:18 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:18:19.943 12:46:18 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:18:19.943 12:46:18 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:18:19.943 12:46:18 -- common/autotest_common.sh@1331 -- # asan_lib= 00:18:19.943 12:46:18 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:18:19.944 12:46:18 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:18:19.944 12:46:18 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:18:20.201 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:18:20.201 fio-3.35 00:18:20.201 Starting 1 thread 00:18:20.201 EAL: No free 2048 kB hugepages reported on node 1 00:18:22.730 00:18:22.730 test: (groupid=0, jobs=1): err= 0: pid=1226757: Tue Apr 16 12:46:21 2024 00:18:22.730 read: IOPS=8802, BW=34.4MiB/s (36.1MB/s)(69.0MiB/2006msec) 00:18:22.730 slat (usec): min=2, max=155, avg= 2.96, stdev= 2.38 00:18:22.730 clat (usec): min=3203, max=13281, avg=8024.12, stdev=623.69 00:18:22.730 lat (usec): min=3224, max=13284, avg=8027.08, stdev=623.62 00:18:22.730 clat percentiles (usec): 00:18:22.730 | 1.00th=[ 6652], 5.00th=[ 7111], 10.00th=[ 7308], 20.00th=[ 7504], 00:18:22.730 | 30.00th=[ 7701], 40.00th=[ 7898], 50.00th=[ 8029], 60.00th=[ 8160], 00:18:22.730 | 70.00th=[ 8356], 80.00th=[ 8455], 90.00th=[ 8717], 95.00th=[ 8979], 00:18:22.730 | 99.00th=[ 9503], 99.50th=[ 9765], 99.90th=[11731], 99.95th=[12780], 00:18:22.730 | 99.99th=[13173] 00:18:22.730 bw ( KiB/s): min=34592, max=35544, per=99.92%, avg=35180.00, stdev=415.31, samples=4 00:18:22.730 iops : min= 8648, max= 8886, avg=8795.00, stdev=103.83, samples=4 00:18:22.730 write: IOPS=8812, BW=34.4MiB/s (36.1MB/s)(69.1MiB/2006msec); 0 zone resets 00:18:22.730 slat (usec): min=2, max=154, avg= 3.14, stdev= 2.12 00:18:22.730 clat (usec): min=1536, 
max=12612, avg=6458.08, stdev=541.53 00:18:22.730 lat (usec): min=1543, max=12615, avg=6461.22, stdev=541.46 00:18:22.730 clat percentiles (usec): 00:18:22.730 | 1.00th=[ 5276], 5.00th=[ 5669], 10.00th=[ 5866], 20.00th=[ 6063], 00:18:22.730 | 30.00th=[ 6194], 40.00th=[ 6325], 50.00th=[ 6456], 60.00th=[ 6587], 00:18:22.730 | 70.00th=[ 6718], 80.00th=[ 6849], 90.00th=[ 7046], 95.00th=[ 7242], 00:18:22.730 | 99.00th=[ 7635], 99.50th=[ 7767], 99.90th=[11076], 99.95th=[11600], 00:18:22.730 | 99.99th=[11863] 00:18:22.730 bw ( KiB/s): min=34648, max=35944, per=99.96%, avg=35236.00, stdev=618.90, samples=4 00:18:22.730 iops : min= 8662, max= 8986, avg=8809.00, stdev=154.73, samples=4 00:18:22.730 lat (msec) : 2=0.02%, 4=0.13%, 10=99.62%, 20=0.24% 00:18:22.730 cpu : usr=64.24%, sys=32.02%, ctx=36, majf=0, minf=45 00:18:22.730 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:18:22.730 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:22.730 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:22.730 issued rwts: total=17657,17678,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:22.730 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:22.730 00:18:22.730 Run status group 0 (all jobs): 00:18:22.730 READ: bw=34.4MiB/s (36.1MB/s), 34.4MiB/s-34.4MiB/s (36.1MB/s-36.1MB/s), io=69.0MiB (72.3MB), run=2006-2006msec 00:18:22.730 WRITE: bw=34.4MiB/s (36.1MB/s), 34.4MiB/s-34.4MiB/s (36.1MB/s-36.1MB/s), io=69.1MiB (72.4MB), run=2006-2006msec 00:18:22.730 12:46:21 -- host/fio.sh@43 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:18:22.730 12:46:21 -- common/autotest_common.sh@1346 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:18:22.730 12:46:21 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:18:22.730 12:46:21 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:22.730 12:46:21 -- common/autotest_common.sh@1325 -- # local sanitizers 00:18:22.730 12:46:21 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:18:22.730 12:46:21 -- common/autotest_common.sh@1327 -- # shift 00:18:22.730 12:46:21 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:18:22.730 12:46:21 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:18:22.730 12:46:21 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:18:22.730 12:46:21 -- common/autotest_common.sh@1331 -- # grep libasan 00:18:22.730 12:46:21 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:18:22.730 12:46:21 -- common/autotest_common.sh@1331 -- # asan_lib= 00:18:22.730 12:46:21 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:18:22.730 12:46:21 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:18:22.730 12:46:21 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:18:22.730 12:46:21 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:18:22.730 12:46:21 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:18:22.730 12:46:21 -- 
common/autotest_common.sh@1331 -- # asan_lib= 00:18:22.730 12:46:21 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:18:22.730 12:46:21 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:18:22.730 12:46:21 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:18:22.730 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:18:22.730 fio-3.35 00:18:22.730 Starting 1 thread 00:18:22.730 EAL: No free 2048 kB hugepages reported on node 1 00:18:25.267 00:18:25.267 test: (groupid=0, jobs=1): err= 0: pid=1227094: Tue Apr 16 12:46:24 2024 00:18:25.267 read: IOPS=8091, BW=126MiB/s (133MB/s)(254MiB/2009msec) 00:18:25.268 slat (usec): min=2, max=125, avg= 4.17, stdev= 2.40 00:18:25.268 clat (usec): min=2030, max=52741, avg=9462.21, stdev=4067.50 00:18:25.268 lat (usec): min=2034, max=52745, avg=9466.38, stdev=4067.52 00:18:25.268 clat percentiles (usec): 00:18:25.268 | 1.00th=[ 4752], 5.00th=[ 5800], 10.00th=[ 6390], 20.00th=[ 7308], 00:18:25.268 | 30.00th=[ 7963], 40.00th=[ 8455], 50.00th=[ 9110], 60.00th=[ 9765], 00:18:25.268 | 70.00th=[10290], 80.00th=[10945], 90.00th=[11994], 95.00th=[13042], 00:18:25.268 | 99.00th=[16450], 99.50th=[46400], 99.90th=[51643], 99.95th=[52167], 00:18:25.268 | 99.99th=[52691] 00:18:25.268 bw ( KiB/s): min=60416, max=70688, per=50.82%, avg=65792.00, stdev=5528.94, samples=4 00:18:25.268 iops : min= 3776, max= 4418, avg=4112.00, stdev=345.56, samples=4 00:18:25.268 write: IOPS=4752, BW=74.3MiB/s (77.9MB/s)(135MiB/1812msec); 0 zone resets 00:18:25.268 slat (usec): min=30, max=194, avg=37.81, stdev= 7.04 00:18:25.268 clat (usec): min=6686, max=17923, avg=11264.66, stdev=1956.54 00:18:25.268 lat (usec): min=6722, max=17958, avg=11302.47, stdev=1956.62 00:18:25.268 clat percentiles (usec): 00:18:25.268 | 1.00th=[ 7701], 5.00th=[ 8455], 10.00th=[ 8979], 20.00th=[ 9634], 00:18:25.268 | 30.00th=[10028], 40.00th=[10552], 50.00th=[10945], 60.00th=[11469], 00:18:25.268 | 70.00th=[12125], 80.00th=[13042], 90.00th=[14091], 95.00th=[14877], 00:18:25.268 | 99.00th=[16188], 99.50th=[16581], 99.90th=[17695], 99.95th=[17957], 00:18:25.268 | 99.99th=[17957] 00:18:25.268 bw ( KiB/s): min=61472, max=75200, per=90.26%, avg=68640.00, stdev=6624.39, samples=4 00:18:25.268 iops : min= 3842, max= 4700, avg=4290.00, stdev=414.02, samples=4 00:18:25.268 lat (msec) : 4=0.14%, 10=52.51%, 20=46.84%, 50=0.35%, 100=0.16% 00:18:25.268 cpu : usr=79.98%, sys=17.38%, ctx=64, majf=0, minf=71 00:18:25.268 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:18:25.268 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:25.268 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:25.268 issued rwts: total=16256,8612,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:25.268 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:25.268 00:18:25.268 Run status group 0 (all jobs): 00:18:25.268 READ: bw=126MiB/s (133MB/s), 126MiB/s-126MiB/s (133MB/s-133MB/s), io=254MiB (266MB), run=2009-2009msec 00:18:25.268 WRITE: bw=74.3MiB/s (77.9MB/s), 74.3MiB/s-74.3MiB/s (77.9MB/s-77.9MB/s), io=135MiB (141MB), run=1812-1812msec 00:18:25.268 12:46:24 -- host/fio.sh@45 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:25.268 12:46:24 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:18:25.268 12:46:24 -- common/autotest_common.sh@10 -- # set +x 00:18:25.268 12:46:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:25.268 12:46:24 -- host/fio.sh@47 -- # '[' 0 -eq 1 ']' 00:18:25.268 12:46:24 -- host/fio.sh@81 -- # trap - SIGINT SIGTERM EXIT 00:18:25.268 12:46:24 -- host/fio.sh@83 -- # rm -f ./local-test-0-verify.state 00:18:25.268 12:46:24 -- host/fio.sh@84 -- # nvmftestfini 00:18:25.268 12:46:24 -- nvmf/common.sh@477 -- # nvmfcleanup 00:18:25.268 12:46:24 -- nvmf/common.sh@117 -- # sync 00:18:25.268 12:46:24 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:25.268 12:46:24 -- nvmf/common.sh@120 -- # set +e 00:18:25.268 12:46:24 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:25.268 12:46:24 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:25.268 rmmod nvme_tcp 00:18:25.268 rmmod nvme_fabrics 00:18:25.268 rmmod nvme_keyring 00:18:25.268 12:46:24 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:25.268 12:46:24 -- nvmf/common.sh@124 -- # set -e 00:18:25.268 12:46:24 -- nvmf/common.sh@125 -- # return 0 00:18:25.268 12:46:24 -- nvmf/common.sh@478 -- # '[' -n 1226418 ']' 00:18:25.268 12:46:24 -- nvmf/common.sh@479 -- # killprocess 1226418 00:18:25.268 12:46:24 -- common/autotest_common.sh@936 -- # '[' -z 1226418 ']' 00:18:25.268 12:46:24 -- common/autotest_common.sh@940 -- # kill -0 1226418 00:18:25.268 12:46:24 -- common/autotest_common.sh@941 -- # uname 00:18:25.268 12:46:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:25.268 12:46:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1226418 00:18:25.268 12:46:24 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:25.268 12:46:24 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:25.268 12:46:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1226418' 00:18:25.268 killing process with pid 1226418 00:18:25.268 12:46:24 -- common/autotest_common.sh@955 -- # kill 1226418 00:18:25.268 12:46:24 -- common/autotest_common.sh@960 -- # wait 1226418 00:18:25.527 12:46:24 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:25.527 12:46:24 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:18:25.527 12:46:24 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:18:25.527 12:46:24 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:25.527 12:46:24 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:25.527 12:46:24 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:25.527 12:46:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:25.527 12:46:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:28.064 12:46:26 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:28.064 00:18:28.064 real 0m11.383s 00:18:28.064 user 0m30.062s 00:18:28.064 sys 0m3.980s 00:18:28.064 12:46:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:28.064 12:46:26 -- common/autotest_common.sh@10 -- # set +x 00:18:28.064 ************************************ 00:18:28.064 END TEST nvmf_fio_host 00:18:28.064 ************************************ 00:18:28.064 12:46:26 -- nvmf/nvmf.sh@98 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:18:28.064 12:46:26 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:28.064 12:46:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:28.064 12:46:26 -- common/autotest_common.sh@10 -- # 
set +x 00:18:28.064 ************************************ 00:18:28.064 START TEST nvmf_failover 00:18:28.064 ************************************ 00:18:28.064 12:46:26 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:18:28.064 * Looking for test storage... 00:18:28.064 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:18:28.064 12:46:26 -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:28.064 12:46:26 -- nvmf/common.sh@7 -- # uname -s 00:18:28.064 12:46:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:28.064 12:46:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:28.064 12:46:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:28.064 12:46:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:28.064 12:46:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:28.064 12:46:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:28.064 12:46:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:28.064 12:46:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:28.064 12:46:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:28.064 12:46:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:28.064 12:46:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:18:28.064 12:46:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:18:28.064 12:46:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:28.064 12:46:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:28.064 12:46:26 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:28.064 12:46:26 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:28.064 12:46:26 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:28.064 12:46:26 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:28.064 12:46:26 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:28.064 12:46:26 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:28.064 12:46:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:28.065 12:46:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:28.065 12:46:26 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:28.065 12:46:26 -- paths/export.sh@5 -- # export PATH 00:18:28.065 12:46:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:28.065 12:46:26 -- nvmf/common.sh@47 -- # : 0 00:18:28.065 12:46:26 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:28.065 12:46:26 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:28.065 12:46:26 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:28.065 12:46:26 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:28.065 12:46:26 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:28.065 12:46:26 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:28.065 12:46:26 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:28.065 12:46:26 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:28.065 12:46:26 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:28.065 12:46:26 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:28.065 12:46:26 -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:28.065 12:46:26 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:28.065 12:46:26 -- host/failover.sh@18 -- # nvmftestinit 00:18:28.065 12:46:26 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:18:28.065 12:46:26 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:28.065 12:46:26 -- nvmf/common.sh@437 -- # prepare_net_devs 00:18:28.065 12:46:26 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:18:28.065 12:46:26 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:18:28.065 12:46:26 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:28.065 12:46:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:28.065 12:46:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:28.065 12:46:26 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:18:28.065 12:46:26 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:18:28.065 12:46:26 -- nvmf/common.sh@285 -- # xtrace_disable 00:18:28.065 12:46:26 -- common/autotest_common.sh@10 -- # set +x 00:18:30.606 12:46:29 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:30.606 12:46:29 -- nvmf/common.sh@291 -- # pci_devs=() 00:18:30.606 12:46:29 -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:30.606 12:46:29 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:30.606 12:46:29 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:30.606 12:46:29 -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:30.606 12:46:29 -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:18:30.606 12:46:29 -- nvmf/common.sh@295 -- # net_devs=() 00:18:30.606 12:46:29 -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:30.606 12:46:29 -- nvmf/common.sh@296 -- # e810=() 00:18:30.606 12:46:29 -- nvmf/common.sh@296 -- # local -ga e810 00:18:30.606 12:46:29 -- nvmf/common.sh@297 -- # x722=() 00:18:30.606 12:46:29 -- nvmf/common.sh@297 -- # local -ga x722 00:18:30.606 12:46:29 -- nvmf/common.sh@298 -- # mlx=() 00:18:30.606 12:46:29 -- nvmf/common.sh@298 -- # local -ga mlx 00:18:30.606 12:46:29 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:30.606 12:46:29 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:30.606 12:46:29 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:30.606 12:46:29 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:30.606 12:46:29 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:30.606 12:46:29 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:30.606 12:46:29 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:30.606 12:46:29 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:30.606 12:46:29 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:30.606 12:46:29 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:30.606 12:46:29 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:30.606 12:46:29 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:30.606 12:46:29 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:30.606 12:46:29 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:30.606 12:46:29 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:30.606 12:46:29 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:30.606 12:46:29 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:30.606 12:46:29 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:30.606 12:46:29 -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:18:30.606 Found 0000:82:00.0 (0x8086 - 0x159b) 00:18:30.606 12:46:29 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:30.606 12:46:29 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:30.606 12:46:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:30.606 12:46:29 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:30.606 12:46:29 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:30.606 12:46:29 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:30.606 12:46:29 -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:18:30.606 Found 0000:82:00.1 (0x8086 - 0x159b) 00:18:30.606 12:46:29 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:30.606 12:46:29 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:30.606 12:46:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:30.606 12:46:29 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:30.606 12:46:29 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:30.606 12:46:29 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:30.606 12:46:29 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:30.606 12:46:29 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:30.606 12:46:29 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:30.606 12:46:29 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:30.606 12:46:29 -- nvmf/common.sh@384 -- # (( 1 
== 0 )) 00:18:30.606 12:46:29 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:30.606 12:46:29 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:18:30.606 Found net devices under 0000:82:00.0: cvl_0_0 00:18:30.606 12:46:29 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:30.606 12:46:29 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:30.606 12:46:29 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:30.606 12:46:29 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:30.606 12:46:29 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:30.606 12:46:29 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:18:30.606 Found net devices under 0000:82:00.1: cvl_0_1 00:18:30.606 12:46:29 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:30.606 12:46:29 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:18:30.606 12:46:29 -- nvmf/common.sh@403 -- # is_hw=yes 00:18:30.606 12:46:29 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:18:30.606 12:46:29 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:18:30.606 12:46:29 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:18:30.606 12:46:29 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:30.606 12:46:29 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:30.606 12:46:29 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:30.606 12:46:29 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:30.606 12:46:29 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:30.606 12:46:29 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:30.606 12:46:29 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:30.606 12:46:29 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:30.606 12:46:29 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:30.606 12:46:29 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:30.606 12:46:29 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:30.606 12:46:29 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:30.606 12:46:29 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:30.606 12:46:29 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:30.606 12:46:29 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:30.606 12:46:29 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:30.606 12:46:29 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:30.606 12:46:29 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:30.606 12:46:29 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:30.606 12:46:29 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:30.606 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:30.606 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.135 ms 00:18:30.606 00:18:30.606 --- 10.0.0.2 ping statistics --- 00:18:30.606 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:30.606 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:18:30.606 12:46:29 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:30.606 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:30.606 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.063 ms 00:18:30.606 00:18:30.606 --- 10.0.0.1 ping statistics --- 00:18:30.606 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:30.606 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:18:30.606 12:46:29 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:30.606 12:46:29 -- nvmf/common.sh@411 -- # return 0 00:18:30.606 12:46:29 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:18:30.606 12:46:29 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:30.606 12:46:29 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:18:30.606 12:46:29 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:18:30.606 12:46:29 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:30.606 12:46:29 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:18:30.606 12:46:29 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:18:30.606 12:46:29 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:18:30.606 12:46:29 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:30.606 12:46:29 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:30.606 12:46:29 -- common/autotest_common.sh@10 -- # set +x 00:18:30.606 12:46:29 -- nvmf/common.sh@470 -- # nvmfpid=1229672 00:18:30.606 12:46:29 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:18:30.606 12:46:29 -- nvmf/common.sh@471 -- # waitforlisten 1229672 00:18:30.606 12:46:29 -- common/autotest_common.sh@817 -- # '[' -z 1229672 ']' 00:18:30.606 12:46:29 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:30.606 12:46:29 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:30.606 12:46:29 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:30.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:30.606 12:46:29 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:30.606 12:46:29 -- common/autotest_common.sh@10 -- # set +x 00:18:30.606 [2024-04-16 12:46:29.332353] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 00:18:30.606 [2024-04-16 12:46:29.332444] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:30.606 EAL: No free 2048 kB hugepages reported on node 1 00:18:30.606 [2024-04-16 12:46:29.409787] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:30.606 [2024-04-16 12:46:29.523742] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:30.606 [2024-04-16 12:46:29.523803] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:30.606 [2024-04-16 12:46:29.523820] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:30.606 [2024-04-16 12:46:29.523834] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:30.606 [2024-04-16 12:46:29.523856] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:30.606 [2024-04-16 12:46:29.523962] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:30.606 [2024-04-16 12:46:29.524052] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:30.606 [2024-04-16 12:46:29.524056] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:31.539 12:46:30 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:31.539 12:46:30 -- common/autotest_common.sh@850 -- # return 0 00:18:31.539 12:46:30 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:31.539 12:46:30 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:31.539 12:46:30 -- common/autotest_common.sh@10 -- # set +x 00:18:31.539 12:46:30 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:31.539 12:46:30 -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:31.539 [2024-04-16 12:46:30.563447] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:31.539 12:46:30 -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:18:31.797 Malloc0 00:18:32.054 12:46:30 -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:32.312 12:46:31 -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:32.570 12:46:31 -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:32.827 [2024-04-16 12:46:31.665856] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:32.827 12:46:31 -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:33.085 [2024-04-16 12:46:31.958657] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:33.085 12:46:31 -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:18:33.343 [2024-04-16 12:46:32.247572] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:18:33.343 12:46:32 -- host/failover.sh@31 -- # bdevperf_pid=1230003 00:18:33.343 12:46:32 -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:18:33.343 12:46:32 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:33.343 12:46:32 -- host/failover.sh@34 -- # waitforlisten 1230003 /var/tmp/bdevperf.sock 00:18:33.343 12:46:32 -- common/autotest_common.sh@817 -- # '[' -z 1230003 ']' 00:18:33.343 12:46:32 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:33.343 12:46:32 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:33.343 12:46:32 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:18:33.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:33.343 12:46:32 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:33.343 12:46:32 -- common/autotest_common.sh@10 -- # set +x 00:18:33.600 12:46:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:33.600 12:46:32 -- common/autotest_common.sh@850 -- # return 0 00:18:33.600 12:46:32 -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:34.167 NVMe0n1 00:18:34.167 12:46:33 -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:34.424 00:18:34.424 12:46:33 -- host/failover.sh@39 -- # run_test_pid=1230138 00:18:34.424 12:46:33 -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:34.424 12:46:33 -- host/failover.sh@41 -- # sleep 1 00:18:35.358 12:46:34 -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:35.616 [2024-04-16 12:46:34.627591] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ed0f0 is same with the state(5) to be set 00:18:35.616 [2024-04-16 12:46:34.627660] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ed0f0 is same with the state(5) to be set 00:18:35.616 [2024-04-16 12:46:34.627692] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ed0f0 is same with the state(5) to be set 00:18:35.616 [2024-04-16 12:46:34.627705] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ed0f0 is same with the state(5) to be set 00:18:35.616 [2024-04-16 12:46:34.627717] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ed0f0 is same with the state(5) to be set 00:18:35.616 [2024-04-16 12:46:34.627730] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ed0f0 is same with the state(5) to be set 00:18:35.616 [2024-04-16 12:46:34.627742] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ed0f0 is same with the state(5) to be set 00:18:35.616 [2024-04-16 12:46:34.627756] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ed0f0 is same with the state(5) to be set 00:18:35.616 [2024-04-16 12:46:34.627768] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ed0f0 is same with the state(5) to be set 00:18:35.616 12:46:34 -- host/failover.sh@45 -- # sleep 3 00:18:38.896 12:46:37 -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:39.154 00:18:39.154 12:46:38 -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:39.412 [2024-04-16 12:46:38.437004] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x17ed8e0 is same with the state(5) to be set [... tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ed8e0 is same with the state(5) to be set -- identical message repeated ~60 more times between 12:46:38.437072 and 12:46:38.437905, duplicates omitted ...] 12:46:38 -- host/failover.sh@50 -- # sleep 3 00:18:42.694 12:46:41 -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 [2024-04-16 12:46:41.724013] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:42.694 12:46:41 -- host/failover.sh@55 -- # sleep 1 00:18:44.067 12:46:42 -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:18:44.067 [2024-04-16 12:46:42.999611] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The
recv state of tqpair=0x1593e40 is same with the state(5) to be set [... tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1593e40 is same with the state(5) to be set -- identical message repeated ~35 more times between 12:46:42.999684 and 12:46:43.000168, duplicates omitted ...] 12:46:43 -- host/failover.sh@59 -- # wait 1230138 00:18:50.691 0 00:18:50.691 12:46:48 -- host/failover.sh@61 -- # killprocess 1230003 00:18:50.691 12:46:48 -- common/autotest_common.sh@936 -- # '[' -z 1230003 ']' 00:18:50.691 12:46:48 -- common/autotest_common.sh@940 -- # kill -0 1230003 00:18:50.691 12:46:48 -- common/autotest_common.sh@941 -- # uname 00:18:50.691 12:46:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:50.691 12:46:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1230003 00:18:50.691 12:46:48 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:50.691 12:46:48 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:50.691 12:46:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1230003' 00:18:50.691 killing process with pid 1230003 00:18:50.691 12:46:48 -- common/autotest_common.sh@955 -- # kill 1230003 00:18:50.691 12:46:48 -- common/autotest_common.sh@960 -- # wait 1230003 00:18:50.691 12:46:48 -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:18:50.692 [2024-04-16 12:46:32.308068] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 00:18:50.692 [2024-04-16 12:46:32.308151] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1230003 ] 00:18:50.692 EAL: No free 2048 kB hugepages reported on node 1 00:18:50.692 [2024-04-16 12:46:32.379018] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:50.692 [2024-04-16 12:46:32.487556] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:50.692 Running I/O for 15 seconds... 00:18:50.692 [2024-04-16 12:46:34.628690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:79512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.692 [2024-04-16 12:46:34.628734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.692 [2024-04-16 12:46:34.628763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:79520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.692 [2024-04-16 12:46:34.628780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.692 [2024-04-16 12:46:34.628797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:79528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.692 [2024-04-16 12:46:34.628812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.692 [2024-04-16 12:46:34.628828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:79536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.692 [2024-04-16 12:46:34.628843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.692 [2024-04-16 12:46:34.628875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:79544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.692 [2024-04-16 12:46:34.628889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.692 [2024-04-16 12:46:34.628904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:79552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.692 [2024-04-16 12:46:34.628928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.692 [2024-04-16 12:46:34.628943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:79560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.692 [2024-04-16 12:46:34.628956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.692 [2024-04-16 12:46:34.628971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:79568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.692 [2024-04-16 12:46:34.628993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.692 [2024-04-16 
12:46:34.629008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:79576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.692 [2024-04-16 12:46:34.629021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.692 [... ~40 further nvme_io_qpair_print_command / spdk_nvme_print_completion pairs omitted: READ lba 79584-79816 and WRITE lba 79840-79904, all completing ABORTED - SQ DELETION (00/08) ...] 00:18:50.693 [2024-04-16 12:46:34.630244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:79912 len:8 SGL DATA BLOCK OFFSET 0x0
len:0x1000 00:18:50.693 [2024-04-16 12:46:34.630261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.693 [2024-04-16 12:46:34.630276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:79920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.693 [2024-04-16 12:46:34.630290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.693 [2024-04-16 12:46:34.630304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:79928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.693 [2024-04-16 12:46:34.630318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.693 [2024-04-16 12:46:34.630332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:79936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.693 [2024-04-16 12:46:34.630346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.693 [2024-04-16 12:46:34.630360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:79944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.693 [2024-04-16 12:46:34.630374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.693 [2024-04-16 12:46:34.630388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:79952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.693 [2024-04-16 12:46:34.630401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.693 [2024-04-16 12:46:34.630416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.693 [2024-04-16 12:46:34.630429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.693 [2024-04-16 12:46:34.630444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:79968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.693 [2024-04-16 12:46:34.630458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.693 [2024-04-16 12:46:34.630488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:79976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.693 [2024-04-16 12:46:34.630502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.693 [2024-04-16 12:46:34.630517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:79984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.693 [2024-04-16 12:46:34.630531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.693 [2024-04-16 12:46:34.630556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:79992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.693 [2024-04-16 12:46:34.630577] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.693 [2024-04-16 12:46:34.630595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:80000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.693 [2024-04-16 12:46:34.630610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.693 [2024-04-16 12:46:34.630625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:80008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.693 [2024-04-16 12:46:34.630639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.693 [2024-04-16 12:46:34.630654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:80016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.693 [2024-04-16 12:46:34.630671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.693 [2024-04-16 12:46:34.630687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:80024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.693 [2024-04-16 12:46:34.630701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.693 [2024-04-16 12:46:34.630716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:80032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.693 [2024-04-16 12:46:34.630730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.693 [2024-04-16 12:46:34.630745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:80040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.693 [2024-04-16 12:46:34.630759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.693 [2024-04-16 12:46:34.630774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:80048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.693 [2024-04-16 12:46:34.630788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.693 [2024-04-16 12:46:34.630803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:80056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.693 [2024-04-16 12:46:34.630817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.693 [2024-04-16 12:46:34.630832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:80064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.693 [2024-04-16 12:46:34.630848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.693 [2024-04-16 12:46:34.630863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:80072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.693 [2024-04-16 12:46:34.630877] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.693 [2024-04-16 12:46:34.630892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:80080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.693 [2024-04-16 12:46:34.630906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.693 [2024-04-16 12:46:34.630937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:80088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.693 [2024-04-16 12:46:34.630951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.693 [2024-04-16 12:46:34.630966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:80096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.693 [2024-04-16 12:46:34.630979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.693 [2024-04-16 12:46:34.630994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:80104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.693 [2024-04-16 12:46:34.631007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.693 [2024-04-16 12:46:34.631022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:80112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.693 [2024-04-16 12:46:34.631035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.693 [2024-04-16 12:46:34.631057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:80120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.693 [2024-04-16 12:46:34.631071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.693 [2024-04-16 12:46:34.631086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:80128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.694 [2024-04-16 12:46:34.631100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.694 [2024-04-16 12:46:34.631114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:80136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.694 [2024-04-16 12:46:34.631128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.694 [2024-04-16 12:46:34.631142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:80144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.694 [2024-04-16 12:46:34.631156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.694 [2024-04-16 12:46:34.631170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:80152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.694 [2024-04-16 12:46:34.631183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.694 [2024-04-16 12:46:34.631213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:80160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.694 [2024-04-16 12:46:34.631227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.694 [2024-04-16 12:46:34.631242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:80168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.694 [2024-04-16 12:46:34.631256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.694 [2024-04-16 12:46:34.631271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:80176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.694 [2024-04-16 12:46:34.631285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.694 [2024-04-16 12:46:34.631300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:80184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.694 [2024-04-16 12:46:34.631314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.694 [2024-04-16 12:46:34.631330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:80192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.694 [2024-04-16 12:46:34.631343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.694 [2024-04-16 12:46:34.631359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:80200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.694 [2024-04-16 12:46:34.631373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.694 [2024-04-16 12:46:34.631389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:80208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.694 [2024-04-16 12:46:34.631403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.694 [2024-04-16 12:46:34.631419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:80216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.694 [2024-04-16 12:46:34.631436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.694 [2024-04-16 12:46:34.631452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:80224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.694 [2024-04-16 12:46:34.631467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.694 [2024-04-16 12:46:34.631482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:80232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.694 [2024-04-16 12:46:34.631496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.694 
[2024-04-16 12:46:34.631511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:80240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.694 [2024-04-16 12:46:34.631524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.694 [2024-04-16 12:46:34.631539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:80248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.694 [2024-04-16 12:46:34.631557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.694 [2024-04-16 12:46:34.631579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:80256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.694 [2024-04-16 12:46:34.631595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.694 [2024-04-16 12:46:34.631610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:80264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.694 [2024-04-16 12:46:34.631624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.694 [2024-04-16 12:46:34.631640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:80272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.694 [2024-04-16 12:46:34.631654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.694 [2024-04-16 12:46:34.631669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:80280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.694 [2024-04-16 12:46:34.631683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.694 [2024-04-16 12:46:34.631698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:80288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.694 [2024-04-16 12:46:34.631712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.694 [2024-04-16 12:46:34.631727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:80296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.694 [2024-04-16 12:46:34.631740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.694 [2024-04-16 12:46:34.631755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:80304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.694 [2024-04-16 12:46:34.631769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.694 [2024-04-16 12:46:34.631784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:80312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.694 [2024-04-16 12:46:34.631798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.694 [2024-04-16 12:46:34.631813] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:80320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.694 [2024-04-16 12:46:34.631831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.694 [2024-04-16 12:46:34.631855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:80328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.694 [2024-04-16 12:46:34.631869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.694 [2024-04-16 12:46:34.631884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:80336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.694 [2024-04-16 12:46:34.631897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.694 [2024-04-16 12:46:34.631913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:80344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.694 [2024-04-16 12:46:34.631927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.694 [2024-04-16 12:46:34.631942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:80352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.694 [2024-04-16 12:46:34.631956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.694 [2024-04-16 12:46:34.631972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:80360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.694 [2024-04-16 12:46:34.631985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.694 [2024-04-16 12:46:34.632000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:80368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.694 [2024-04-16 12:46:34.632014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.694 [2024-04-16 12:46:34.632030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:80376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.694 [2024-04-16 12:46:34.632043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.694 [2024-04-16 12:46:34.632058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:80384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.694 [2024-04-16 12:46:34.632073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.694 [2024-04-16 12:46:34.632089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:80392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.694 [2024-04-16 12:46:34.632102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.694 [2024-04-16 12:46:34.632118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:14 nsid:1 lba:80400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.694 [2024-04-16 12:46:34.632131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.694 [2024-04-16 12:46:34.632146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:80408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.694 [2024-04-16 12:46:34.632160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.694 [2024-04-16 12:46:34.632175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:80416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.694 [2024-04-16 12:46:34.632189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.694 [2024-04-16 12:46:34.632208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:80424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.694 [2024-04-16 12:46:34.632223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.694 [2024-04-16 12:46:34.632238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:80432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.694 [2024-04-16 12:46:34.632252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.694 [2024-04-16 12:46:34.632267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:80440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.694 [2024-04-16 12:46:34.632280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.695 [2024-04-16 12:46:34.632295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:80448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.695 [2024-04-16 12:46:34.632309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.695 [2024-04-16 12:46:34.632324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:80456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.695 [2024-04-16 12:46:34.632338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.695 [2024-04-16 12:46:34.632353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:80464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.695 [2024-04-16 12:46:34.632367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.695 [2024-04-16 12:46:34.632382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:80472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.695 [2024-04-16 12:46:34.632396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.695 [2024-04-16 12:46:34.632444] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:50.695 [2024-04-16 
12:46:34.632462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80480 len:8 PRP1 0x0 PRP2 0x0 00:18:50.695 [2024-04-16 12:46:34.632475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.695 [2024-04-16 12:46:34.632493] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:50.695 [2024-04-16 12:46:34.632505] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:50.695 [2024-04-16 12:46:34.632517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80488 len:8 PRP1 0x0 PRP2 0x0 00:18:50.695 [2024-04-16 12:46:34.632530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.695 [2024-04-16 12:46:34.632543] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:50.695 [2024-04-16 12:46:34.632568] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:50.695 [2024-04-16 12:46:34.632583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80496 len:8 PRP1 0x0 PRP2 0x0 00:18:50.695 [2024-04-16 12:46:34.632597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.695 [2024-04-16 12:46:34.632610] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:50.695 [2024-04-16 12:46:34.632622] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:50.695 [2024-04-16 12:46:34.632634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80504 len:8 PRP1 0x0 PRP2 0x0 00:18:50.695 [2024-04-16 12:46:34.632651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.695 [2024-04-16 12:46:34.632665] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:50.695 [2024-04-16 12:46:34.632677] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:50.695 [2024-04-16 12:46:34.632688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80512 len:8 PRP1 0x0 PRP2 0x0 00:18:50.695 [2024-04-16 12:46:34.632701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.695 [2024-04-16 12:46:34.632714] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:50.695 [2024-04-16 12:46:34.632725] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:50.695 [2024-04-16 12:46:34.632736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80520 len:8 PRP1 0x0 PRP2 0x0 00:18:50.695 [2024-04-16 12:46:34.632749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.695 [2024-04-16 12:46:34.632762] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:50.695 [2024-04-16 12:46:34.632773] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:50.695 [2024-04-16 12:46:34.632785] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80528 len:8 PRP1 0x0 PRP2 0x0 00:18:50.695 [2024-04-16 12:46:34.632798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.695 [2024-04-16 12:46:34.632811] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:50.695 [2024-04-16 12:46:34.632821] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:50.695 [2024-04-16 12:46:34.632832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79824 len:8 PRP1 0x0 PRP2 0x0 00:18:50.695 [2024-04-16 12:46:34.632845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.695 [2024-04-16 12:46:34.632859] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:50.695 [2024-04-16 12:46:34.632870] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:50.695 [2024-04-16 12:46:34.632882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79832 len:8 PRP1 0x0 PRP2 0x0 00:18:50.695 [2024-04-16 12:46:34.632894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.695 [2024-04-16 12:46:34.632951] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2231f70 was disconnected and freed. reset controller. 00:18:50.695 [2024-04-16 12:46:34.632970] bdev_nvme.c:1856:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:18:50.695 [2024-04-16 12:46:34.633002] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:50.695 [2024-04-16 12:46:34.633020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.695 [2024-04-16 12:46:34.633035] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:50.695 [2024-04-16 12:46:34.633048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.695 [2024-04-16 12:46:34.633062] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:50.695 [2024-04-16 12:46:34.633075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.695 [2024-04-16 12:46:34.633092] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:50.695 [2024-04-16 12:46:34.633106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.695 [2024-04-16 12:46:34.633119] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:18:50.695 [2024-04-16 12:46:34.633176] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2213240 (9): Bad file descriptor 00:18:50.695 [2024-04-16 12:46:34.636423] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:50.695 [2024-04-16 12:46:34.678882] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:18:50.695 [2024-04-16 12:46:38.437278] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:50.695 [2024-04-16 12:46:38.437329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.695 [2024-04-16 12:46:38.437346] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:50.695 [2024-04-16 12:46:38.437361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.695 [2024-04-16 12:46:38.437386] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:50.695 [2024-04-16 12:46:38.437400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.695 [2024-04-16 12:46:38.437414] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:50.695 [2024-04-16 12:46:38.437428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.695 [2024-04-16 12:46:38.437441] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2213240 is same with the state(5) to be set 00:18:50.695 [2024-04-16 12:46:38.438190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:94624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.695 [2024-04-16 12:46:38.438213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.695 [2024-04-16 12:46:38.438239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:94632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.695 [2024-04-16 12:46:38.438255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.695 [2024-04-16 12:46:38.438272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:94640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.695 [2024-04-16 12:46:38.438286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.695 [2024-04-16 12:46:38.438302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:94648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.695 [2024-04-16 12:46:38.438316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.695 [2024-04-16 12:46:38.438331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:94656 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:18:50.695 [2024-04-16 12:46:38.438345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.695 [2024-04-16 12:46:38.438360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:94664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.695 [2024-04-16 12:46:38.438380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.695 [2024-04-16 12:46:38.438396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:94672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.695 [2024-04-16 12:46:38.438410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.695 [2024-04-16 12:46:38.438425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:94680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.695 [2024-04-16 12:46:38.438438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.695 [2024-04-16 12:46:38.438454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:94688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.695 [2024-04-16 12:46:38.438467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.695 [2024-04-16 12:46:38.438482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:94696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.695 [2024-04-16 12:46:38.438496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.696 [2024-04-16 12:46:38.438511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:94704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.696 [2024-04-16 12:46:38.438524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.696 [2024-04-16 12:46:38.438540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:94712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.696 [2024-04-16 12:46:38.438554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.696 [2024-04-16 12:46:38.438593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:94720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.696 [2024-04-16 12:46:38.438610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.696 [2024-04-16 12:46:38.438625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:94728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.696 [2024-04-16 12:46:38.438641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.696 [2024-04-16 12:46:38.438658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:94736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.696 
[2024-04-16 12:46:38.438673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.696 [2024-04-16 12:46:38.438688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:94744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.696 [2024-04-16 12:46:38.438702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.696 [2024-04-16 12:46:38.438718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:94752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.696 [2024-04-16 12:46:38.438731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.696 [2024-04-16 12:46:38.438747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:94760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.696 [2024-04-16 12:46:38.438761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.696 [2024-04-16 12:46:38.438781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:94768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.696 [2024-04-16 12:46:38.438795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.696 [2024-04-16 12:46:38.438811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:94776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.696 [2024-04-16 12:46:38.438825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.696 [2024-04-16 12:46:38.438841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:94784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.696 [2024-04-16 12:46:38.438855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.696 [2024-04-16 12:46:38.438870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:94792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.696 [2024-04-16 12:46:38.438900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.696 [2024-04-16 12:46:38.438915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:94800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.696 [2024-04-16 12:46:38.438930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.696 [2024-04-16 12:46:38.438944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:94808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.696 [2024-04-16 12:46:38.438957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.696 [2024-04-16 12:46:38.438972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:94816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.696 [2024-04-16 12:46:38.438986] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.696 [2024-04-16 12:46:38.439001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:94824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.696 [2024-04-16 12:46:38.439014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.696 [2024-04-16 12:46:38.439029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:94832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.696 [2024-04-16 12:46:38.439042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.696 [2024-04-16 12:46:38.439057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:94840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.696 [2024-04-16 12:46:38.439070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.696 [2024-04-16 12:46:38.439085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:94848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.696 [2024-04-16 12:46:38.439098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.696 [2024-04-16 12:46:38.439113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:94856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.696 [2024-04-16 12:46:38.439128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.696 [2024-04-16 12:46:38.439143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:94864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.696 [2024-04-16 12:46:38.439161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.696 [2024-04-16 12:46:38.439177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:94872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.696 [2024-04-16 12:46:38.439191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.696 [2024-04-16 12:46:38.439206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:94880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.696 [2024-04-16 12:46:38.439219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.696 [2024-04-16 12:46:38.439234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:94888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.696 [2024-04-16 12:46:38.439247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.696 [2024-04-16 12:46:38.439262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:94896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.696 [2024-04-16 12:46:38.439275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.696 [2024-04-16 12:46:38.439290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:94904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.696 [2024-04-16 12:46:38.439304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.696 [2024-04-16 12:46:38.439319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:94912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.696 [2024-04-16 12:46:38.439332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.696 [2024-04-16 12:46:38.439347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:94920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.696 [2024-04-16 12:46:38.439360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.696 [2024-04-16 12:46:38.439375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:94928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.696 [2024-04-16 12:46:38.439388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.696 [2024-04-16 12:46:38.439403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:94936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.696 [2024-04-16 12:46:38.439416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.696 [2024-04-16 12:46:38.439431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:94944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.697 [2024-04-16 12:46:38.439445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.697 [2024-04-16 12:46:38.439459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:94952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.697 [2024-04-16 12:46:38.439472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.697 [2024-04-16 12:46:38.439487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:94960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.697 [2024-04-16 12:46:38.439500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.697 [2024-04-16 12:46:38.439515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:94968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.697 [2024-04-16 12:46:38.439532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.697 [2024-04-16 12:46:38.439547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:94976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.697 [2024-04-16 12:46:38.439561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.697 [2024-04-16 12:46:38.439601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:94984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.697 [2024-04-16 12:46:38.439616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.697 [2024-04-16 12:46:38.439632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:94992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.697 [2024-04-16 12:46:38.439646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.697 [2024-04-16 12:46:38.439661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:95000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.697 [2024-04-16 12:46:38.439675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.697 [2024-04-16 12:46:38.439690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:95008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.697 [2024-04-16 12:46:38.439704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.697 [2024-04-16 12:46:38.439720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:95016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.697 [2024-04-16 12:46:38.439734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.697 [2024-04-16 12:46:38.439750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:95024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.697 [2024-04-16 12:46:38.439764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.697 [2024-04-16 12:46:38.439779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:95032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.697 [2024-04-16 12:46:38.439793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.697 [2024-04-16 12:46:38.439808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.697 [2024-04-16 12:46:38.439822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.697 [2024-04-16 12:46:38.439838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:95048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.697 [2024-04-16 12:46:38.439851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.697 [2024-04-16 12:46:38.439867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:95056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.697 [2024-04-16 12:46:38.439880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.697 
00:18:50.697 [2024-04-16 12:46:38.439896 through 12:46:38.442007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 nsid:1 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000; 71 queued WRITEs, lba:95064 through lba:95624 in steps of 8 (cid varies), each reported by 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:50.699 [2024-04-16 12:46:38.442022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:95632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:50.699 [2024-04-16 12:46:38.442036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:50.699 [2024-04-16 12:46:38.442050] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x221f6f0 is same with the state(5) to be set
00:18:50.699 [2024-04-16 12:46:38.442066] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:18:50.699 [2024-04-16 12:46:38.442078] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:18:50.699 [2024-04-16 12:46:38.442089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95640 len:8 PRP1 0x0 PRP2 0x0
00:18:50.699 [2024-04-16 12:46:38.442102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:50.699 [2024-04-16 12:46:38.442166] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x221f6f0 was disconnected and freed. reset controller.
00:18:50.699 [2024-04-16 12:46:38.442184] bdev_nvme.c:1856:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:18:50.699 [2024-04-16 12:46:38.442198] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:18:50.699 [2024-04-16 12:46:38.445404] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:18:50.699 [2024-04-16 12:46:38.445443] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2213240 (9): Bad file descriptor
00:18:50.699 [2024-04-16 12:46:38.484623] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:18:50.699 [2024-04-16 12:46:43.001169 through 12:46:43.002132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 nsid:1 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0; 31 queued READs, lba:27536 through lba:27776 in steps of 8 (cid varies), each reported by 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:50.700 [2024-04-16 12:46:43.002148 through 12:46:43.003570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 nsid:1 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000; 48 queued WRITEs, lba:27800 through lba:28176 in steps of 8 (cid varies), each reported by 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:50.701 [2024-04-16 12:46:43.003615 through 12:46:43.004552] repeating pattern of nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o and 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: for 19 WRITEs sqid:1 cid:0 nsid:1 len:8 PRP1 0x0 PRP2 0x0, lba:28184 through lba:28328 in steps of 8, each reported by 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:50.702 [2024-04-16 12:46:43.004571] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:18:50.702 [2024-04-16 12:46:43.004585] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:18:50.702 [2024-04-16 12:46:43.004597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1
cid:0 nsid:1 lba:28336 len:8 PRP1 0x0 PRP2 0x0 00:18:50.702 [2024-04-16 12:46:43.004610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.702 [2024-04-16 12:46:43.004624] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:50.702 [2024-04-16 12:46:43.004635] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:50.702 [2024-04-16 12:46:43.004646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28344 len:8 PRP1 0x0 PRP2 0x0 00:18:50.702 [2024-04-16 12:46:43.004658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.702 [2024-04-16 12:46:43.004672] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:50.702 [2024-04-16 12:46:43.004683] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:50.702 [2024-04-16 12:46:43.004698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28352 len:8 PRP1 0x0 PRP2 0x0 00:18:50.702 [2024-04-16 12:46:43.004711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.702 [2024-04-16 12:46:43.004724] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:50.702 [2024-04-16 12:46:43.004736] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:50.702 [2024-04-16 12:46:43.004747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28360 len:8 PRP1 0x0 PRP2 0x0 00:18:50.702 [2024-04-16 12:46:43.004759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.702 [2024-04-16 12:46:43.004773] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:50.702 [2024-04-16 12:46:43.004784] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:50.702 [2024-04-16 12:46:43.004795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28368 len:8 PRP1 0x0 PRP2 0x0 00:18:50.702 [2024-04-16 12:46:43.004808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.702 [2024-04-16 12:46:43.004821] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:50.702 [2024-04-16 12:46:43.004832] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:50.702 [2024-04-16 12:46:43.004843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28376 len:8 PRP1 0x0 PRP2 0x0 00:18:50.702 [2024-04-16 12:46:43.004856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.702 [2024-04-16 12:46:43.004869] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:50.702 [2024-04-16 12:46:43.004880] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:50.702 [2024-04-16 12:46:43.004891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28384 len:8 PRP1 0x0 PRP2 0x0 
00:18:50.702 [2024-04-16 12:46:43.004904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.702 [2024-04-16 12:46:43.004918] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:50.702 [2024-04-16 12:46:43.004929] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:50.702 [2024-04-16 12:46:43.004941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28392 len:8 PRP1 0x0 PRP2 0x0 00:18:50.702 [2024-04-16 12:46:43.004953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.702 [2024-04-16 12:46:43.004967] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:50.702 [2024-04-16 12:46:43.004978] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:50.702 [2024-04-16 12:46:43.004989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28400 len:8 PRP1 0x0 PRP2 0x0 00:18:50.702 [2024-04-16 12:46:43.005002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.702 [2024-04-16 12:46:43.005015] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:50.702 [2024-04-16 12:46:43.005027] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:50.702 [2024-04-16 12:46:43.005038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28408 len:8 PRP1 0x0 PRP2 0x0 00:18:50.702 [2024-04-16 12:46:43.005051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.702 [2024-04-16 12:46:43.005067] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:50.702 [2024-04-16 12:46:43.005079] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:50.702 [2024-04-16 12:46:43.005090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28416 len:8 PRP1 0x0 PRP2 0x0 00:18:50.702 [2024-04-16 12:46:43.005103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.702 [2024-04-16 12:46:43.005116] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:50.702 [2024-04-16 12:46:43.005127] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:50.702 [2024-04-16 12:46:43.005138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28424 len:8 PRP1 0x0 PRP2 0x0 00:18:50.702 [2024-04-16 12:46:43.005152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.702 [2024-04-16 12:46:43.005165] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:50.702 [2024-04-16 12:46:43.005176] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:50.702 [2024-04-16 12:46:43.005188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28432 len:8 PRP1 0x0 PRP2 0x0 00:18:50.702 [2024-04-16 12:46:43.005200] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.702 [2024-04-16 12:46:43.005214] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:50.702 [2024-04-16 12:46:43.005225] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:50.702 [2024-04-16 12:46:43.005237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28440 len:8 PRP1 0x0 PRP2 0x0 00:18:50.702 [2024-04-16 12:46:43.005250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.702 [2024-04-16 12:46:43.005263] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:50.702 [2024-04-16 12:46:43.005274] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:50.702 [2024-04-16 12:46:43.005286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28448 len:8 PRP1 0x0 PRP2 0x0 00:18:50.702 [2024-04-16 12:46:43.005299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.702 [2024-04-16 12:46:43.005312] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:50.702 [2024-04-16 12:46:43.005324] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:50.702 [2024-04-16 12:46:43.005335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28456 len:8 PRP1 0x0 PRP2 0x0 00:18:50.702 [2024-04-16 12:46:43.005348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.702 [2024-04-16 12:46:43.005361] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:50.702 [2024-04-16 12:46:43.005373] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:50.702 [2024-04-16 12:46:43.005384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28464 len:8 PRP1 0x0 PRP2 0x0 00:18:50.703 [2024-04-16 12:46:43.005397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.703 [2024-04-16 12:46:43.005411] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:50.703 [2024-04-16 12:46:43.005422] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:50.703 [2024-04-16 12:46:43.005434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28472 len:8 PRP1 0x0 PRP2 0x0 00:18:50.703 [2024-04-16 12:46:43.005450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.703 [2024-04-16 12:46:43.005464] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:50.703 [2024-04-16 12:46:43.005475] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:50.703 [2024-04-16 12:46:43.005487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28480 len:8 PRP1 0x0 PRP2 0x0 00:18:50.703 [2024-04-16 12:46:43.005500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.703 [2024-04-16 12:46:43.005513] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:50.703 [2024-04-16 12:46:43.005524] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:50.703 [2024-04-16 12:46:43.005536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28488 len:8 PRP1 0x0 PRP2 0x0 00:18:50.703 [2024-04-16 12:46:43.005549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.703 [2024-04-16 12:46:43.005568] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:50.703 [2024-04-16 12:46:43.005581] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:50.703 [2024-04-16 12:46:43.005593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28496 len:8 PRP1 0x0 PRP2 0x0 00:18:50.703 [2024-04-16 12:46:43.005607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.703 [2024-04-16 12:46:43.005620] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:50.703 [2024-04-16 12:46:43.005632] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:50.703 [2024-04-16 12:46:43.005644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28504 len:8 PRP1 0x0 PRP2 0x0 00:18:50.703 [2024-04-16 12:46:43.005657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.703 [2024-04-16 12:46:43.005670] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:50.703 [2024-04-16 12:46:43.005682] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:50.703 [2024-04-16 12:46:43.005693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28512 len:8 PRP1 0x0 PRP2 0x0 00:18:50.703 [2024-04-16 12:46:43.005706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.703 [2024-04-16 12:46:43.005720] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:50.703 [2024-04-16 12:46:43.005731] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:50.703 [2024-04-16 12:46:43.005743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28520 len:8 PRP1 0x0 PRP2 0x0 00:18:50.703 [2024-04-16 12:46:43.005755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.703 [2024-04-16 12:46:43.005769] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:50.703 [2024-04-16 12:46:43.005780] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:50.703 [2024-04-16 12:46:43.005792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28528 len:8 PRP1 0x0 PRP2 0x0 00:18:50.703 [2024-04-16 12:46:43.005805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:18:50.703 [2024-04-16 12:46:43.005818] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:50.703 [2024-04-16 12:46:43.005830] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:50.703 [2024-04-16 12:46:43.005845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28536 len:8 PRP1 0x0 PRP2 0x0 00:18:50.703 [2024-04-16 12:46:43.005859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.703 [2024-04-16 12:46:43.005872] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:50.703 [2024-04-16 12:46:43.005883] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:50.703 [2024-04-16 12:46:43.005895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28544 len:8 PRP1 0x0 PRP2 0x0 00:18:50.703 [2024-04-16 12:46:43.005908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.703 [2024-04-16 12:46:43.005921] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:50.703 [2024-04-16 12:46:43.005933] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:50.703 [2024-04-16 12:46:43.005944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28552 len:8 PRP1 0x0 PRP2 0x0 00:18:50.703 [2024-04-16 12:46:43.005957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.703 [2024-04-16 12:46:43.005970] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:50.703 [2024-04-16 12:46:43.005981] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:50.703 [2024-04-16 12:46:43.005992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:27784 len:8 PRP1 0x0 PRP2 0x0 00:18:50.703 [2024-04-16 12:46:43.006005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.703 [2024-04-16 12:46:43.006018] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:50.703 [2024-04-16 12:46:43.006029] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:50.703 [2024-04-16 12:46:43.006040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:27792 len:8 PRP1 0x0 PRP2 0x0 00:18:50.703 [2024-04-16 12:46:43.006053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.703 [2024-04-16 12:46:43.006109] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2235de0 was disconnected and freed. reset controller. 
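The burst of records condensed above is the host draining its queue during failover: once the submission queue is deleted, every queued command is completed manually with status (00/08), i.e. status code type 0x0 (generic command status) and status code 0x08 (command aborted due to SQ deletion). As a side note, such records can be tallied from a saved run with a short pipeline (a sketch, not part of the test; try.txt is the log file this test writes, per the failover.sh trace below):

  grep -c 'ABORTED - SQ DELETION' try.txt          # total aborted completions
  grep -o 'lba:[0-9]*' try.txt | sort -u | wc -l   # distinct LBAs referenced in the log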
00:18:50.703 [2024-04-16 12:46:43.006128] bdev_nvme.c:1856:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:18:50.703 [2024-04-16 12:46:43.006159] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:50.703 [2024-04-16 12:46:43.006180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.703 [2024-04-16 12:46:43.006196] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:50.703 [2024-04-16 12:46:43.006209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.703 [2024-04-16 12:46:43.006222] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:50.703 [2024-04-16 12:46:43.006236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.703 [2024-04-16 12:46:43.006250] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:50.703 [2024-04-16 12:46:43.006263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.703 [2024-04-16 12:46:43.006276] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:50.703 [2024-04-16 12:46:43.006338] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2213240 (9): Bad file descriptor 00:18:50.703 [2024-04-16 12:46:43.009534] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:50.703 [2024-04-16 12:46:43.092792] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:18:50.703
00:18:50.703 Latency(us)
00:18:50.703 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:50.703 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:18:50.703 Verification LBA range: start 0x0 length 0x4000
00:18:50.703 NVMe0n1 : 15.01 8607.94 33.62 427.27 0.00 14140.47 819.20 17476.27
00:18:50.703 ===================================================================================================================
00:18:50.703 Total : 8607.94 33.62 427.27 0.00 14140.47 819.20 17476.27
00:18:50.703 Received shutdown signal, test time was about 15.000000 seconds
00:18:50.703
00:18:50.703 Latency(us)
00:18:50.703 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:50.703 ===================================================================================================================
00:18:50.703 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:18:50.703 12:46:48 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:18:50.703 12:46:48 -- host/failover.sh@65 -- # count=3
00:18:50.703 12:46:48 -- host/failover.sh@67 -- # (( count != 3 ))
00:18:50.703 12:46:48 -- host/failover.sh@73 -- # bdevperf_pid=1231984
00:18:50.703 12:46:48 -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:18:50.703 12:46:48 -- host/failover.sh@75 -- # waitforlisten 1231984 /var/tmp/bdevperf.sock
00:18:50.703 12:46:48 -- common/autotest_common.sh@817 -- # '[' -z 1231984 ']'
00:18:50.703 12:46:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:18:50.703 12:46:48 -- common/autotest_common.sh@822 -- # local max_retries=100
00:18:50.703 12:46:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:18:50.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
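The failover.sh trace above gates the run on exactly three successful failovers: it counts 'Resetting controller successful' notices in the saved bdevperf output and bails out if the count differs. A minimal standalone sketch of that check (assuming the same try.txt log the test writes):

  count=$(grep -c 'Resetting controller successful' try.txt)
  (( count == 3 )) || { echo "expected 3 successful resets, got $count" >&2; exit 1; }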
00:18:50.703 12:46:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:50.703 12:46:48 -- common/autotest_common.sh@10 -- # set +x 00:18:50.703 12:46:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:50.703 12:46:49 -- common/autotest_common.sh@850 -- # return 0 00:18:50.703 12:46:49 -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:50.703 [2024-04-16 12:46:49.430364] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:50.703 12:46:49 -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:18:50.704 [2024-04-16 12:46:49.699123] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:18:50.704 12:46:49 -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:51.269 NVMe0n1 00:18:51.269 12:46:50 -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:51.525 00:18:51.525 12:46:50 -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:52.089 00:18:52.089 12:46:50 -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:52.089 12:46:50 -- host/failover.sh@82 -- # grep -q NVMe0 00:18:52.347 12:46:51 -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:52.605 12:46:51 -- host/failover.sh@87 -- # sleep 3 00:18:55.882 12:46:54 -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:55.882 12:46:54 -- host/failover.sh@88 -- # grep -q NVMe0 00:18:55.882 12:46:54 -- host/failover.sh@90 -- # run_test_pid=1232658 00:18:55.882 12:46:54 -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:55.882 12:46:54 -- host/failover.sh@92 -- # wait 1232658 00:18:56.816 0 00:18:56.816 12:46:55 -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:18:56.816 [2024-04-16 12:46:48.872004] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 
00:18:56.816 [2024-04-16 12:46:48.872081] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1231984 ] 00:18:56.816 EAL: No free 2048 kB hugepages reported on node 1 00:18:56.816 [2024-04-16 12:46:48.940869] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:56.816 [2024-04-16 12:46:49.043500] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:56.816 [2024-04-16 12:46:51.465630] bdev_nvme.c:1856:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:18:56.816 [2024-04-16 12:46:51.465718] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:56.816 [2024-04-16 12:46:51.465743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.816 [2024-04-16 12:46:51.465759] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:56.816 [2024-04-16 12:46:51.465774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.816 [2024-04-16 12:46:51.465787] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:56.816 [2024-04-16 12:46:51.465801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.816 [2024-04-16 12:46:51.465815] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:56.816 [2024-04-16 12:46:51.465829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.816 [2024-04-16 12:46:51.465852] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:56.816 [2024-04-16 12:46:51.465913] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:56.816 [2024-04-16 12:46:51.465944] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1901240 (9): Bad file descriptor 00:18:56.816 [2024-04-16 12:46:51.556779] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:18:56.816 Running I/O for 1 seconds... 
00:18:56.816
00:18:56.816 Latency(us)
00:18:56.816 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:56.816 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:18:56.816 Verification LBA range: start 0x0 length 0x4000
00:18:56.816 NVMe0n1 : 1.01 8635.57 33.73 0.00 0.00 14762.44 2985.53 16311.18
00:18:56.816 ===================================================================================================================
00:18:56.816 Total : 8635.57 33.73 0.00 0.00 14762.44 2985.53 16311.18
00:18:56.816 12:46:55 -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:18:56.816 12:46:55 -- host/failover.sh@95 -- # grep -q NVMe0
00:18:57.381 12:46:56 -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:18:57.381 12:46:56 -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:18:57.381 12:46:56 -- host/failover.sh@99 -- # grep -q NVMe0
00:18:57.638 12:46:56 -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:18:57.895 12:46:56 -- host/failover.sh@101 -- # sleep 3
00:19:01.174 12:46:59 -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:19:01.174 12:46:59 -- host/failover.sh@103 -- # grep -q NVMe0
00:19:01.174 12:47:00 -- host/failover.sh@108 -- # killprocess 1231984
00:19:01.174 12:47:00 -- common/autotest_common.sh@936 -- # '[' -z 1231984 ']'
00:19:01.174 12:47:00 -- common/autotest_common.sh@940 -- # kill -0 1231984
00:19:01.174 12:47:00 -- common/autotest_common.sh@941 -- # uname
00:19:01.174 12:47:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:19:01.174 12:47:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1231984
00:19:01.174 12:47:00 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:19:01.174 12:47:00 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:19:01.174 12:47:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1231984'
00:19:01.174 killing process with pid 1231984
00:19:01.174 12:47:00 -- common/autotest_common.sh@955 -- # kill 1231984
00:19:01.174 12:47:00 -- common/autotest_common.sh@960 -- # wait 1231984
00:19:01.432 12:47:00 -- host/failover.sh@110 -- # sync
00:19:01.432 12:47:00 -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:19:01.690 12:47:00 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT
00:19:01.690 12:47:00 -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:19:01.690 12:47:00 -- host/failover.sh@116 -- # nvmftestfini
00:19:01.690 12:47:00 -- nvmf/common.sh@477 -- # nvmfcleanup
00:19:01.690 12:47:00 -- nvmf/common.sh@117 -- # sync
00:19:01.690 12:47:00 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:19:01.690 12:47:00 -- nvmf/common.sh@120 -- # set +e
00:19:01.690 12:47:00 -- nvmf/common.sh@121 -- # for i in {1..20}
00:19:01.690 12:47:00 -- nvmf/common.sh@122 --
# modprobe -v -r nvme-tcp 00:19:01.690 rmmod nvme_tcp 00:19:01.690 rmmod nvme_fabrics 00:19:01.690 rmmod nvme_keyring 00:19:01.690 12:47:00 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:01.690 12:47:00 -- nvmf/common.sh@124 -- # set -e 00:19:01.690 12:47:00 -- nvmf/common.sh@125 -- # return 0 00:19:01.690 12:47:00 -- nvmf/common.sh@478 -- # '[' -n 1229672 ']' 00:19:01.690 12:47:00 -- nvmf/common.sh@479 -- # killprocess 1229672 00:19:01.690 12:47:00 -- common/autotest_common.sh@936 -- # '[' -z 1229672 ']' 00:19:01.690 12:47:00 -- common/autotest_common.sh@940 -- # kill -0 1229672 00:19:01.690 12:47:00 -- common/autotest_common.sh@941 -- # uname 00:19:01.690 12:47:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:01.690 12:47:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1229672 00:19:01.947 12:47:00 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:19:01.947 12:47:00 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:19:01.947 12:47:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1229672' 00:19:01.947 killing process with pid 1229672 00:19:01.947 12:47:00 -- common/autotest_common.sh@955 -- # kill 1229672 00:19:01.947 12:47:00 -- common/autotest_common.sh@960 -- # wait 1229672 00:19:02.207 12:47:01 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:02.207 12:47:01 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:19:02.207 12:47:01 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:19:02.207 12:47:01 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:02.207 12:47:01 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:02.207 12:47:01 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:02.207 12:47:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:02.207 12:47:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:04.109 12:47:03 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:04.109 00:19:04.109 real 0m36.440s 00:19:04.109 user 2m6.878s 00:19:04.109 sys 0m6.620s 00:19:04.109 12:47:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:04.109 12:47:03 -- common/autotest_common.sh@10 -- # set +x 00:19:04.109 ************************************ 00:19:04.109 END TEST nvmf_failover 00:19:04.109 ************************************ 00:19:04.109 12:47:03 -- nvmf/nvmf.sh@99 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:19:04.109 12:47:03 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:04.109 12:47:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:04.109 12:47:03 -- common/autotest_common.sh@10 -- # set +x 00:19:04.367 ************************************ 00:19:04.367 START TEST nvmf_discovery 00:19:04.367 ************************************ 00:19:04.368 12:47:03 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:19:04.368 * Looking for test storage... 
00:19:04.368 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:19:04.368 12:47:03 -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:04.368 12:47:03 -- nvmf/common.sh@7 -- # uname -s 00:19:04.368 12:47:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:04.368 12:47:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:04.368 12:47:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:04.368 12:47:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:04.368 12:47:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:04.368 12:47:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:04.368 12:47:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:04.368 12:47:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:04.368 12:47:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:04.368 12:47:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:04.368 12:47:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:19:04.368 12:47:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:19:04.368 12:47:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:04.368 12:47:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:04.368 12:47:03 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:04.368 12:47:03 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:04.368 12:47:03 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:04.368 12:47:03 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:04.368 12:47:03 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:04.368 12:47:03 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:04.368 12:47:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:04.368 12:47:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:04.368 12:47:03 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:04.368 12:47:03 -- paths/export.sh@5 -- # export PATH 00:19:04.368 12:47:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:04.368 12:47:03 -- nvmf/common.sh@47 -- # : 0 00:19:04.368 12:47:03 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:04.368 12:47:03 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:04.368 12:47:03 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:04.368 12:47:03 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:04.368 12:47:03 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:04.368 12:47:03 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:04.368 12:47:03 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:04.368 12:47:03 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:04.368 12:47:03 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:19:04.368 12:47:03 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:19:04.368 12:47:03 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:19:04.368 12:47:03 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:19:04.368 12:47:03 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:19:04.368 12:47:03 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:19:04.368 12:47:03 -- host/discovery.sh@25 -- # nvmftestinit 00:19:04.368 12:47:03 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:19:04.368 12:47:03 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:04.368 12:47:03 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:04.368 12:47:03 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:04.368 12:47:03 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:04.368 12:47:03 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:04.368 12:47:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:04.368 12:47:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:04.368 12:47:03 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:19:04.368 12:47:03 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:19:04.368 12:47:03 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:04.368 12:47:03 -- common/autotest_common.sh@10 -- # set +x 00:19:06.897 12:47:05 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:06.897 12:47:05 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:06.897 12:47:05 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:06.897 12:47:05 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:06.897 12:47:05 -- 
nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:06.897 12:47:05 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:06.897 12:47:05 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:06.897 12:47:05 -- nvmf/common.sh@295 -- # net_devs=() 00:19:06.897 12:47:05 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:06.897 12:47:05 -- nvmf/common.sh@296 -- # e810=() 00:19:06.897 12:47:05 -- nvmf/common.sh@296 -- # local -ga e810 00:19:06.897 12:47:05 -- nvmf/common.sh@297 -- # x722=() 00:19:06.897 12:47:05 -- nvmf/common.sh@297 -- # local -ga x722 00:19:06.897 12:47:05 -- nvmf/common.sh@298 -- # mlx=() 00:19:06.897 12:47:05 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:06.897 12:47:05 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:06.897 12:47:05 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:06.897 12:47:05 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:06.897 12:47:05 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:06.897 12:47:05 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:06.897 12:47:05 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:06.897 12:47:05 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:06.897 12:47:05 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:06.897 12:47:05 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:06.897 12:47:05 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:06.897 12:47:05 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:06.897 12:47:05 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:06.897 12:47:05 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:06.897 12:47:05 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:06.897 12:47:05 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:06.897 12:47:05 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:06.897 12:47:05 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:06.897 12:47:05 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:06.897 12:47:05 -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:19:06.897 Found 0000:82:00.0 (0x8086 - 0x159b) 00:19:06.897 12:47:05 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:06.897 12:47:05 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:06.897 12:47:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:06.897 12:47:05 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:06.897 12:47:05 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:06.897 12:47:05 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:06.897 12:47:05 -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:19:06.897 Found 0000:82:00.1 (0x8086 - 0x159b) 00:19:06.897 12:47:05 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:06.897 12:47:05 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:06.897 12:47:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:06.897 12:47:05 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:06.897 12:47:05 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:06.897 12:47:05 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:06.897 12:47:05 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:06.897 12:47:05 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:06.897 12:47:05 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:06.897 
12:47:05 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:06.897 12:47:05 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:06.897 12:47:05 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:06.897 12:47:05 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:19:06.897 Found net devices under 0000:82:00.0: cvl_0_0 00:19:06.897 12:47:05 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:06.897 12:47:05 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:06.897 12:47:05 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:06.897 12:47:05 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:06.897 12:47:05 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:06.897 12:47:05 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:19:06.897 Found net devices under 0000:82:00.1: cvl_0_1 00:19:06.897 12:47:05 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:06.897 12:47:05 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:19:06.897 12:47:05 -- nvmf/common.sh@403 -- # is_hw=yes 00:19:06.897 12:47:05 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:19:06.897 12:47:05 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:19:06.897 12:47:05 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:19:06.897 12:47:05 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:06.897 12:47:05 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:06.897 12:47:05 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:06.897 12:47:05 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:06.897 12:47:05 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:06.897 12:47:05 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:06.898 12:47:05 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:06.898 12:47:05 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:06.898 12:47:05 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:06.898 12:47:05 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:06.898 12:47:05 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:06.898 12:47:05 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:06.898 12:47:05 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:06.898 12:47:05 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:06.898 12:47:05 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:06.898 12:47:05 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:06.898 12:47:05 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:06.898 12:47:05 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:06.898 12:47:05 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:06.898 12:47:05 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:06.898 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:06.898 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.257 ms 00:19:06.898 00:19:06.898 --- 10.0.0.2 ping statistics --- 00:19:06.898 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:06.898 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:19:06.898 12:47:05 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:06.898 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:06.898 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.158 ms 00:19:06.898 00:19:06.898 --- 10.0.0.1 ping statistics --- 00:19:06.898 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:06.898 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:19:06.898 12:47:05 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:06.898 12:47:05 -- nvmf/common.sh@411 -- # return 0 00:19:06.898 12:47:05 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:19:06.898 12:47:05 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:06.898 12:47:05 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:19:06.898 12:47:05 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:19:06.898 12:47:05 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:06.898 12:47:05 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:19:06.898 12:47:05 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:19:06.898 12:47:05 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:19:06.898 12:47:05 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:06.898 12:47:05 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:06.898 12:47:05 -- common/autotest_common.sh@10 -- # set +x 00:19:06.898 12:47:05 -- nvmf/common.sh@470 -- # nvmfpid=1235681 00:19:06.898 12:47:05 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:06.898 12:47:05 -- nvmf/common.sh@471 -- # waitforlisten 1235681 00:19:06.898 12:47:05 -- common/autotest_common.sh@817 -- # '[' -z 1235681 ']' 00:19:06.898 12:47:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:06.898 12:47:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:06.898 12:47:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:06.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:06.898 12:47:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:06.898 12:47:05 -- common/autotest_common.sh@10 -- # set +x 00:19:06.898 [2024-04-16 12:47:05.825004] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 00:19:06.898 [2024-04-16 12:47:05.825096] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:06.898 EAL: No free 2048 kB hugepages reported on node 1 00:19:06.898 [2024-04-16 12:47:05.904096] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:07.157 [2024-04-16 12:47:06.016910] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:07.157 [2024-04-16 12:47:06.016987] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:07.157 [2024-04-16 12:47:06.017003] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:07.157 [2024-04-16 12:47:06.017017] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:07.157 [2024-04-16 12:47:06.017029] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
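For reference, the namespace plumbing that the nvmf_tcp_init trace above walks through (and that the two pings just verified) reduces to a handful of iproute2 commands. A condensed sketch, using the interface names and addresses from this run:

  ip netns add cvl_0_0_ns_spdk                       # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
  ping -c 1 10.0.0.2                                 # initiator -> target reachability

nvmf_tgt itself then runs inside the namespace (hence the ip netns exec cvl_0_0_ns_spdk prefix on the nvmfpid launch above), so it listens on 10.0.0.2 while the host-side tools connect from 10.0.0.1.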
00:19:07.157 [2024-04-16 12:47:06.017064] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:07.722 12:47:06 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:07.722 12:47:06 -- common/autotest_common.sh@850 -- # return 0 00:19:07.722 12:47:06 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:07.722 12:47:06 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:07.722 12:47:06 -- common/autotest_common.sh@10 -- # set +x 00:19:07.979 12:47:06 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:07.979 12:47:06 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:07.979 12:47:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:07.979 12:47:06 -- common/autotest_common.sh@10 -- # set +x 00:19:07.979 [2024-04-16 12:47:06.800807] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:07.979 12:47:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:07.979 12:47:06 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:19:07.979 12:47:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:07.979 12:47:06 -- common/autotest_common.sh@10 -- # set +x 00:19:07.979 [2024-04-16 12:47:06.809000] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:19:07.979 12:47:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:07.979 12:47:06 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:19:07.979 12:47:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:07.979 12:47:06 -- common/autotest_common.sh@10 -- # set +x 00:19:07.979 null0 00:19:07.979 12:47:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:07.979 12:47:06 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:19:07.979 12:47:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:07.980 12:47:06 -- common/autotest_common.sh@10 -- # set +x 00:19:07.980 null1 00:19:07.980 12:47:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:07.980 12:47:06 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:19:07.980 12:47:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:07.980 12:47:06 -- common/autotest_common.sh@10 -- # set +x 00:19:07.980 12:47:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:07.980 12:47:06 -- host/discovery.sh@45 -- # hostpid=1235834 00:19:07.980 12:47:06 -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:19:07.980 12:47:06 -- host/discovery.sh@46 -- # waitforlisten 1235834 /tmp/host.sock 00:19:07.980 12:47:06 -- common/autotest_common.sh@817 -- # '[' -z 1235834 ']' 00:19:07.980 12:47:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/tmp/host.sock 00:19:07.980 12:47:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:07.980 12:47:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:19:07.980 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:19:07.980 12:47:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:07.980 12:47:06 -- common/autotest_common.sh@10 -- # set +x 00:19:07.980 [2024-04-16 12:47:06.880444] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 
00:19:07.980 [2024-04-16 12:47:06.880509] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1235834 ] 00:19:07.980 EAL: No free 2048 kB hugepages reported on node 1 00:19:07.980 [2024-04-16 12:47:06.951269] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:08.237 [2024-04-16 12:47:07.064767] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:08.803 12:47:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:08.803 12:47:07 -- common/autotest_common.sh@850 -- # return 0 00:19:08.803 12:47:07 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:08.803 12:47:07 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:19:08.803 12:47:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:08.803 12:47:07 -- common/autotest_common.sh@10 -- # set +x 00:19:08.803 12:47:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:08.803 12:47:07 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:19:08.803 12:47:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:08.803 12:47:07 -- common/autotest_common.sh@10 -- # set +x 00:19:08.803 12:47:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:08.803 12:47:07 -- host/discovery.sh@72 -- # notify_id=0 00:19:08.803 12:47:07 -- host/discovery.sh@83 -- # get_subsystem_names 00:19:08.803 12:47:07 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:08.803 12:47:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:08.803 12:47:07 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:08.803 12:47:07 -- common/autotest_common.sh@10 -- # set +x 00:19:08.803 12:47:07 -- host/discovery.sh@59 -- # sort 00:19:08.803 12:47:07 -- host/discovery.sh@59 -- # xargs 00:19:08.803 12:47:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:09.061 12:47:07 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:19:09.061 12:47:07 -- host/discovery.sh@84 -- # get_bdev_list 00:19:09.061 12:47:07 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:09.061 12:47:07 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:09.061 12:47:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:09.061 12:47:07 -- common/autotest_common.sh@10 -- # set +x 00:19:09.061 12:47:07 -- host/discovery.sh@55 -- # sort 00:19:09.061 12:47:07 -- host/discovery.sh@55 -- # xargs 00:19:09.061 12:47:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:09.061 12:47:07 -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:19:09.061 12:47:07 -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:19:09.061 12:47:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:09.061 12:47:07 -- common/autotest_common.sh@10 -- # set +x 00:19:09.061 12:47:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:09.061 12:47:07 -- host/discovery.sh@87 -- # get_subsystem_names 00:19:09.061 12:47:07 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:09.061 12:47:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:09.061 12:47:07 -- common/autotest_common.sh@10 -- # set +x 00:19:09.061 12:47:07 -- host/discovery.sh@59 -- # jq -r 
'.[].name' 00:19:09.061 12:47:07 -- host/discovery.sh@59 -- # sort 00:19:09.061 12:47:07 -- host/discovery.sh@59 -- # xargs 00:19:09.061 12:47:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:09.061 12:47:07 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:19:09.061 12:47:07 -- host/discovery.sh@88 -- # get_bdev_list 00:19:09.061 12:47:07 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:09.061 12:47:07 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:09.061 12:47:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:09.061 12:47:07 -- common/autotest_common.sh@10 -- # set +x 00:19:09.061 12:47:07 -- host/discovery.sh@55 -- # sort 00:19:09.061 12:47:07 -- host/discovery.sh@55 -- # xargs 00:19:09.061 12:47:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:09.061 12:47:08 -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:19:09.061 12:47:08 -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:19:09.061 12:47:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:09.061 12:47:08 -- common/autotest_common.sh@10 -- # set +x 00:19:09.061 12:47:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:09.061 12:47:08 -- host/discovery.sh@91 -- # get_subsystem_names 00:19:09.061 12:47:08 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:09.061 12:47:08 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:09.061 12:47:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:09.061 12:47:08 -- common/autotest_common.sh@10 -- # set +x 00:19:09.061 12:47:08 -- host/discovery.sh@59 -- # sort 00:19:09.061 12:47:08 -- host/discovery.sh@59 -- # xargs 00:19:09.061 12:47:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:09.061 12:47:08 -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:19:09.061 12:47:08 -- host/discovery.sh@92 -- # get_bdev_list 00:19:09.061 12:47:08 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:09.061 12:47:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:09.061 12:47:08 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:09.061 12:47:08 -- common/autotest_common.sh@10 -- # set +x 00:19:09.061 12:47:08 -- host/discovery.sh@55 -- # sort 00:19:09.061 12:47:08 -- host/discovery.sh@55 -- # xargs 00:19:09.061 12:47:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:09.061 12:47:08 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:19:09.061 12:47:08 -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:09.061 12:47:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:09.061 12:47:08 -- common/autotest_common.sh@10 -- # set +x 00:19:09.061 [2024-04-16 12:47:08.100518] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:09.061 12:47:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:09.061 12:47:08 -- host/discovery.sh@97 -- # get_subsystem_names 00:19:09.061 12:47:08 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:09.061 12:47:08 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:09.061 12:47:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:09.061 12:47:08 -- common/autotest_common.sh@10 -- # set +x 00:19:09.061 12:47:08 -- host/discovery.sh@59 -- # sort 00:19:09.061 12:47:08 -- host/discovery.sh@59 -- # xargs 00:19:09.061 12:47:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:09.319 12:47:08 -- 
host/discovery.sh@97 -- # [[ '' == '' ]] 00:19:09.319 12:47:08 -- host/discovery.sh@98 -- # get_bdev_list 00:19:09.319 12:47:08 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:09.319 12:47:08 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:09.319 12:47:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:09.319 12:47:08 -- host/discovery.sh@55 -- # sort 00:19:09.319 12:47:08 -- common/autotest_common.sh@10 -- # set +x 00:19:09.319 12:47:08 -- host/discovery.sh@55 -- # xargs 00:19:09.319 12:47:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:09.319 12:47:08 -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:19:09.319 12:47:08 -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:19:09.319 12:47:08 -- host/discovery.sh@79 -- # expected_count=0 00:19:09.319 12:47:08 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:19:09.319 12:47:08 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:19:09.319 12:47:08 -- common/autotest_common.sh@901 -- # local max=10 00:19:09.319 12:47:08 -- common/autotest_common.sh@902 -- # (( max-- )) 00:19:09.319 12:47:08 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:19:09.319 12:47:08 -- common/autotest_common.sh@903 -- # get_notification_count 00:19:09.319 12:47:08 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:19:09.319 12:47:08 -- host/discovery.sh@74 -- # jq '. | length' 00:19:09.319 12:47:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:09.319 12:47:08 -- common/autotest_common.sh@10 -- # set +x 00:19:09.319 12:47:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:09.319 12:47:08 -- host/discovery.sh@74 -- # notification_count=0 00:19:09.319 12:47:08 -- host/discovery.sh@75 -- # notify_id=0 00:19:09.319 12:47:08 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:19:09.319 12:47:08 -- common/autotest_common.sh@904 -- # return 0 00:19:09.319 12:47:08 -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:19:09.319 12:47:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:09.319 12:47:08 -- common/autotest_common.sh@10 -- # set +x 00:19:09.319 12:47:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:09.319 12:47:08 -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:19:09.319 12:47:08 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:19:09.319 12:47:08 -- common/autotest_common.sh@901 -- # local max=10 00:19:09.319 12:47:08 -- common/autotest_common.sh@902 -- # (( max-- )) 00:19:09.319 12:47:08 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:19:09.319 12:47:08 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:19:09.319 12:47:08 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:09.319 12:47:08 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:09.319 12:47:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:09.319 12:47:08 -- common/autotest_common.sh@10 -- # set +x 00:19:09.319 12:47:08 -- host/discovery.sh@59 -- # sort 00:19:09.319 12:47:08 -- host/discovery.sh@59 -- # xargs 00:19:09.319 12:47:08 -- common/autotest_common.sh@577 -- # [[ 0 == 
0 ]] 00:19:09.319 12:47:08 -- common/autotest_common.sh@903 -- # [[ '' == \n\v\m\e\0 ]] 00:19:09.319 12:47:08 -- common/autotest_common.sh@906 -- # sleep 1 00:19:09.887 [2024-04-16 12:47:08.840070] bdev_nvme.c:6906:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:19:09.887 [2024-04-16 12:47:08.840104] bdev_nvme.c:6986:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:19:09.887 [2024-04-16 12:47:08.840130] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:19:09.887 [2024-04-16 12:47:08.928396] bdev_nvme.c:6835:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:19:10.156 [2024-04-16 12:47:09.031386] bdev_nvme.c:6725:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:19:10.156 [2024-04-16 12:47:09.031415] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:19:10.414 12:47:09 -- common/autotest_common.sh@902 -- # (( max-- )) 00:19:10.414 12:47:09 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:19:10.414 12:47:09 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:19:10.414 12:47:09 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:10.414 12:47:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:10.414 12:47:09 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:10.414 12:47:09 -- common/autotest_common.sh@10 -- # set +x 00:19:10.414 12:47:09 -- host/discovery.sh@59 -- # sort 00:19:10.414 12:47:09 -- host/discovery.sh@59 -- # xargs 00:19:10.414 12:47:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:10.414 12:47:09 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:10.414 12:47:09 -- common/autotest_common.sh@904 -- # return 0 00:19:10.414 12:47:09 -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:19:10.414 12:47:09 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:19:10.414 12:47:09 -- common/autotest_common.sh@901 -- # local max=10 00:19:10.414 12:47:09 -- common/autotest_common.sh@902 -- # (( max-- )) 00:19:10.414 12:47:09 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:19:10.414 12:47:09 -- common/autotest_common.sh@903 -- # get_bdev_list 00:19:10.414 12:47:09 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:10.414 12:47:09 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:10.414 12:47:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:10.414 12:47:09 -- common/autotest_common.sh@10 -- # set +x 00:19:10.414 12:47:09 -- host/discovery.sh@55 -- # sort 00:19:10.414 12:47:09 -- host/discovery.sh@55 -- # xargs 00:19:10.414 12:47:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:10.414 12:47:09 -- common/autotest_common.sh@903 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:19:10.414 12:47:09 -- common/autotest_common.sh@904 -- # return 0 00:19:10.414 12:47:09 -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:19:10.414 12:47:09 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:19:10.414 12:47:09 -- common/autotest_common.sh@901 -- # local max=10 00:19:10.414 12:47:09 -- 
common/autotest_common.sh@902 -- # (( max-- )) 00:19:10.414 12:47:09 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:19:10.414 12:47:09 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:19:10.414 12:47:09 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:19:10.414 12:47:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:10.414 12:47:09 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:19:10.414 12:47:09 -- common/autotest_common.sh@10 -- # set +x 00:19:10.414 12:47:09 -- host/discovery.sh@63 -- # sort -n 00:19:10.414 12:47:09 -- host/discovery.sh@63 -- # xargs 00:19:10.414 12:47:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:10.414 12:47:09 -- common/autotest_common.sh@903 -- # [[ 4420 == \4\4\2\0 ]] 00:19:10.414 12:47:09 -- common/autotest_common.sh@904 -- # return 0 00:19:10.414 12:47:09 -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:19:10.414 12:47:09 -- host/discovery.sh@79 -- # expected_count=1 00:19:10.414 12:47:09 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:19:10.415 12:47:09 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:19:10.415 12:47:09 -- common/autotest_common.sh@901 -- # local max=10 00:19:10.415 12:47:09 -- common/autotest_common.sh@902 -- # (( max-- )) 00:19:10.415 12:47:09 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:19:10.415 12:47:09 -- common/autotest_common.sh@903 -- # get_notification_count 00:19:10.415 12:47:09 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:19:10.415 12:47:09 -- host/discovery.sh@74 -- # jq '. 
| length' 00:19:10.415 12:47:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:10.415 12:47:09 -- common/autotest_common.sh@10 -- # set +x 00:19:10.415 12:47:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:10.415 12:47:09 -- host/discovery.sh@74 -- # notification_count=1 00:19:10.415 12:47:09 -- host/discovery.sh@75 -- # notify_id=1 00:19:10.415 12:47:09 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:19:10.415 12:47:09 -- common/autotest_common.sh@904 -- # return 0 00:19:10.415 12:47:09 -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:19:10.415 12:47:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:10.415 12:47:09 -- common/autotest_common.sh@10 -- # set +x 00:19:10.415 12:47:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:10.415 12:47:09 -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:10.415 12:47:09 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:10.415 12:47:09 -- common/autotest_common.sh@901 -- # local max=10 00:19:10.415 12:47:09 -- common/autotest_common.sh@902 -- # (( max-- )) 00:19:10.415 12:47:09 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:19:10.415 12:47:09 -- common/autotest_common.sh@903 -- # get_bdev_list 00:19:10.673 12:47:09 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:10.673 12:47:09 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:10.673 12:47:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:10.673 12:47:09 -- common/autotest_common.sh@10 -- # set +x 00:19:10.673 12:47:09 -- host/discovery.sh@55 -- # sort 00:19:10.673 12:47:09 -- host/discovery.sh@55 -- # xargs 00:19:10.673 12:47:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:10.673 12:47:09 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:19:10.673 12:47:09 -- common/autotest_common.sh@904 -- # return 0 00:19:10.673 12:47:09 -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:19:10.673 12:47:09 -- host/discovery.sh@79 -- # expected_count=1 00:19:10.673 12:47:09 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:19:10.673 12:47:09 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:19:10.673 12:47:09 -- common/autotest_common.sh@901 -- # local max=10 00:19:10.673 12:47:09 -- common/autotest_common.sh@902 -- # (( max-- )) 00:19:10.673 12:47:09 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:19:10.673 12:47:09 -- common/autotest_common.sh@903 -- # get_notification_count 00:19:10.673 12:47:09 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:19:10.673 12:47:09 -- host/discovery.sh@74 -- # jq '. 
| length' 00:19:10.673 12:47:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:10.673 12:47:09 -- common/autotest_common.sh@10 -- # set +x 00:19:10.673 12:47:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:10.673 12:47:09 -- host/discovery.sh@74 -- # notification_count=1 00:19:10.673 12:47:09 -- host/discovery.sh@75 -- # notify_id=2 00:19:10.673 12:47:09 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:19:10.673 12:47:09 -- common/autotest_common.sh@904 -- # return 0 00:19:10.673 12:47:09 -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:19:10.673 12:47:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:10.673 12:47:09 -- common/autotest_common.sh@10 -- # set +x 00:19:10.673 [2024-04-16 12:47:09.576761] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:19:10.673 [2024-04-16 12:47:09.577904] bdev_nvme.c:6888:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:19:10.673 [2024-04-16 12:47:09.577953] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:19:10.673 12:47:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:10.673 12:47:09 -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:19:10.673 12:47:09 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:19:10.673 12:47:09 -- common/autotest_common.sh@901 -- # local max=10 00:19:10.673 12:47:09 -- common/autotest_common.sh@902 -- # (( max-- )) 00:19:10.673 12:47:09 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:19:10.673 12:47:09 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:19:10.673 12:47:09 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:10.673 12:47:09 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:10.673 12:47:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:10.673 12:47:09 -- common/autotest_common.sh@10 -- # set +x 00:19:10.673 12:47:09 -- host/discovery.sh@59 -- # sort 00:19:10.673 12:47:09 -- host/discovery.sh@59 -- # xargs 00:19:10.673 12:47:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:10.673 12:47:09 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:10.673 12:47:09 -- common/autotest_common.sh@904 -- # return 0 00:19:10.673 12:47:09 -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:10.673 12:47:09 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:10.673 12:47:09 -- common/autotest_common.sh@901 -- # local max=10 00:19:10.673 12:47:09 -- common/autotest_common.sh@902 -- # (( max-- )) 00:19:10.673 12:47:09 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:19:10.673 12:47:09 -- common/autotest_common.sh@903 -- # get_bdev_list 00:19:10.673 12:47:09 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:10.673 12:47:09 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:10.673 12:47:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:10.673 12:47:09 -- common/autotest_common.sh@10 -- # set +x 00:19:10.673 12:47:09 -- host/discovery.sh@55 -- # sort 00:19:10.673 12:47:09 -- host/discovery.sh@55 -- # xargs 00:19:10.673 12:47:09 -- common/autotest_common.sh@577 
-- # [[ 0 == 0 ]] 00:19:10.673 12:47:09 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:19:10.673 12:47:09 -- common/autotest_common.sh@904 -- # return 0 00:19:10.673 12:47:09 -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:19:10.673 12:47:09 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:19:10.673 12:47:09 -- common/autotest_common.sh@901 -- # local max=10 00:19:10.673 12:47:09 -- common/autotest_common.sh@902 -- # (( max-- )) 00:19:10.673 12:47:09 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:19:10.673 12:47:09 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:19:10.673 12:47:09 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:19:10.673 12:47:09 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:19:10.673 12:47:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:10.673 12:47:09 -- host/discovery.sh@63 -- # sort -n 00:19:10.673 12:47:09 -- common/autotest_common.sh@10 -- # set +x 00:19:10.673 12:47:09 -- host/discovery.sh@63 -- # xargs 00:19:10.673 12:47:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:10.673 [2024-04-16 12:47:09.704345] bdev_nvme.c:6830:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:19:10.673 12:47:09 -- common/autotest_common.sh@903 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:19:10.673 12:47:09 -- common/autotest_common.sh@906 -- # sleep 1 00:19:11.239 [2024-04-16 12:47:10.006766] bdev_nvme.c:6725:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:19:11.239 [2024-04-16 12:47:10.006791] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:19:11.239 [2024-04-16 12:47:10.006801] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:19:11.807 12:47:10 -- common/autotest_common.sh@902 -- # (( max-- )) 00:19:11.807 12:47:10 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:19:11.807 12:47:10 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:19:11.807 12:47:10 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:19:11.807 12:47:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:11.807 12:47:10 -- common/autotest_common.sh@10 -- # set +x 00:19:11.807 12:47:10 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:19:11.807 12:47:10 -- host/discovery.sh@63 -- # sort -n 00:19:11.807 12:47:10 -- host/discovery.sh@63 -- # xargs 00:19:11.807 12:47:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:11.807 12:47:10 -- common/autotest_common.sh@903 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:19:11.807 12:47:10 -- common/autotest_common.sh@904 -- # return 0 00:19:11.807 12:47:10 -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:19:11.808 12:47:10 -- host/discovery.sh@79 -- # expected_count=0 00:19:11.808 12:47:10 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:19:11.808 12:47:10 -- 
common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:19:11.808 12:47:10 -- common/autotest_common.sh@901 -- # local max=10 00:19:11.808 12:47:10 -- common/autotest_common.sh@902 -- # (( max-- )) 00:19:11.808 12:47:10 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:19:11.808 12:47:10 -- common/autotest_common.sh@903 -- # get_notification_count 00:19:11.808 12:47:10 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:19:11.808 12:47:10 -- host/discovery.sh@74 -- # jq '. | length' 00:19:11.808 12:47:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:11.808 12:47:10 -- common/autotest_common.sh@10 -- # set +x 00:19:11.808 12:47:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:11.808 12:47:10 -- host/discovery.sh@74 -- # notification_count=0 00:19:11.808 12:47:10 -- host/discovery.sh@75 -- # notify_id=2 00:19:11.808 12:47:10 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:19:11.808 12:47:10 -- common/autotest_common.sh@904 -- # return 0 00:19:11.808 12:47:10 -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:11.808 12:47:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:11.808 12:47:10 -- common/autotest_common.sh@10 -- # set +x 00:19:11.808 [2024-04-16 12:47:10.801363] bdev_nvme.c:6888:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:19:11.808 [2024-04-16 12:47:10.801407] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:19:11.808 [2024-04-16 12:47:10.802825] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:11.808 [2024-04-16 12:47:10.802884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.808 [2024-04-16 12:47:10.802914] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:11.808 [2024-04-16 12:47:10.802930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.808 [2024-04-16 12:47:10.802947] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:11.808 [2024-04-16 12:47:10.802962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.808 [2024-04-16 12:47:10.802982] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:11.808 [2024-04-16 12:47:10.802997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.808 [2024-04-16 12:47:10.803012] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf73790 is same with the state(5) to be set 00:19:11.808 12:47:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:11.808 12:47:10 -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:19:11.808 12:47:10 -- common/autotest_common.sh@900 -- # local 'cond=[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:19:11.808 12:47:10 -- common/autotest_common.sh@901 -- # local max=10 00:19:11.808 12:47:10 -- common/autotest_common.sh@902 -- # (( max-- )) 00:19:11.808 12:47:10 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:19:11.808 12:47:10 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:19:11.808 12:47:10 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:11.808 12:47:10 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:11.808 12:47:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:11.808 12:47:10 -- common/autotest_common.sh@10 -- # set +x 00:19:11.808 12:47:10 -- host/discovery.sh@59 -- # sort 00:19:11.808 12:47:10 -- host/discovery.sh@59 -- # xargs 00:19:11.808 [2024-04-16 12:47:10.812833] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf73790 (9): Bad file descriptor 00:19:11.808 12:47:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:11.808 [2024-04-16 12:47:10.822890] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:11.808 [2024-04-16 12:47:10.823162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:11.808 [2024-04-16 12:47:10.823357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:11.808 [2024-04-16 12:47:10.823386] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf73790 with addr=10.0.0.2, port=4420 00:19:11.808 [2024-04-16 12:47:10.823405] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf73790 is same with the state(5) to be set 00:19:11.808 [2024-04-16 12:47:10.823432] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf73790 (9): Bad file descriptor 00:19:11.808 [2024-04-16 12:47:10.823474] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:19:11.808 [2024-04-16 12:47:10.823496] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:19:11.808 [2024-04-16 12:47:10.823514] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:19:11.808 [2024-04-16 12:47:10.823537] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:11.808 [2024-04-16 12:47:10.832974] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:11.808 [2024-04-16 12:47:10.833182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:11.808 [2024-04-16 12:47:10.833334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:11.808 [2024-04-16 12:47:10.833362] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf73790 with addr=10.0.0.2, port=4420 00:19:11.808 [2024-04-16 12:47:10.833381] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf73790 is same with the state(5) to be set 00:19:11.808 [2024-04-16 12:47:10.833406] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf73790 (9): Bad file descriptor 00:19:11.808 [2024-04-16 12:47:10.833429] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:19:11.808 [2024-04-16 12:47:10.833445] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:19:11.808 [2024-04-16 12:47:10.833460] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:19:11.808 [2024-04-16 12:47:10.833481] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:11.808 [2024-04-16 12:47:10.843050] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:11.808 [2024-04-16 12:47:10.843281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:11.808 [2024-04-16 12:47:10.843426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:11.808 [2024-04-16 12:47:10.843459] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf73790 with addr=10.0.0.2, port=4420 00:19:11.808 [2024-04-16 12:47:10.843477] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf73790 is same with the state(5) to be set 00:19:11.808 [2024-04-16 12:47:10.843502] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf73790 (9): Bad file descriptor 00:19:11.808 [2024-04-16 12:47:10.843526] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:19:11.808 [2024-04-16 12:47:10.843541] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:19:11.808 [2024-04-16 12:47:10.843556] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:19:11.808 [2024-04-16 12:47:10.843612] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
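As an aside, errno 111 in these connect() failures is ECONNREFUSED on Linux: the TCP SYN to the closed 4420 port is typically answered with an RST. A quick lookup (header path is distro-dependent):

    grep -w 111 /usr/include/asm-generic/errno.h
    # expected: #define ECONNREFUSED 111 /* Connection refused */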
00:19:11.808 12:47:10 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:11.808 12:47:10 -- common/autotest_common.sh@904 -- # return 0 00:19:11.808 12:47:10 -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:11.808 12:47:10 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:11.808 12:47:10 -- common/autotest_common.sh@901 -- # local max=10 00:19:11.808 12:47:10 -- common/autotest_common.sh@902 -- # (( max-- )) 00:19:11.808 12:47:10 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:19:11.808 12:47:10 -- common/autotest_common.sh@903 -- # get_bdev_list 00:19:11.808 12:47:10 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:11.808 12:47:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:11.808 12:47:10 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:11.808 12:47:10 -- common/autotest_common.sh@10 -- # set +x 00:19:11.808 12:47:10 -- host/discovery.sh@55 -- # sort 00:19:11.808 12:47:10 -- host/discovery.sh@55 -- # xargs 00:19:11.808 [2024-04-16 12:47:10.853127] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:11.808 [2024-04-16 12:47:10.853377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:11.808 [2024-04-16 12:47:10.853623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:11.808 [2024-04-16 12:47:10.853650] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf73790 with addr=10.0.0.2, port=4420 00:19:11.808 [2024-04-16 12:47:10.853667] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf73790 is same with the state(5) to be set 00:19:11.808 [2024-04-16 12:47:10.853690] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf73790 (9): Bad file descriptor 00:19:11.808 [2024-04-16 12:47:10.853711] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:19:11.808 [2024-04-16 12:47:10.853725] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:19:11.808 [2024-04-16 12:47:10.853739] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:19:11.808 [2024-04-16 12:47:10.853758] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
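Note the interleaving here: while the 4420 reconnect loop keeps failing, the bdev list check still expects "nvme0n1 nvme0n2", because both namespaces stay reachable through the surviving 4421 path; removing one listener degrades a path without taking the bdevs away. The check boils down to:

    sudo ./spdk/scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name'
    # expected to remain: nvme0n1 nvme0n2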
00:19:11.808 [2024-04-16 12:47:10.863208] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:11.808 [2024-04-16 12:47:10.863408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:11.808 [2024-04-16 12:47:10.863634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:11.808 [2024-04-16 12:47:10.863661] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf73790 with addr=10.0.0.2, port=4420 00:19:11.808 [2024-04-16 12:47:10.863678] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf73790 is same with the state(5) to be set 00:19:11.808 [2024-04-16 12:47:10.863701] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf73790 (9): Bad file descriptor 00:19:11.808 [2024-04-16 12:47:10.863723] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:19:11.808 [2024-04-16 12:47:10.863737] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:19:11.809 [2024-04-16 12:47:10.863751] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:19:11.809 [2024-04-16 12:47:10.863771] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:11.809 [2024-04-16 12:47:10.873287] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:11.809 [2024-04-16 12:47:10.873437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:11.809 [2024-04-16 12:47:10.873612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:11.809 [2024-04-16 12:47:10.873638] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf73790 with addr=10.0.0.2, port=4420 00:19:11.809 [2024-04-16 12:47:10.873678] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf73790 is same with the state(5) to be set 00:19:11.809 [2024-04-16 12:47:10.873703] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf73790 (9): Bad file descriptor 00:19:11.809 [2024-04-16 12:47:10.873725] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:19:11.809 [2024-04-16 12:47:10.873739] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:19:11.809 [2024-04-16 12:47:10.873753] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:19:11.809 [2024-04-16 12:47:10.873772] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
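The "local max=10", "(( max-- ))", "eval", "sleep 1" lines repeated throughout come from the suite's waitforcondition helper. Reconstructed in sketch form from the xtrace alone (the real autotest_common.sh may differ in detail), it is a bounded polling loop:

    waitforcondition() {
        local cond=$1
        local max=10
        while (( max-- )); do
            eval "$cond" && return 0   # condition satisfied, stop polling
            sleep 1                    # otherwise retry once per second
        done
        return 1                       # condition never came true within ~10s
    }
    # e.g. waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "4421" ]]'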
00:19:11.809 12:47:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:12.067 [2024-04-16 12:47:10.883362] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:12.067 [2024-04-16 12:47:10.883594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:12.067 [2024-04-16 12:47:10.883767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:12.067 [2024-04-16 12:47:10.883793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf73790 with addr=10.0.0.2, port=4420 00:19:12.067 [2024-04-16 12:47:10.883810] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf73790 is same with the state(5) to be set 00:19:12.067 [2024-04-16 12:47:10.883833] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf73790 (9): Bad file descriptor 00:19:12.067 [2024-04-16 12:47:10.883873] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:19:12.067 [2024-04-16 12:47:10.883887] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:19:12.067 [2024-04-16 12:47:10.883900] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:19:12.067 [2024-04-16 12:47:10.883934] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:12.067 [2024-04-16 12:47:10.889065] bdev_nvme.c:6693:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:19:12.067 [2024-04-16 12:47:10.889099] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:19:12.067 12:47:10 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:19:12.067 12:47:10 -- common/autotest_common.sh@904 -- # return 0 00:19:12.067 12:47:10 -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:19:12.067 12:47:10 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:19:12.067 12:47:10 -- common/autotest_common.sh@901 -- # local max=10 00:19:12.067 12:47:10 -- common/autotest_common.sh@902 -- # (( max-- )) 00:19:12.067 12:47:10 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:19:12.067 12:47:10 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:19:12.067 12:47:10 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:19:12.067 12:47:10 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:19:12.067 12:47:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:12.067 12:47:10 -- common/autotest_common.sh@10 -- # set +x 00:19:12.067 12:47:10 -- host/discovery.sh@63 -- # sort -n 00:19:12.067 12:47:10 -- host/discovery.sh@63 -- # xargs 00:19:12.067 12:47:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:12.067 12:47:10 -- common/autotest_common.sh@903 -- # [[ 4421 == \4\4\2\1 ]] 00:19:12.067 12:47:10 -- common/autotest_common.sh@904 -- # return 0 00:19:12.067 12:47:10 -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:19:12.067 12:47:10 -- host/discovery.sh@79 -- # expected_count=0 00:19:12.067 12:47:10 -- 
host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:19:12.067 12:47:10 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:19:12.067 12:47:10 -- common/autotest_common.sh@901 -- # local max=10 00:19:12.067 12:47:10 -- common/autotest_common.sh@902 -- # (( max-- )) 00:19:12.067 12:47:10 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:19:12.067 12:47:10 -- common/autotest_common.sh@903 -- # get_notification_count 00:19:12.067 12:47:10 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:19:12.067 12:47:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:12.067 12:47:10 -- host/discovery.sh@74 -- # jq '. | length' 00:19:12.067 12:47:10 -- common/autotest_common.sh@10 -- # set +x 00:19:12.067 12:47:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:12.067 12:47:10 -- host/discovery.sh@74 -- # notification_count=0 00:19:12.067 12:47:10 -- host/discovery.sh@75 -- # notify_id=2 00:19:12.067 12:47:10 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:19:12.067 12:47:10 -- common/autotest_common.sh@904 -- # return 0 00:19:12.067 12:47:10 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:19:12.067 12:47:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:12.067 12:47:10 -- common/autotest_common.sh@10 -- # set +x 00:19:12.067 12:47:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:12.067 12:47:10 -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:19:12.067 12:47:10 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:19:12.067 12:47:10 -- common/autotest_common.sh@901 -- # local max=10 00:19:12.067 12:47:10 -- common/autotest_common.sh@902 -- # (( max-- )) 00:19:12.067 12:47:10 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:19:12.067 12:47:10 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:19:12.067 12:47:10 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:12.067 12:47:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:12.067 12:47:10 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:12.067 12:47:10 -- common/autotest_common.sh@10 -- # set +x 00:19:12.067 12:47:10 -- host/discovery.sh@59 -- # sort 00:19:12.067 12:47:10 -- host/discovery.sh@59 -- # xargs 00:19:12.067 12:47:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:12.067 12:47:11 -- common/autotest_common.sh@903 -- # [[ '' == '' ]] 00:19:12.067 12:47:11 -- common/autotest_common.sh@904 -- # return 0 00:19:12.067 12:47:11 -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:19:12.067 12:47:11 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:19:12.067 12:47:11 -- common/autotest_common.sh@901 -- # local max=10 00:19:12.067 12:47:11 -- common/autotest_common.sh@902 -- # (( max-- )) 00:19:12.067 12:47:11 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:19:12.067 12:47:11 -- common/autotest_common.sh@903 -- # get_bdev_list 00:19:12.067 12:47:11 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:12.067 12:47:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:12.067 12:47:11 -- 
host/discovery.sh@55 -- # jq -r '.[].name' 00:19:12.067 12:47:11 -- common/autotest_common.sh@10 -- # set +x 00:19:12.067 12:47:11 -- host/discovery.sh@55 -- # sort 00:19:12.067 12:47:11 -- host/discovery.sh@55 -- # xargs 00:19:12.067 12:47:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:12.067 12:47:11 -- common/autotest_common.sh@903 -- # [[ '' == '' ]] 00:19:12.067 12:47:11 -- common/autotest_common.sh@904 -- # return 0 00:19:12.067 12:47:11 -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:19:12.067 12:47:11 -- host/discovery.sh@79 -- # expected_count=2 00:19:12.067 12:47:11 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:19:12.067 12:47:11 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:19:12.067 12:47:11 -- common/autotest_common.sh@901 -- # local max=10 00:19:12.067 12:47:11 -- common/autotest_common.sh@902 -- # (( max-- )) 00:19:12.067 12:47:11 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:19:12.067 12:47:11 -- common/autotest_common.sh@903 -- # get_notification_count 00:19:12.067 12:47:11 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:19:12.067 12:47:11 -- host/discovery.sh@74 -- # jq '. | length' 00:19:12.067 12:47:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:12.067 12:47:11 -- common/autotest_common.sh@10 -- # set +x 00:19:12.067 12:47:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:12.067 12:47:11 -- host/discovery.sh@74 -- # notification_count=2 00:19:12.067 12:47:11 -- host/discovery.sh@75 -- # notify_id=4 00:19:12.067 12:47:11 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:19:12.067 12:47:11 -- common/autotest_common.sh@904 -- # return 0 00:19:12.067 12:47:11 -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:12.067 12:47:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:12.067 12:47:11 -- common/autotest_common.sh@10 -- # set +x 00:19:13.441 [2024-04-16 12:47:12.168763] bdev_nvme.c:6906:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:19:13.441 [2024-04-16 12:47:12.168796] bdev_nvme.c:6986:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:19:13.441 [2024-04-16 12:47:12.168819] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:19:13.441 [2024-04-16 12:47:12.255098] bdev_nvme.c:6835:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:19:13.699 [2024-04-16 12:47:12.568339] bdev_nvme.c:6725:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:19:13.699 [2024-04-16 12:47:12.568402] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:19:13.699 12:47:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:13.699 12:47:12 -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:13.699 12:47:12 -- common/autotest_common.sh@638 -- # local es=0 00:19:13.699 12:47:12 -- common/autotest_common.sh@640 -- # 
valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:13.699 12:47:12 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:19:13.699 12:47:12 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:13.699 12:47:12 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:19:13.699 12:47:12 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:13.699 12:47:12 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:13.699 12:47:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:13.699 12:47:12 -- common/autotest_common.sh@10 -- # set +x 00:19:13.699 request: 00:19:13.699 { 00:19:13.699 "name": "nvme", 00:19:13.699 "trtype": "tcp", 00:19:13.699 "traddr": "10.0.0.2", 00:19:13.699 "hostnqn": "nqn.2021-12.io.spdk:test", 00:19:13.699 "adrfam": "ipv4", 00:19:13.699 "trsvcid": "8009", 00:19:13.699 "wait_for_attach": true, 00:19:13.699 "method": "bdev_nvme_start_discovery", 00:19:13.699 "req_id": 1 00:19:13.699 } 00:19:13.699 Got JSON-RPC error response 00:19:13.699 response: 00:19:13.699 { 00:19:13.699 "code": -17, 00:19:13.699 "message": "File exists" 00:19:13.699 } 00:19:13.699 12:47:12 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:19:13.699 12:47:12 -- common/autotest_common.sh@641 -- # es=1 00:19:13.699 12:47:12 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:19:13.699 12:47:12 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:19:13.699 12:47:12 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:19:13.699 12:47:12 -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:19:13.699 12:47:12 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:19:13.699 12:47:12 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:19:13.700 12:47:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:13.700 12:47:12 -- common/autotest_common.sh@10 -- # set +x 00:19:13.700 12:47:12 -- host/discovery.sh@67 -- # sort 00:19:13.700 12:47:12 -- host/discovery.sh@67 -- # xargs 00:19:13.700 12:47:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:13.700 12:47:12 -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:19:13.700 12:47:12 -- host/discovery.sh@146 -- # get_bdev_list 00:19:13.700 12:47:12 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:13.700 12:47:12 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:13.700 12:47:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:13.700 12:47:12 -- host/discovery.sh@55 -- # sort 00:19:13.700 12:47:12 -- common/autotest_common.sh@10 -- # set +x 00:19:13.700 12:47:12 -- host/discovery.sh@55 -- # xargs 00:19:13.700 12:47:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:13.700 12:47:12 -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:19:13.700 12:47:12 -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:13.700 12:47:12 -- common/autotest_common.sh@638 -- # local es=0 00:19:13.700 12:47:12 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:13.700 12:47:12 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 
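The -17 ("File exists") response above is the expected guard: a discovery service is already being followed for 10.0.0.2:8009, so a second bdev_nvme_start_discovery against the same address is rejected whether it reuses the name or, as tried next, picks a new one (nvme_second). The final negative case then points nvme_second at port 8010, where nothing listens, with a 3-second attach timeout, and ends in -110 ("Connection timed out"). Condensed, with flags copied verbatim from the log, the three negative probes are:

    rpc_host='sudo ./spdk/scripts/rpc.py -s /tmp/host.sock'
    $rpc_host bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 \
        -f ipv4 -q nqn.2021-12.io.spdk:test -w        # duplicate start: -17, File exists
    $rpc_host bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 \
        -f ipv4 -q nqn.2021-12.io.spdk:test -w        # same address, new name: -17 again
    $rpc_host bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 \
        -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000   # dead port: -110, Connection timed out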
00:19:13.700 12:47:12 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:13.700 12:47:12 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:19:13.700 12:47:12 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:13.700 12:47:12 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:13.700 12:47:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:13.700 12:47:12 -- common/autotest_common.sh@10 -- # set +x 00:19:13.700 request: 00:19:13.700 { 00:19:13.700 "name": "nvme_second", 00:19:13.700 "trtype": "tcp", 00:19:13.700 "traddr": "10.0.0.2", 00:19:13.700 "hostnqn": "nqn.2021-12.io.spdk:test", 00:19:13.700 "adrfam": "ipv4", 00:19:13.700 "trsvcid": "8009", 00:19:13.700 "wait_for_attach": true, 00:19:13.700 "method": "bdev_nvme_start_discovery", 00:19:13.700 "req_id": 1 00:19:13.700 } 00:19:13.700 Got JSON-RPC error response 00:19:13.700 response: 00:19:13.700 { 00:19:13.700 "code": -17, 00:19:13.700 "message": "File exists" 00:19:13.700 } 00:19:13.700 12:47:12 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:19:13.700 12:47:12 -- common/autotest_common.sh@641 -- # es=1 00:19:13.700 12:47:12 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:19:13.700 12:47:12 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:19:13.700 12:47:12 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:19:13.700 12:47:12 -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:19:13.700 12:47:12 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:19:13.700 12:47:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:13.700 12:47:12 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:19:13.700 12:47:12 -- common/autotest_common.sh@10 -- # set +x 00:19:13.700 12:47:12 -- host/discovery.sh@67 -- # sort 00:19:13.700 12:47:12 -- host/discovery.sh@67 -- # xargs 00:19:13.700 12:47:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:13.700 12:47:12 -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:19:13.700 12:47:12 -- host/discovery.sh@152 -- # get_bdev_list 00:19:13.700 12:47:12 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:13.700 12:47:12 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:13.700 12:47:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:13.700 12:47:12 -- common/autotest_common.sh@10 -- # set +x 00:19:13.700 12:47:12 -- host/discovery.sh@55 -- # sort 00:19:13.700 12:47:12 -- host/discovery.sh@55 -- # xargs 00:19:13.700 12:47:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:13.958 12:47:12 -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:19:13.958 12:47:12 -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:19:13.958 12:47:12 -- common/autotest_common.sh@638 -- # local es=0 00:19:13.958 12:47:12 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:19:13.958 12:47:12 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:19:13.958 12:47:12 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:13.958 12:47:12 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:19:13.958 12:47:12 -- 
common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:13.958 12:47:12 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:19:13.958 12:47:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:13.958 12:47:12 -- common/autotest_common.sh@10 -- # set +x 00:19:14.891 [2024-04-16 12:47:13.780377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:14.891 [2024-04-16 12:47:13.780689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:14.891 [2024-04-16 12:47:13.780717] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfad190 with addr=10.0.0.2, port=8010 00:19:14.891 [2024-04-16 12:47:13.780747] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:19:14.891 [2024-04-16 12:47:13.780764] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:19:14.891 [2024-04-16 12:47:13.780777] bdev_nvme.c:6968:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:19:15.824 [2024-04-16 12:47:14.782858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:15.824 [2024-04-16 12:47:14.783156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:15.824 [2024-04-16 12:47:14.783186] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfad190 with addr=10.0.0.2, port=8010 00:19:15.824 [2024-04-16 12:47:14.783218] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:19:15.824 [2024-04-16 12:47:14.783245] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:19:15.824 [2024-04-16 12:47:14.783260] bdev_nvme.c:6968:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:19:16.757 [2024-04-16 12:47:15.784859] bdev_nvme.c:6949:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:19:16.757 request: 00:19:16.757 { 00:19:16.757 "name": "nvme_second", 00:19:16.757 "trtype": "tcp", 00:19:16.757 "traddr": "10.0.0.2", 00:19:16.757 "hostnqn": "nqn.2021-12.io.spdk:test", 00:19:16.757 "adrfam": "ipv4", 00:19:16.757 "trsvcid": "8010", 00:19:16.757 "attach_timeout_ms": 3000, 00:19:16.757 "method": "bdev_nvme_start_discovery", 00:19:16.757 "req_id": 1 00:19:16.757 } 00:19:16.757 Got JSON-RPC error response 00:19:16.757 response: 00:19:16.757 { 00:19:16.757 "code": -110, 00:19:16.757 "message": "Connection timed out" 00:19:16.757 } 00:19:16.757 12:47:15 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:19:16.757 12:47:15 -- common/autotest_common.sh@641 -- # es=1 00:19:16.757 12:47:15 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:19:16.757 12:47:15 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:19:16.757 12:47:15 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:19:16.757 12:47:15 -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:19:16.757 12:47:15 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:19:16.757 12:47:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:16.757 12:47:15 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:19:16.757 12:47:15 -- common/autotest_common.sh@10 -- # set +x 00:19:16.757 12:47:15 -- host/discovery.sh@67 -- # sort 00:19:16.757 12:47:15 -- host/discovery.sh@67 -- # xargs 00:19:16.757 12:47:15 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:17.015 12:47:15 -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:19:17.015 12:47:15 -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:19:17.015 12:47:15 -- host/discovery.sh@161 -- # kill 1235834 00:19:17.015 12:47:15 -- host/discovery.sh@162 -- # nvmftestfini 00:19:17.015 12:47:15 -- nvmf/common.sh@477 -- # nvmfcleanup 00:19:17.015 12:47:15 -- nvmf/common.sh@117 -- # sync 00:19:17.015 12:47:15 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:17.015 12:47:15 -- nvmf/common.sh@120 -- # set +e 00:19:17.015 12:47:15 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:17.015 12:47:15 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:17.015 rmmod nvme_tcp 00:19:17.015 rmmod nvme_fabrics 00:19:17.015 rmmod nvme_keyring 00:19:17.015 12:47:15 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:17.015 12:47:15 -- nvmf/common.sh@124 -- # set -e 00:19:17.015 12:47:15 -- nvmf/common.sh@125 -- # return 0 00:19:17.015 12:47:15 -- nvmf/common.sh@478 -- # '[' -n 1235681 ']' 00:19:17.015 12:47:15 -- nvmf/common.sh@479 -- # killprocess 1235681 00:19:17.015 12:47:15 -- common/autotest_common.sh@936 -- # '[' -z 1235681 ']' 00:19:17.015 12:47:15 -- common/autotest_common.sh@940 -- # kill -0 1235681 00:19:17.015 12:47:15 -- common/autotest_common.sh@941 -- # uname 00:19:17.015 12:47:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:17.015 12:47:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1235681 00:19:17.015 12:47:15 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:19:17.015 12:47:15 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:19:17.015 12:47:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1235681' 00:19:17.015 killing process with pid 1235681 00:19:17.015 12:47:15 -- common/autotest_common.sh@955 -- # kill 1235681 00:19:17.015 12:47:15 -- common/autotest_common.sh@960 -- # wait 1235681 00:19:17.272 12:47:16 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:17.272 12:47:16 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:19:17.272 12:47:16 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:19:17.272 12:47:16 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:17.272 12:47:16 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:17.272 12:47:16 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:17.272 12:47:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:17.272 12:47:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:19.191 12:47:18 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:19.191 00:19:19.191 real 0m15.012s 00:19:19.191 user 0m21.895s 00:19:19.191 sys 0m3.102s 00:19:19.191 12:47:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:19.191 12:47:18 -- common/autotest_common.sh@10 -- # set +x 00:19:19.191 ************************************ 00:19:19.191 END TEST nvmf_discovery 00:19:19.191 ************************************ 00:19:19.450 12:47:18 -- nvmf/nvmf.sh@100 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:19:19.450 12:47:18 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:19.450 12:47:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:19.450 12:47:18 -- common/autotest_common.sh@10 -- # set +x 00:19:19.450 ************************************ 00:19:19.450 START TEST 
nvmf_discovery_remove_ifc 00:19:19.450 ************************************ 00:19:19.450 12:47:18 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:19:19.450 * Looking for test storage... 00:19:19.450 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:19:19.450 12:47:18 -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:19.450 12:47:18 -- nvmf/common.sh@7 -- # uname -s 00:19:19.450 12:47:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:19.450 12:47:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:19.450 12:47:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:19.450 12:47:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:19.450 12:47:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:19.450 12:47:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:19.450 12:47:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:19.450 12:47:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:19.450 12:47:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:19.450 12:47:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:19.450 12:47:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:19:19.450 12:47:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:19:19.450 12:47:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:19.450 12:47:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:19.450 12:47:18 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:19.450 12:47:18 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:19.450 12:47:18 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:19.450 12:47:18 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:19.450 12:47:18 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:19.450 12:47:18 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:19.450 12:47:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:19.450 12:47:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:19.450 12:47:18 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:19.450 12:47:18 -- paths/export.sh@5 -- # export PATH 00:19:19.450 12:47:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:19.450 12:47:18 -- nvmf/common.sh@47 -- # : 0 00:19:19.450 12:47:18 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:19.450 12:47:18 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:19.450 12:47:18 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:19.450 12:47:18 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:19.450 12:47:18 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:19.450 12:47:18 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:19.450 12:47:18 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:19.450 12:47:18 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:19.450 12:47:18 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:19:19.450 12:47:18 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:19:19.450 12:47:18 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:19:19.450 12:47:18 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:19:19.450 12:47:18 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:19:19.450 12:47:18 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:19:19.450 12:47:18 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:19:19.450 12:47:18 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:19:19.450 12:47:18 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:19.450 12:47:18 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:19.450 12:47:18 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:19.450 12:47:18 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:19.450 12:47:18 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:19.450 12:47:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:19.450 12:47:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:19.450 12:47:18 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:19:19.450 12:47:18 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:19:19.450 12:47:18 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:19.450 12:47:18 -- common/autotest_common.sh@10 -- # set +x 00:19:21.981 12:47:20 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:21.981 12:47:20 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:21.981 12:47:20 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:21.981 12:47:20 
-- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:21.981 12:47:20 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:21.981 12:47:20 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:21.981 12:47:20 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:21.981 12:47:20 -- nvmf/common.sh@295 -- # net_devs=() 00:19:21.981 12:47:20 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:21.981 12:47:20 -- nvmf/common.sh@296 -- # e810=() 00:19:21.981 12:47:20 -- nvmf/common.sh@296 -- # local -ga e810 00:19:21.981 12:47:20 -- nvmf/common.sh@297 -- # x722=() 00:19:21.981 12:47:20 -- nvmf/common.sh@297 -- # local -ga x722 00:19:21.981 12:47:20 -- nvmf/common.sh@298 -- # mlx=() 00:19:21.981 12:47:20 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:21.981 12:47:20 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:21.981 12:47:20 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:21.981 12:47:20 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:21.981 12:47:20 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:21.981 12:47:20 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:21.981 12:47:20 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:21.981 12:47:20 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:21.981 12:47:20 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:21.981 12:47:20 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:21.981 12:47:20 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:21.981 12:47:20 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:21.981 12:47:20 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:21.981 12:47:20 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:21.981 12:47:20 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:21.981 12:47:20 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:21.981 12:47:20 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:21.981 12:47:20 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:21.981 12:47:20 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:21.981 12:47:20 -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:19:21.981 Found 0000:82:00.0 (0x8086 - 0x159b) 00:19:21.981 12:47:20 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:21.981 12:47:20 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:21.981 12:47:20 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:21.981 12:47:20 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:21.981 12:47:20 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:21.981 12:47:20 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:21.981 12:47:20 -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:19:21.981 Found 0000:82:00.1 (0x8086 - 0x159b) 00:19:21.981 12:47:20 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:21.981 12:47:20 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:21.982 12:47:20 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:21.982 12:47:20 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:21.982 12:47:20 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:21.982 12:47:20 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:21.982 12:47:20 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:21.982 12:47:20 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:21.982 12:47:20 -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:21.982 12:47:20 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:21.982 12:47:20 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:21.982 12:47:20 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:21.982 12:47:20 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:19:21.982 Found net devices under 0000:82:00.0: cvl_0_0 00:19:21.982 12:47:20 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:21.982 12:47:20 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:21.982 12:47:20 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:21.982 12:47:20 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:21.982 12:47:20 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:21.982 12:47:20 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:19:21.982 Found net devices under 0000:82:00.1: cvl_0_1 00:19:21.982 12:47:20 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:21.982 12:47:20 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:19:21.982 12:47:20 -- nvmf/common.sh@403 -- # is_hw=yes 00:19:21.982 12:47:20 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:19:21.982 12:47:20 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:19:21.982 12:47:20 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:19:21.982 12:47:20 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:21.982 12:47:20 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:21.982 12:47:20 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:21.982 12:47:20 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:21.982 12:47:20 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:21.982 12:47:20 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:21.982 12:47:20 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:21.982 12:47:20 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:21.982 12:47:20 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:21.982 12:47:20 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:21.982 12:47:20 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:21.982 12:47:20 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:21.982 12:47:20 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:21.982 12:47:21 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:21.982 12:47:21 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:21.982 12:47:21 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:21.982 12:47:21 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:21.982 12:47:21 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:22.240 12:47:21 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:22.240 12:47:21 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:22.240 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:22.240 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.192 ms 00:19:22.240 00:19:22.240 --- 10.0.0.2 ping statistics --- 00:19:22.240 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:22.240 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:19:22.240 12:47:21 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:22.240 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:22.240 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.177 ms 00:19:22.240 00:19:22.240 --- 10.0.0.1 ping statistics --- 00:19:22.240 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:22.240 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:19:22.240 12:47:21 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:22.240 12:47:21 -- nvmf/common.sh@411 -- # return 0 00:19:22.240 12:47:21 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:19:22.240 12:47:21 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:22.240 12:47:21 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:19:22.240 12:47:21 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:19:22.240 12:47:21 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:22.240 12:47:21 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:19:22.240 12:47:21 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:19:22.240 12:47:21 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:19:22.240 12:47:21 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:22.240 12:47:21 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:22.240 12:47:21 -- common/autotest_common.sh@10 -- # set +x 00:19:22.240 12:47:21 -- nvmf/common.sh@470 -- # nvmfpid=1239297 00:19:22.240 12:47:21 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:22.240 12:47:21 -- nvmf/common.sh@471 -- # waitforlisten 1239297 00:19:22.240 12:47:21 -- common/autotest_common.sh@817 -- # '[' -z 1239297 ']' 00:19:22.240 12:47:21 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:22.240 12:47:21 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:22.240 12:47:21 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:22.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:22.240 12:47:21 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:22.240 12:47:21 -- common/autotest_common.sh@10 -- # set +x 00:19:22.240 [2024-04-16 12:47:21.134642] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 00:19:22.240 [2024-04-16 12:47:21.134725] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:22.240 EAL: No free 2048 kB hugepages reported on node 1 00:19:22.240 [2024-04-16 12:47:21.206740] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:22.499 [2024-04-16 12:47:21.310121] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:22.499 [2024-04-16 12:47:21.310176] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:22.499 [2024-04-16 12:47:21.310191] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:22.499 [2024-04-16 12:47:21.310211] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:22.499 [2024-04-16 12:47:21.310222] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:22.499 [2024-04-16 12:47:21.310267] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:22.499 12:47:21 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:22.499 12:47:21 -- common/autotest_common.sh@850 -- # return 0 00:19:22.499 12:47:21 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:22.499 12:47:21 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:22.499 12:47:21 -- common/autotest_common.sh@10 -- # set +x 00:19:22.499 12:47:21 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:22.499 12:47:21 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:19:22.499 12:47:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:22.499 12:47:21 -- common/autotest_common.sh@10 -- # set +x 00:19:22.499 [2024-04-16 12:47:21.464946] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:22.499 [2024-04-16 12:47:21.473147] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:19:22.499 null0 00:19:22.499 [2024-04-16 12:47:21.505056] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:22.499 12:47:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:22.499 12:47:21 -- host/discovery_remove_ifc.sh@59 -- # hostpid=1239438 00:19:22.499 12:47:21 -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:19:22.499 12:47:21 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1239438 /tmp/host.sock 00:19:22.499 12:47:21 -- common/autotest_common.sh@817 -- # '[' -z 1239438 ']' 00:19:22.499 12:47:21 -- common/autotest_common.sh@821 -- # local rpc_addr=/tmp/host.sock 00:19:22.499 12:47:21 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:22.499 12:47:21 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:19:22.499 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:19:22.499 12:47:21 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:22.499 12:47:21 -- common/autotest_common.sh@10 -- # set +x 00:19:22.758 [2024-04-16 12:47:21.570739] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 
00:19:22.758 [2024-04-16 12:47:21.570813] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1239438 ] 00:19:22.758 EAL: No free 2048 kB hugepages reported on node 1 00:19:22.758 [2024-04-16 12:47:21.641808] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:22.758 [2024-04-16 12:47:21.756425] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:22.758 12:47:21 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:22.758 12:47:21 -- common/autotest_common.sh@850 -- # return 0 00:19:22.758 12:47:21 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:22.758 12:47:21 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:19:22.758 12:47:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:22.758 12:47:21 -- common/autotest_common.sh@10 -- # set +x 00:19:22.758 12:47:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:22.758 12:47:21 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:19:22.758 12:47:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:22.758 12:47:21 -- common/autotest_common.sh@10 -- # set +x 00:19:23.016 12:47:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:23.016 12:47:21 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:19:23.016 12:47:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:23.016 12:47:21 -- common/autotest_common.sh@10 -- # set +x 00:19:23.949 [2024-04-16 12:47:22.972746] bdev_nvme.c:6906:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:19:23.949 [2024-04-16 12:47:22.972786] bdev_nvme.c:6986:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:19:23.949 [2024-04-16 12:47:22.972810] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:19:24.207 [2024-04-16 12:47:23.060097] bdev_nvme.c:6835:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:19:24.207 [2024-04-16 12:47:23.166118] bdev_nvme.c:7696:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:19:24.207 [2024-04-16 12:47:23.166182] bdev_nvme.c:7696:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:19:24.207 [2024-04-16 12:47:23.166226] bdev_nvme.c:7696:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:19:24.207 [2024-04-16 12:47:23.166253] bdev_nvme.c:6725:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:19:24.207 [2024-04-16 12:47:23.166295] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:19:24.207 12:47:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:24.207 12:47:23 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:19:24.207 12:47:23 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:24.207 12:47:23 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:24.207 12:47:23 -- 
host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:24.207 12:47:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:24.207 12:47:23 -- common/autotest_common.sh@10 -- # set +x 00:19:24.207 12:47:23 -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:24.207 [2024-04-16 12:47:23.171249] bdev_nvme.c:1605:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x24ba080 was disconnected and freed. delete nvme_qpair. 00:19:24.207 12:47:23 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:24.207 12:47:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:24.207 12:47:23 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:19:24.207 12:47:23 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:19:24.207 12:47:23 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:19:24.207 12:47:23 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:19:24.207 12:47:23 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:24.207 12:47:23 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:24.207 12:47:23 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:24.207 12:47:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:24.207 12:47:23 -- common/autotest_common.sh@10 -- # set +x 00:19:24.207 12:47:23 -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:24.207 12:47:23 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:24.465 12:47:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:24.465 12:47:23 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:24.465 12:47:23 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:25.398 12:47:24 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:25.398 12:47:24 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:25.398 12:47:24 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:25.398 12:47:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:25.398 12:47:24 -- common/autotest_common.sh@10 -- # set +x 00:19:25.398 12:47:24 -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:25.398 12:47:24 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:25.398 12:47:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:25.398 12:47:24 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:25.398 12:47:24 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:26.332 12:47:25 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:26.332 12:47:25 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:26.332 12:47:25 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:26.332 12:47:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:26.332 12:47:25 -- common/autotest_common.sh@10 -- # set +x 00:19:26.332 12:47:25 -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:26.332 12:47:25 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:26.332 12:47:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:26.590 12:47:25 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:26.590 12:47:25 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:27.524 12:47:26 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:27.524 12:47:26 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:27.524 12:47:26 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:27.524 12:47:26 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:19:27.524 12:47:26 -- common/autotest_common.sh@10 -- # set +x 00:19:27.524 12:47:26 -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:27.524 12:47:26 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:27.524 12:47:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:27.524 12:47:26 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:27.524 12:47:26 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:28.458 12:47:27 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:28.458 12:47:27 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:28.458 12:47:27 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:28.458 12:47:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:28.458 12:47:27 -- common/autotest_common.sh@10 -- # set +x 00:19:28.458 12:47:27 -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:28.458 12:47:27 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:28.458 12:47:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:28.458 12:47:27 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:28.458 12:47:27 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:29.838 12:47:28 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:29.838 12:47:28 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:29.838 12:47:28 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:29.838 12:47:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:29.838 12:47:28 -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:29.838 12:47:28 -- common/autotest_common.sh@10 -- # set +x 00:19:29.838 12:47:28 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:29.838 12:47:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:29.838 12:47:28 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:29.838 12:47:28 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:29.838 [2024-04-16 12:47:28.607228] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:19:29.838 [2024-04-16 12:47:28.607296] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:29.838 [2024-04-16 12:47:28.607327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.838 [2024-04-16 12:47:28.607348] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:29.838 [2024-04-16 12:47:28.607364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.838 [2024-04-16 12:47:28.607379] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:29.838 [2024-04-16 12:47:28.607395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.838 [2024-04-16 12:47:28.607413] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:29.839 [2024-04-16 12:47:28.607428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.839 [2024-04-16 12:47:28.607445] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:19:29.839 [2024-04-16 12:47:28.607460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.839 [2024-04-16 12:47:28.607475] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2480510 is same with the state(5) to be set 00:19:29.839 [2024-04-16 12:47:28.617248] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2480510 (9): Bad file descriptor 00:19:29.839 [2024-04-16 12:47:28.627294] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:30.773 12:47:29 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:30.773 12:47:29 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:30.773 12:47:29 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:30.773 12:47:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:30.773 12:47:29 -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:30.773 12:47:29 -- common/autotest_common.sh@10 -- # set +x 00:19:30.773 12:47:29 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:30.773 [2024-04-16 12:47:29.689604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:19:31.706 [2024-04-16 12:47:30.713615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:19:31.706 [2024-04-16 12:47:30.713694] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2480510 with addr=10.0.0.2, port=4420 00:19:31.706 [2024-04-16 12:47:30.713721] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2480510 is same with the state(5) to be set 00:19:31.706 [2024-04-16 12:47:30.714230] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2480510 (9): Bad file descriptor 00:19:31.706 [2024-04-16 12:47:30.714279] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
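The errno 110 connect failures and the failed reset above are the direct consequence of the step traced earlier at discovery_remove_ifc.sh@75-76: the test yanks the target-side address out from under the attached controller. The removal half, verbatim apart from comments:

  # Drop the target's address and take its interface down inside the
  # namespace; the host-side controller now has nowhere to reconnect to.
  ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down
  # With --reconnect-delay-sec 1 and --ctrlr-loss-timeout-sec 2, reconnects
  # fail with errno 110 until the loss timeout fires and the reset is
  # abandoned, at which point the discovery entry is removed.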
00:19:31.706 [2024-04-16 12:47:30.714328] bdev_nvme.c:6657:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:19:31.706 [2024-04-16 12:47:30.714370] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:31.706 [2024-04-16 12:47:30.714393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.706 [2024-04-16 12:47:30.714414] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:31.706 [2024-04-16 12:47:30.714430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.707 [2024-04-16 12:47:30.714445] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:31.707 [2024-04-16 12:47:30.714461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.707 [2024-04-16 12:47:30.714477] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:31.707 [2024-04-16 12:47:30.714493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.707 [2024-04-16 12:47:30.714510] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:19:31.707 [2024-04-16 12:47:30.714525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.707 [2024-04-16 12:47:30.714540] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:19:31.707 [2024-04-16 12:47:30.714759] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2480920 (9): Bad file descriptor 00:19:31.707 [2024-04-16 12:47:30.715777] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:19:31.707 [2024-04-16 12:47:30.715801] nvme_ctrlr.c:1148:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:19:31.707 12:47:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:31.707 12:47:30 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:31.707 12:47:30 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:33.079 12:47:31 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:33.079 12:47:31 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:33.079 12:47:31 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:33.079 12:47:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:33.079 12:47:31 -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:33.079 12:47:31 -- common/autotest_common.sh@10 -- # set +x 00:19:33.079 12:47:31 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:33.079 12:47:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:33.079 12:47:31 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:19:33.079 12:47:31 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:33.079 12:47:31 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:33.079 12:47:31 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:19:33.079 12:47:31 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:33.079 12:47:31 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:33.079 12:47:31 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:33.079 12:47:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:33.079 12:47:31 -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:33.079 12:47:31 -- common/autotest_common.sh@10 -- # set +x 00:19:33.079 12:47:31 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:33.079 12:47:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:33.079 12:47:31 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:19:33.079 12:47:31 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:34.013 [2024-04-16 12:47:32.728939] bdev_nvme.c:6906:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:19:34.013 [2024-04-16 12:47:32.728972] bdev_nvme.c:6986:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:19:34.013 [2024-04-16 12:47:32.728998] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:19:34.013 [2024-04-16 12:47:32.856409] bdev_nvme.c:6835:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:19:34.013 12:47:32 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:34.013 12:47:32 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:34.013 12:47:32 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:34.013 12:47:32 -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:34.013 12:47:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:34.013 12:47:32 -- common/autotest_common.sh@10 -- # set +x 
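Restoring the interface completes the round trip: discovery reconnects, re-attaches the subsystem under the next free controller name (nvme1), and the namespace reappears as nvme1n1, which is what the wait loop above is polling for. A sketch of the restore-and-wait step, reusing the get_bdev_list idiom (bdev_get_bdevs | jq -r '.[].name' | sort | xargs) from the traces; the rpc.py invocation is an assumption, as before:

  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  # Poll until the re-attached namespace shows up (cf. wait_for_bdev nvme1n1).
  rpc="./spdk/scripts/rpc.py -s /tmp/host.sock"   # path is an assumption
  while [[ "$($rpc bdev_get_bdevs | jq -r '.[].name' | sort | xargs)" != nvme1n1 ]]; do
      sleep 1
  done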
00:19:34.013 12:47:32 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:34.013 12:47:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:34.013 12:47:32 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:19:34.013 12:47:32 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:34.013 [2024-04-16 12:47:33.040819] bdev_nvme.c:7696:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:19:34.013 [2024-04-16 12:47:33.040886] bdev_nvme.c:7696:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:19:34.013 [2024-04-16 12:47:33.040919] bdev_nvme.c:7696:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:19:34.013 [2024-04-16 12:47:33.040957] bdev_nvme.c:6725:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:19:34.013 [2024-04-16 12:47:33.040972] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:19:34.013 [2024-04-16 12:47:33.047509] bdev_nvme.c:1605:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x248e280 was disconnected and freed. delete nvme_qpair. 00:19:34.948 12:47:33 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:34.948 12:47:33 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:34.948 12:47:33 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:34.948 12:47:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:34.948 12:47:33 -- common/autotest_common.sh@10 -- # set +x 00:19:34.948 12:47:33 -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:34.948 12:47:33 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:34.948 12:47:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:34.948 12:47:33 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:19:34.948 12:47:33 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:19:34.948 12:47:33 -- host/discovery_remove_ifc.sh@90 -- # killprocess 1239438 00:19:34.948 12:47:33 -- common/autotest_common.sh@936 -- # '[' -z 1239438 ']' 00:19:34.948 12:47:33 -- common/autotest_common.sh@940 -- # kill -0 1239438 00:19:34.948 12:47:33 -- common/autotest_common.sh@941 -- # uname 00:19:34.948 12:47:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:34.948 12:47:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1239438 00:19:34.948 12:47:33 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:34.948 12:47:33 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:34.948 12:47:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1239438' 00:19:34.948 killing process with pid 1239438 00:19:34.948 12:47:33 -- common/autotest_common.sh@955 -- # kill 1239438 00:19:34.948 12:47:33 -- common/autotest_common.sh@960 -- # wait 1239438 00:19:35.206 12:47:34 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:19:35.206 12:47:34 -- nvmf/common.sh@477 -- # nvmfcleanup 00:19:35.206 12:47:34 -- nvmf/common.sh@117 -- # sync 00:19:35.206 12:47:34 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:35.206 12:47:34 -- nvmf/common.sh@120 -- # set +e 00:19:35.206 12:47:34 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:35.206 12:47:34 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:35.206 rmmod nvme_tcp 00:19:35.206 rmmod nvme_fabrics 00:19:35.463 rmmod nvme_keyring 00:19:35.463 12:47:34 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:35.463 12:47:34 -- nvmf/common.sh@124 -- # set -e 00:19:35.463 12:47:34 
-- nvmf/common.sh@125 -- # return 0 00:19:35.463 12:47:34 -- nvmf/common.sh@478 -- # '[' -n 1239297 ']' 00:19:35.463 12:47:34 -- nvmf/common.sh@479 -- # killprocess 1239297 00:19:35.463 12:47:34 -- common/autotest_common.sh@936 -- # '[' -z 1239297 ']' 00:19:35.463 12:47:34 -- common/autotest_common.sh@940 -- # kill -0 1239297 00:19:35.463 12:47:34 -- common/autotest_common.sh@941 -- # uname 00:19:35.463 12:47:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:35.463 12:47:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1239297 00:19:35.463 12:47:34 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:19:35.463 12:47:34 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:19:35.463 12:47:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1239297' 00:19:35.463 killing process with pid 1239297 00:19:35.463 12:47:34 -- common/autotest_common.sh@955 -- # kill 1239297 00:19:35.463 12:47:34 -- common/autotest_common.sh@960 -- # wait 1239297 00:19:35.723 12:47:34 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:35.723 12:47:34 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:19:35.723 12:47:34 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:19:35.723 12:47:34 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:35.723 12:47:34 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:35.723 12:47:34 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:35.723 12:47:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:35.723 12:47:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:37.636 12:47:36 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:37.636 00:19:37.636 real 0m18.277s 00:19:37.636 user 0m24.809s 00:19:37.636 sys 0m3.439s 00:19:37.636 12:47:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:37.636 12:47:36 -- common/autotest_common.sh@10 -- # set +x 00:19:37.636 ************************************ 00:19:37.636 END TEST nvmf_discovery_remove_ifc 00:19:37.636 ************************************ 00:19:37.636 12:47:36 -- nvmf/nvmf.sh@101 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:19:37.636 12:47:36 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:37.636 12:47:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:37.636 12:47:36 -- common/autotest_common.sh@10 -- # set +x 00:19:37.894 ************************************ 00:19:37.894 START TEST nvmf_identify_kernel_target 00:19:37.894 ************************************ 00:19:37.894 12:47:36 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:19:37.894 * Looking for test storage... 
00:19:37.894 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:19:37.894 12:47:36 -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:37.894 12:47:36 -- nvmf/common.sh@7 -- # uname -s 00:19:37.894 12:47:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:37.894 12:47:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:37.894 12:47:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:37.894 12:47:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:37.894 12:47:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:37.894 12:47:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:37.894 12:47:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:37.894 12:47:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:37.894 12:47:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:37.894 12:47:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:37.894 12:47:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:19:37.894 12:47:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:19:37.894 12:47:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:37.894 12:47:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:37.894 12:47:36 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:37.894 12:47:36 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:37.894 12:47:36 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:37.894 12:47:36 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:37.894 12:47:36 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:37.894 12:47:36 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:37.894 12:47:36 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:37.894 12:47:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:37.894 12:47:36 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:37.894 12:47:36 -- paths/export.sh@5 -- # export PATH 00:19:37.895 12:47:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:37.895 12:47:36 -- nvmf/common.sh@47 -- # : 0 00:19:37.895 12:47:36 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:37.895 12:47:36 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:37.895 12:47:36 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:37.895 12:47:36 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:37.895 12:47:36 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:37.895 12:47:36 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:37.895 12:47:36 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:37.895 12:47:36 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:37.895 12:47:36 -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:19:37.895 12:47:36 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:19:37.895 12:47:36 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:37.895 12:47:36 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:37.895 12:47:36 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:37.895 12:47:36 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:37.895 12:47:36 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:37.895 12:47:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:37.895 12:47:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:37.895 12:47:36 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:19:37.895 12:47:36 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:19:37.895 12:47:36 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:37.895 12:47:36 -- common/autotest_common.sh@10 -- # set +x 00:19:40.434 12:47:39 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:40.434 12:47:39 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:40.434 12:47:39 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:40.434 12:47:39 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:40.434 12:47:39 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:40.434 12:47:39 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:40.434 12:47:39 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:40.434 12:47:39 -- nvmf/common.sh@295 -- # net_devs=() 00:19:40.434 12:47:39 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:40.434 12:47:39 -- nvmf/common.sh@296 -- # e810=() 00:19:40.434 12:47:39 -- nvmf/common.sh@296 -- # local -ga e810 00:19:40.434 12:47:39 -- nvmf/common.sh@297 -- # 
x722=() 00:19:40.434 12:47:39 -- nvmf/common.sh@297 -- # local -ga x722 00:19:40.434 12:47:39 -- nvmf/common.sh@298 -- # mlx=() 00:19:40.434 12:47:39 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:40.434 12:47:39 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:40.434 12:47:39 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:40.434 12:47:39 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:40.434 12:47:39 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:40.434 12:47:39 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:40.434 12:47:39 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:40.434 12:47:39 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:40.434 12:47:39 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:40.434 12:47:39 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:40.434 12:47:39 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:40.434 12:47:39 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:40.434 12:47:39 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:40.434 12:47:39 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:40.434 12:47:39 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:40.434 12:47:39 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:40.434 12:47:39 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:40.434 12:47:39 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:40.434 12:47:39 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:40.434 12:47:39 -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:19:40.434 Found 0000:82:00.0 (0x8086 - 0x159b) 00:19:40.434 12:47:39 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:40.434 12:47:39 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:40.434 12:47:39 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:40.434 12:47:39 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:40.434 12:47:39 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:40.434 12:47:39 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:40.434 12:47:39 -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:19:40.434 Found 0000:82:00.1 (0x8086 - 0x159b) 00:19:40.434 12:47:39 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:40.434 12:47:39 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:40.434 12:47:39 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:40.434 12:47:39 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:40.434 12:47:39 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:40.434 12:47:39 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:40.434 12:47:39 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:40.434 12:47:39 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:40.434 12:47:39 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:40.434 12:47:39 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:40.434 12:47:39 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:40.434 12:47:39 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:40.434 12:47:39 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:19:40.434 Found net devices under 0000:82:00.0: cvl_0_0 00:19:40.434 12:47:39 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 
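The NIC bucketing traced above never parses lspci output: the harness keys arrays of E810/X722/Mellanox parts by PCI vendor:device ID and resolves each match to its interface name through sysfs. A minimal standalone sketch of that lookup (not the harness function itself), limited to the two E810 device IDs this run cares about:

  # Walk sysfs and report net interfaces backed by Intel E810 parts
  # (device IDs 0x1592 / 0x159b, as in the pci_bus_cache lookups above).
  for pci in /sys/bus/pci/devices/*; do
    vendor=$(cat "$pci/vendor")     # e.g. 0x8086 (Intel)
    device=$(cat "$pci/device")     # e.g. 0x159b
    [ "$vendor" = 0x8086 ] || continue
    case "$device" in
      0x1592|0x159b)
        for net in "$pci"/net/*; do # netdev registered for this PCI function
          [ -e "$net" ] && echo "Found net devices under ${pci##*/}: ${net##*/}"
        done ;;
    esac
  done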
00:19:40.434 12:47:39 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:40.434 12:47:39 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:40.434 12:47:39 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:40.434 12:47:39 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:40.434 12:47:39 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:19:40.434 Found net devices under 0000:82:00.1: cvl_0_1 00:19:40.434 12:47:39 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:40.434 12:47:39 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:19:40.434 12:47:39 -- nvmf/common.sh@403 -- # is_hw=yes 00:19:40.434 12:47:39 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:19:40.434 12:47:39 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:19:40.434 12:47:39 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:19:40.434 12:47:39 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:40.434 12:47:39 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:40.434 12:47:39 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:40.434 12:47:39 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:40.434 12:47:39 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:40.434 12:47:39 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:40.434 12:47:39 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:40.434 12:47:39 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:40.434 12:47:39 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:40.434 12:47:39 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:40.434 12:47:39 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:40.434 12:47:39 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:40.434 12:47:39 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:40.434 12:47:39 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:40.434 12:47:39 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:40.434 12:47:39 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:40.434 12:47:39 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:40.434 12:47:39 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:40.434 12:47:39 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:40.434 12:47:39 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:40.434 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:40.434 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.186 ms 00:19:40.434 00:19:40.434 --- 10.0.0.2 ping statistics --- 00:19:40.434 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:40.434 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:19:40.434 12:47:39 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:40.435 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:40.435 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.124 ms 00:19:40.435 00:19:40.435 --- 10.0.0.1 ping statistics --- 00:19:40.435 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:40.435 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:19:40.435 12:47:39 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:40.435 12:47:39 -- nvmf/common.sh@411 -- # return 0 00:19:40.435 12:47:39 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:19:40.435 12:47:39 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:40.435 12:47:39 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:19:40.435 12:47:39 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:19:40.435 12:47:39 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:40.435 12:47:39 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:19:40.435 12:47:39 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:19:40.435 12:47:39 -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:19:40.435 12:47:39 -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:19:40.435 12:47:39 -- nvmf/common.sh@717 -- # local ip 00:19:40.435 12:47:39 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:40.435 12:47:39 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:40.435 12:47:39 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:40.435 12:47:39 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:40.435 12:47:39 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:40.435 12:47:39 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:40.435 12:47:39 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:40.435 12:47:39 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:40.435 12:47:39 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:40.435 12:47:39 -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:19:40.435 12:47:39 -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:19:40.435 12:47:39 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:19:40.435 12:47:39 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:19:40.435 12:47:39 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:19:40.435 12:47:39 -- nvmf/common.sh@625 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:19:40.435 12:47:39 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:19:40.435 12:47:39 -- nvmf/common.sh@628 -- # local block nvme 00:19:40.435 12:47:39 -- nvmf/common.sh@630 -- # [[ ! 
-e /sys/module/nvmet ]] 00:19:40.435 12:47:39 -- nvmf/common.sh@631 -- # modprobe nvmet 00:19:40.693 12:47:39 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:19:40.693 12:47:39 -- nvmf/common.sh@636 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:19:42.068 Waiting for block devices as requested 00:19:42.069 0000:81:00.0 (8086 0a54): vfio-pci -> nvme 00:19:42.069 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:19:42.069 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:19:42.328 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:19:42.328 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:19:42.328 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:19:42.328 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:19:42.587 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:19:42.587 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:19:42.587 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:19:42.588 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:19:42.846 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:19:42.846 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:19:42.846 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:19:42.846 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:19:43.104 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:19:43.104 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:19:43.104 12:47:42 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:19:43.104 12:47:42 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:19:43.104 12:47:42 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:19:43.104 12:47:42 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:19:43.104 12:47:42 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:19:43.104 12:47:42 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:19:43.104 12:47:42 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:19:43.104 12:47:42 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:19:43.104 12:47:42 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:19:43.363 No valid GPT data, bailing 00:19:43.363 12:47:42 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:19:43.363 12:47:42 -- scripts/common.sh@391 -- # pt= 00:19:43.363 12:47:42 -- scripts/common.sh@392 -- # return 1 00:19:43.363 12:47:42 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:19:43.363 12:47:42 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme0n1 ]] 00:19:43.363 12:47:42 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:19:43.363 12:47:42 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:19:43.363 12:47:42 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:19:43.363 12:47:42 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:19:43.363 12:47:42 -- nvmf/common.sh@656 -- # echo 1 00:19:43.363 12:47:42 -- nvmf/common.sh@657 -- # echo /dev/nvme0n1 00:19:43.363 12:47:42 -- nvmf/common.sh@658 -- # echo 1 00:19:43.363 12:47:42 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:19:43.363 12:47:42 -- nvmf/common.sh@661 -- # echo tcp 00:19:43.363 12:47:42 -- nvmf/common.sh@662 -- # echo 4420 00:19:43.363 12:47:42 -- nvmf/common.sh@663 -- # echo ipv4 00:19:43.363 12:47:42 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:19:43.363 12:47:42 -- nvmf/common.sh@669 -- # 
nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -a 10.0.0.1 -t tcp -s 4420 00:19:43.363 00:19:43.363 Discovery Log Number of Records 2, Generation counter 2 00:19:43.363 =====Discovery Log Entry 0====== 00:19:43.363 trtype: tcp 00:19:43.363 adrfam: ipv4 00:19:43.363 subtype: current discovery subsystem 00:19:43.363 treq: not specified, sq flow control disable supported 00:19:43.363 portid: 1 00:19:43.363 trsvcid: 4420 00:19:43.363 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:19:43.363 traddr: 10.0.0.1 00:19:43.363 eflags: none 00:19:43.363 sectype: none 00:19:43.363 =====Discovery Log Entry 1====== 00:19:43.363 trtype: tcp 00:19:43.363 adrfam: ipv4 00:19:43.363 subtype: nvme subsystem 00:19:43.363 treq: not specified, sq flow control disable supported 00:19:43.363 portid: 1 00:19:43.363 trsvcid: 4420 00:19:43.363 subnqn: nqn.2016-06.io.spdk:testnqn 00:19:43.363 traddr: 10.0.0.1 00:19:43.363 eflags: none 00:19:43.363 sectype: none 00:19:43.363 12:47:42 -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:19:43.363 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:19:43.363 EAL: No free 2048 kB hugepages reported on node 1 00:19:43.363 ===================================================== 00:19:43.363 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:19:43.363 ===================================================== 00:19:43.363 Controller Capabilities/Features 00:19:43.363 ================================ 00:19:43.363 Vendor ID: 0000 00:19:43.363 Subsystem Vendor ID: 0000 00:19:43.363 Serial Number: 4c3c0510f37fac32add4 00:19:43.363 Model Number: Linux 00:19:43.363 Firmware Version: 6.7.0-68 00:19:43.363 Recommended Arb Burst: 0 00:19:43.363 IEEE OUI Identifier: 00 00 00 00:19:43.363 Multi-path I/O 00:19:43.363 May have multiple subsystem ports: No 00:19:43.363 May have multiple controllers: No 00:19:43.363 Associated with SR-IOV VF: No 00:19:43.363 Max Data Transfer Size: Unlimited 00:19:43.363 Max Number of Namespaces: 0 00:19:43.363 Max Number of I/O Queues: 1024 00:19:43.363 NVMe Specification Version (VS): 1.3 00:19:43.363 NVMe Specification Version (Identify): 1.3 00:19:43.363 Maximum Queue Entries: 1024 00:19:43.363 Contiguous Queues Required: No 00:19:43.363 Arbitration Mechanisms Supported 00:19:43.363 Weighted Round Robin: Not Supported 00:19:43.363 Vendor Specific: Not Supported 00:19:43.363 Reset Timeout: 7500 ms 00:19:43.363 Doorbell Stride: 4 bytes 00:19:43.363 NVM Subsystem Reset: Not Supported 00:19:43.363 Command Sets Supported 00:19:43.363 NVM Command Set: Supported 00:19:43.363 Boot Partition: Not Supported 00:19:43.363 Memory Page Size Minimum: 4096 bytes 00:19:43.363 Memory Page Size Maximum: 4096 bytes 00:19:43.363 Persistent Memory Region: Not Supported 00:19:43.363 Optional Asynchronous Events Supported 00:19:43.363 Namespace Attribute Notices: Not Supported 00:19:43.363 Firmware Activation Notices: Not Supported 00:19:43.363 ANA Change Notices: Not Supported 00:19:43.363 PLE Aggregate Log Change Notices: Not Supported 00:19:43.363 LBA Status Info Alert Notices: Not Supported 00:19:43.363 EGE Aggregate Log Change Notices: Not Supported 00:19:43.363 Normal NVM Subsystem Shutdown event: Not Supported 00:19:43.363 Zone Descriptor Change Notices: Not Supported 00:19:43.363 Discovery Log Change Notices: Supported 
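Both discovery records above point at a target that was assembled a few lines earlier purely through nvmet configfs writes, after the spdk-gpt.py/blkid probe confirmed /dev/nvme0n1 carried no partition table. Condensed into a standalone sketch, using the NQN, device, and address values from this run; the first unlabeled `echo 1` is presumably `attr_allow_any_host`:

  # Export one block device over NVMe/TCP from the Linux kernel target.
  modprobe nvmet
  modprobe nvmet-tcp
  subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  port=/sys/kernel/config/nvmet/ports/1
  mkdir -p "$subsys/namespaces/1" "$port"
  echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"
  echo 1 > "$subsys/attr_allow_any_host"   # assumption: target of the bare 'echo 1'
  echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
  echo 1 > "$subsys/namespaces/1/enable"
  echo 10.0.0.1 > "$port/addr_traddr"
  echo tcp      > "$port/addr_trtype"
  echo 4420     > "$port/addr_trsvcid"
  echo ipv4     > "$port/addr_adrfam"
  ln -s "$subsys" "$port/subsystems/"      # expose the subsystem on the port

Once the symlink lands, the `nvme discover` above returns exactly the two records shown: the discovery subsystem itself plus nqn.2016-06.io.spdk:testnqn.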
00:19:43.363 Controller Attributes 00:19:43.363 128-bit Host Identifier: Not Supported 00:19:43.363 Non-Operational Permissive Mode: Not Supported 00:19:43.363 NVM Sets: Not Supported 00:19:43.363 Read Recovery Levels: Not Supported 00:19:43.363 Endurance Groups: Not Supported 00:19:43.363 Predictable Latency Mode: Not Supported 00:19:43.363 Traffic Based Keep ALive: Not Supported 00:19:43.363 Namespace Granularity: Not Supported 00:19:43.363 SQ Associations: Not Supported 00:19:43.363 UUID List: Not Supported 00:19:43.363 Multi-Domain Subsystem: Not Supported 00:19:43.363 Fixed Capacity Management: Not Supported 00:19:43.363 Variable Capacity Management: Not Supported 00:19:43.363 Delete Endurance Group: Not Supported 00:19:43.363 Delete NVM Set: Not Supported 00:19:43.363 Extended LBA Formats Supported: Not Supported 00:19:43.363 Flexible Data Placement Supported: Not Supported 00:19:43.363 00:19:43.363 Controller Memory Buffer Support 00:19:43.363 ================================ 00:19:43.363 Supported: No 00:19:43.363 00:19:43.363 Persistent Memory Region Support 00:19:43.363 ================================ 00:19:43.363 Supported: No 00:19:43.363 00:19:43.363 Admin Command Set Attributes 00:19:43.363 ============================ 00:19:43.363 Security Send/Receive: Not Supported 00:19:43.363 Format NVM: Not Supported 00:19:43.363 Firmware Activate/Download: Not Supported 00:19:43.363 Namespace Management: Not Supported 00:19:43.363 Device Self-Test: Not Supported 00:19:43.363 Directives: Not Supported 00:19:43.363 NVMe-MI: Not Supported 00:19:43.363 Virtualization Management: Not Supported 00:19:43.363 Doorbell Buffer Config: Not Supported 00:19:43.363 Get LBA Status Capability: Not Supported 00:19:43.363 Command & Feature Lockdown Capability: Not Supported 00:19:43.363 Abort Command Limit: 1 00:19:43.363 Async Event Request Limit: 1 00:19:43.363 Number of Firmware Slots: N/A 00:19:43.363 Firmware Slot 1 Read-Only: N/A 00:19:43.363 Firmware Activation Without Reset: N/A 00:19:43.363 Multiple Update Detection Support: N/A 00:19:43.363 Firmware Update Granularity: No Information Provided 00:19:43.363 Per-Namespace SMART Log: No 00:19:43.363 Asymmetric Namespace Access Log Page: Not Supported 00:19:43.363 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:19:43.363 Command Effects Log Page: Not Supported 00:19:43.363 Get Log Page Extended Data: Supported 00:19:43.363 Telemetry Log Pages: Not Supported 00:19:43.364 Persistent Event Log Pages: Not Supported 00:19:43.364 Supported Log Pages Log Page: May Support 00:19:43.364 Commands Supported & Effects Log Page: Not Supported 00:19:43.364 Feature Identifiers & Effects Log Page:May Support 00:19:43.364 NVMe-MI Commands & Effects Log Page: May Support 00:19:43.364 Data Area 4 for Telemetry Log: Not Supported 00:19:43.364 Error Log Page Entries Supported: 1 00:19:43.364 Keep Alive: Not Supported 00:19:43.364 00:19:43.364 NVM Command Set Attributes 00:19:43.364 ========================== 00:19:43.364 Submission Queue Entry Size 00:19:43.364 Max: 1 00:19:43.364 Min: 1 00:19:43.364 Completion Queue Entry Size 00:19:43.364 Max: 1 00:19:43.364 Min: 1 00:19:43.364 Number of Namespaces: 0 00:19:43.364 Compare Command: Not Supported 00:19:43.364 Write Uncorrectable Command: Not Supported 00:19:43.364 Dataset Management Command: Not Supported 00:19:43.364 Write Zeroes Command: Not Supported 00:19:43.364 Set Features Save Field: Not Supported 00:19:43.364 Reservations: Not Supported 00:19:43.364 Timestamp: Not Supported 00:19:43.364 Copy: Not 
Supported 00:19:43.364 Volatile Write Cache: Not Present 00:19:43.364 Atomic Write Unit (Normal): 1 00:19:43.364 Atomic Write Unit (PFail): 1 00:19:43.364 Atomic Compare & Write Unit: 1 00:19:43.364 Fused Compare & Write: Not Supported 00:19:43.364 Scatter-Gather List 00:19:43.364 SGL Command Set: Supported 00:19:43.364 SGL Keyed: Not Supported 00:19:43.364 SGL Bit Bucket Descriptor: Not Supported 00:19:43.364 SGL Metadata Pointer: Not Supported 00:19:43.364 Oversized SGL: Not Supported 00:19:43.364 SGL Metadata Address: Not Supported 00:19:43.364 SGL Offset: Supported 00:19:43.364 Transport SGL Data Block: Not Supported 00:19:43.364 Replay Protected Memory Block: Not Supported 00:19:43.364 00:19:43.364 Firmware Slot Information 00:19:43.364 ========================= 00:19:43.364 Active slot: 0 00:19:43.364 00:19:43.364 00:19:43.364 Error Log 00:19:43.364 ========= 00:19:43.364 00:19:43.364 Active Namespaces 00:19:43.364 ================= 00:19:43.364 Discovery Log Page 00:19:43.364 ================== 00:19:43.364 Generation Counter: 2 00:19:43.364 Number of Records: 2 00:19:43.364 Record Format: 0 00:19:43.364 00:19:43.364 Discovery Log Entry 0 00:19:43.364 ---------------------- 00:19:43.364 Transport Type: 3 (TCP) 00:19:43.364 Address Family: 1 (IPv4) 00:19:43.364 Subsystem Type: 3 (Current Discovery Subsystem) 00:19:43.364 Entry Flags: 00:19:43.364 Duplicate Returned Information: 0 00:19:43.364 Explicit Persistent Connection Support for Discovery: 0 00:19:43.364 Transport Requirements: 00:19:43.364 Secure Channel: Not Specified 00:19:43.364 Port ID: 1 (0x0001) 00:19:43.364 Controller ID: 65535 (0xffff) 00:19:43.364 Admin Max SQ Size: 32 00:19:43.364 Transport Service Identifier: 4420 00:19:43.364 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:19:43.364 Transport Address: 10.0.0.1 00:19:43.364 Discovery Log Entry 1 00:19:43.364 ---------------------- 00:19:43.364 Transport Type: 3 (TCP) 00:19:43.364 Address Family: 1 (IPv4) 00:19:43.364 Subsystem Type: 2 (NVM Subsystem) 00:19:43.364 Entry Flags: 00:19:43.364 Duplicate Returned Information: 0 00:19:43.364 Explicit Persistent Connection Support for Discovery: 0 00:19:43.364 Transport Requirements: 00:19:43.364 Secure Channel: Not Specified 00:19:43.364 Port ID: 1 (0x0001) 00:19:43.364 Controller ID: 65535 (0xffff) 00:19:43.364 Admin Max SQ Size: 32 00:19:43.364 Transport Service Identifier: 4420 00:19:43.364 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:19:43.364 Transport Address: 10.0.0.1 00:19:43.364 12:47:42 -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:19:43.623 EAL: No free 2048 kB hugepages reported on node 1 00:19:43.623 get_feature(0x01) failed 00:19:43.623 get_feature(0x02) failed 00:19:43.623 get_feature(0x04) failed 00:19:43.623 ===================================================== 00:19:43.623 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:19:43.623 ===================================================== 00:19:43.623 Controller Capabilities/Features 00:19:43.623 ================================ 00:19:43.623 Vendor ID: 0000 00:19:43.623 Subsystem Vendor ID: 0000 00:19:43.623 Serial Number: a36183c5a57c0abe37a0 00:19:43.623 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:19:43.623 Firmware Version: 6.7.0-68 00:19:43.623 Recommended Arb Burst: 6 00:19:43.623 IEEE OUI Identifier: 00 00 00 
00:19:43.623 Multi-path I/O 00:19:43.623 May have multiple subsystem ports: Yes 00:19:43.623 May have multiple controllers: Yes 00:19:43.623 Associated with SR-IOV VF: No 00:19:43.623 Max Data Transfer Size: Unlimited 00:19:43.623 Max Number of Namespaces: 1024 00:19:43.623 Max Number of I/O Queues: 128 00:19:43.623 NVMe Specification Version (VS): 1.3 00:19:43.623 NVMe Specification Version (Identify): 1.3 00:19:43.623 Maximum Queue Entries: 1024 00:19:43.623 Contiguous Queues Required: No 00:19:43.623 Arbitration Mechanisms Supported 00:19:43.623 Weighted Round Robin: Not Supported 00:19:43.623 Vendor Specific: Not Supported 00:19:43.623 Reset Timeout: 7500 ms 00:19:43.623 Doorbell Stride: 4 bytes 00:19:43.623 NVM Subsystem Reset: Not Supported 00:19:43.623 Command Sets Supported 00:19:43.623 NVM Command Set: Supported 00:19:43.623 Boot Partition: Not Supported 00:19:43.623 Memory Page Size Minimum: 4096 bytes 00:19:43.623 Memory Page Size Maximum: 4096 bytes 00:19:43.623 Persistent Memory Region: Not Supported 00:19:43.623 Optional Asynchronous Events Supported 00:19:43.624 Namespace Attribute Notices: Supported 00:19:43.624 Firmware Activation Notices: Not Supported 00:19:43.624 ANA Change Notices: Supported 00:19:43.624 PLE Aggregate Log Change Notices: Not Supported 00:19:43.624 LBA Status Info Alert Notices: Not Supported 00:19:43.624 EGE Aggregate Log Change Notices: Not Supported 00:19:43.624 Normal NVM Subsystem Shutdown event: Not Supported 00:19:43.624 Zone Descriptor Change Notices: Not Supported 00:19:43.624 Discovery Log Change Notices: Not Supported 00:19:43.624 Controller Attributes 00:19:43.624 128-bit Host Identifier: Supported 00:19:43.624 Non-Operational Permissive Mode: Not Supported 00:19:43.624 NVM Sets: Not Supported 00:19:43.624 Read Recovery Levels: Not Supported 00:19:43.624 Endurance Groups: Not Supported 00:19:43.624 Predictable Latency Mode: Not Supported 00:19:43.624 Traffic Based Keep ALive: Supported 00:19:43.624 Namespace Granularity: Not Supported 00:19:43.624 SQ Associations: Not Supported 00:19:43.624 UUID List: Not Supported 00:19:43.624 Multi-Domain Subsystem: Not Supported 00:19:43.624 Fixed Capacity Management: Not Supported 00:19:43.624 Variable Capacity Management: Not Supported 00:19:43.624 Delete Endurance Group: Not Supported 00:19:43.624 Delete NVM Set: Not Supported 00:19:43.624 Extended LBA Formats Supported: Not Supported 00:19:43.624 Flexible Data Placement Supported: Not Supported 00:19:43.624 00:19:43.624 Controller Memory Buffer Support 00:19:43.624 ================================ 00:19:43.624 Supported: No 00:19:43.624 00:19:43.624 Persistent Memory Region Support 00:19:43.624 ================================ 00:19:43.624 Supported: No 00:19:43.624 00:19:43.624 Admin Command Set Attributes 00:19:43.624 ============================ 00:19:43.624 Security Send/Receive: Not Supported 00:19:43.624 Format NVM: Not Supported 00:19:43.624 Firmware Activate/Download: Not Supported 00:19:43.624 Namespace Management: Not Supported 00:19:43.624 Device Self-Test: Not Supported 00:19:43.624 Directives: Not Supported 00:19:43.624 NVMe-MI: Not Supported 00:19:43.624 Virtualization Management: Not Supported 00:19:43.624 Doorbell Buffer Config: Not Supported 00:19:43.624 Get LBA Status Capability: Not Supported 00:19:43.624 Command & Feature Lockdown Capability: Not Supported 00:19:43.624 Abort Command Limit: 4 00:19:43.624 Async Event Request Limit: 4 00:19:43.624 Number of Firmware Slots: N/A 00:19:43.624 Firmware Slot 1 Read-Only: N/A 00:19:43.624 
Firmware Activation Without Reset: N/A 00:19:43.624 Multiple Update Detection Support: N/A 00:19:43.624 Firmware Update Granularity: No Information Provided 00:19:43.624 Per-Namespace SMART Log: Yes 00:19:43.624 Asymmetric Namespace Access Log Page: Supported 00:19:43.624 ANA Transition Time : 10 sec 00:19:43.624 00:19:43.624 Asymmetric Namespace Access Capabilities 00:19:43.624 ANA Optimized State : Supported 00:19:43.624 ANA Non-Optimized State : Supported 00:19:43.624 ANA Inaccessible State : Supported 00:19:43.624 ANA Persistent Loss State : Supported 00:19:43.624 ANA Change State : Supported 00:19:43.624 ANAGRPID is not changed : No 00:19:43.624 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:19:43.624 00:19:43.624 ANA Group Identifier Maximum : 128 00:19:43.624 Number of ANA Group Identifiers : 128 00:19:43.624 Max Number of Allowed Namespaces : 1024 00:19:43.624 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:19:43.624 Command Effects Log Page: Supported 00:19:43.624 Get Log Page Extended Data: Supported 00:19:43.624 Telemetry Log Pages: Not Supported 00:19:43.624 Persistent Event Log Pages: Not Supported 00:19:43.624 Supported Log Pages Log Page: May Support 00:19:43.624 Commands Supported & Effects Log Page: Not Supported 00:19:43.624 Feature Identifiers & Effects Log Page:May Support 00:19:43.624 NVMe-MI Commands & Effects Log Page: May Support 00:19:43.624 Data Area 4 for Telemetry Log: Not Supported 00:19:43.624 Error Log Page Entries Supported: 128 00:19:43.624 Keep Alive: Supported 00:19:43.624 Keep Alive Granularity: 1000 ms 00:19:43.624 00:19:43.624 NVM Command Set Attributes 00:19:43.624 ========================== 00:19:43.624 Submission Queue Entry Size 00:19:43.624 Max: 64 00:19:43.624 Min: 64 00:19:43.624 Completion Queue Entry Size 00:19:43.624 Max: 16 00:19:43.624 Min: 16 00:19:43.624 Number of Namespaces: 1024 00:19:43.624 Compare Command: Not Supported 00:19:43.624 Write Uncorrectable Command: Not Supported 00:19:43.624 Dataset Management Command: Supported 00:19:43.624 Write Zeroes Command: Supported 00:19:43.624 Set Features Save Field: Not Supported 00:19:43.624 Reservations: Not Supported 00:19:43.624 Timestamp: Not Supported 00:19:43.624 Copy: Not Supported 00:19:43.624 Volatile Write Cache: Present 00:19:43.624 Atomic Write Unit (Normal): 1 00:19:43.624 Atomic Write Unit (PFail): 1 00:19:43.624 Atomic Compare & Write Unit: 1 00:19:43.624 Fused Compare & Write: Not Supported 00:19:43.624 Scatter-Gather List 00:19:43.624 SGL Command Set: Supported 00:19:43.624 SGL Keyed: Not Supported 00:19:43.624 SGL Bit Bucket Descriptor: Not Supported 00:19:43.624 SGL Metadata Pointer: Not Supported 00:19:43.624 Oversized SGL: Not Supported 00:19:43.624 SGL Metadata Address: Not Supported 00:19:43.624 SGL Offset: Supported 00:19:43.624 Transport SGL Data Block: Not Supported 00:19:43.624 Replay Protected Memory Block: Not Supported 00:19:43.624 00:19:43.624 Firmware Slot Information 00:19:43.624 ========================= 00:19:43.624 Active slot: 0 00:19:43.624 00:19:43.624 Asymmetric Namespace Access 00:19:43.624 =========================== 00:19:43.624 Change Count : 0 00:19:43.624 Number of ANA Group Descriptors : 1 00:19:43.624 ANA Group Descriptor : 0 00:19:43.624 ANA Group ID : 1 00:19:43.624 Number of NSID Values : 1 00:19:43.624 Change Count : 0 00:19:43.624 ANA State : 1 00:19:43.624 Namespace Identifier : 1 00:19:43.624 00:19:43.624 Commands Supported and Effects 00:19:43.624 ============================== 00:19:43.624 Admin Commands 00:19:43.624 -------------- 
00:19:43.624 Get Log Page (02h): Supported 00:19:43.624 Identify (06h): Supported 00:19:43.624 Abort (08h): Supported 00:19:43.624 Set Features (09h): Supported 00:19:43.624 Get Features (0Ah): Supported 00:19:43.624 Asynchronous Event Request (0Ch): Supported 00:19:43.624 Keep Alive (18h): Supported 00:19:43.624 I/O Commands 00:19:43.624 ------------ 00:19:43.624 Flush (00h): Supported 00:19:43.624 Write (01h): Supported LBA-Change 00:19:43.624 Read (02h): Supported 00:19:43.624 Write Zeroes (08h): Supported LBA-Change 00:19:43.624 Dataset Management (09h): Supported 00:19:43.624 00:19:43.624 Error Log 00:19:43.624 ========= 00:19:43.624 Entry: 0 00:19:43.624 Error Count: 0x3 00:19:43.624 Submission Queue Id: 0x0 00:19:43.624 Command Id: 0x5 00:19:43.624 Phase Bit: 0 00:19:43.624 Status Code: 0x2 00:19:43.624 Status Code Type: 0x0 00:19:43.624 Do Not Retry: 1 00:19:43.624 Error Location: 0x28 00:19:43.624 LBA: 0x0 00:19:43.624 Namespace: 0x0 00:19:43.624 Vendor Log Page: 0x0 00:19:43.624 ----------- 00:19:43.624 Entry: 1 00:19:43.624 Error Count: 0x2 00:19:43.624 Submission Queue Id: 0x0 00:19:43.624 Command Id: 0x5 00:19:43.624 Phase Bit: 0 00:19:43.624 Status Code: 0x2 00:19:43.624 Status Code Type: 0x0 00:19:43.624 Do Not Retry: 1 00:19:43.624 Error Location: 0x28 00:19:43.624 LBA: 0x0 00:19:43.624 Namespace: 0x0 00:19:43.624 Vendor Log Page: 0x0 00:19:43.624 ----------- 00:19:43.624 Entry: 2 00:19:43.624 Error Count: 0x1 00:19:43.624 Submission Queue Id: 0x0 00:19:43.624 Command Id: 0x4 00:19:43.624 Phase Bit: 0 00:19:43.624 Status Code: 0x2 00:19:43.624 Status Code Type: 0x0 00:19:43.624 Do Not Retry: 1 00:19:43.624 Error Location: 0x28 00:19:43.624 LBA: 0x0 00:19:43.624 Namespace: 0x0 00:19:43.624 Vendor Log Page: 0x0 00:19:43.624 00:19:43.624 Number of Queues 00:19:43.624 ================ 00:19:43.624 Number of I/O Submission Queues: 128 00:19:43.624 Number of I/O Completion Queues: 128 00:19:43.624 00:19:43.624 ZNS Specific Controller Data 00:19:43.625 ============================ 00:19:43.625 Zone Append Size Limit: 0 00:19:43.625 00:19:43.625 00:19:43.625 Active Namespaces 00:19:43.625 ================= 00:19:43.625 get_feature(0x05) failed 00:19:43.625 Namespace ID:1 00:19:43.625 Command Set Identifier: NVM (00h) 00:19:43.625 Deallocate: Supported 00:19:43.625 Deallocated/Unwritten Error: Not Supported 00:19:43.625 Deallocated Read Value: Unknown 00:19:43.625 Deallocate in Write Zeroes: Not Supported 00:19:43.625 Deallocated Guard Field: 0xFFFF 00:19:43.625 Flush: Supported 00:19:43.625 Reservation: Not Supported 00:19:43.625 Namespace Sharing Capabilities: Multiple Controllers 00:19:43.625 Size (in LBAs): 3907029168 (1863GiB) 00:19:43.625 Capacity (in LBAs): 3907029168 (1863GiB) 00:19:43.625 Utilization (in LBAs): 3907029168 (1863GiB) 00:19:43.625 UUID: 18294e71-7e5d-4262-a5d9-212e5a960fe8 00:19:43.625 Thin Provisioning: Not Supported 00:19:43.625 Per-NS Atomic Units: Yes 00:19:43.625 Atomic Boundary Size (Normal): 0 00:19:43.625 Atomic Boundary Size (PFail): 0 00:19:43.625 Atomic Boundary Offset: 0 00:19:43.625 NGUID/EUI64 Never Reused: No 00:19:43.625 ANA group ID: 1 00:19:43.625 Namespace Write Protected: No 00:19:43.625 Number of LBA Formats: 1 00:19:43.625 Current LBA Format: LBA Format #00 00:19:43.625 LBA Format #00: Data Size: 512 Metadata Size: 0 00:19:43.625 00:19:43.625 12:47:42 -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:19:43.625 12:47:42 -- nvmf/common.sh@477 -- # nvmfcleanup 00:19:43.625 12:47:42 -- nvmf/common.sh@117 -- # sync 00:19:43.625 12:47:42 
-- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:43.625 12:47:42 -- nvmf/common.sh@120 -- # set +e 00:19:43.625 12:47:42 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:43.625 12:47:42 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:43.625 rmmod nvme_tcp 00:19:43.625 rmmod nvme_fabrics 00:19:43.625 12:47:42 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:43.625 12:47:42 -- nvmf/common.sh@124 -- # set -e 00:19:43.625 12:47:42 -- nvmf/common.sh@125 -- # return 0 00:19:43.625 12:47:42 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:19:43.625 12:47:42 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:43.625 12:47:42 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:19:43.625 12:47:42 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:19:43.625 12:47:42 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:43.625 12:47:42 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:43.625 12:47:42 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:43.625 12:47:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:43.625 12:47:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:45.527 12:47:44 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:45.527 12:47:44 -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:19:45.527 12:47:44 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:19:45.527 12:47:44 -- nvmf/common.sh@675 -- # echo 0 00:19:45.527 12:47:44 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:19:45.527 12:47:44 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:19:45.527 12:47:44 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:19:45.527 12:47:44 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:19:45.785 12:47:44 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:19:45.785 12:47:44 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:19:45.785 12:47:44 -- nvmf/common.sh@687 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:19:47.162 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:19:47.162 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:19:47.162 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:19:47.162 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:19:47.162 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:19:47.162 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:19:47.162 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:19:47.162 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:19:47.162 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:19:47.162 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:19:47.162 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:19:47.162 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:19:47.162 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:19:47.162 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:19:47.162 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:19:47.162 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:19:49.063 0000:81:00.0 (8086 0a54): nvme -> vfio-pci 00:19:49.063 00:19:49.063 real 0m11.244s 00:19:49.063 user 0m2.342s 00:19:49.063 sys 0m3.997s 00:19:49.063 12:47:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:49.063 12:47:48 -- common/autotest_common.sh@10 -- # set +x 00:19:49.063 ************************************ 00:19:49.063 
END TEST nvmf_identify_kernel_target 00:19:49.063 ************************************ 00:19:49.063 12:47:48 -- nvmf/nvmf.sh@102 -- # run_test nvmf_auth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:19:49.063 12:47:48 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:49.063 12:47:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:49.063 12:47:48 -- common/autotest_common.sh@10 -- # set +x 00:19:49.323 ************************************ 00:19:49.323 START TEST nvmf_auth 00:19:49.323 ************************************ 00:19:49.323 12:47:48 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:19:49.323 * Looking for test storage... 00:19:49.323 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:19:49.323 12:47:48 -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:49.323 12:47:48 -- nvmf/common.sh@7 -- # uname -s 00:19:49.323 12:47:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:49.323 12:47:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:49.323 12:47:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:49.323 12:47:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:49.323 12:47:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:49.323 12:47:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:49.323 12:47:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:49.323 12:47:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:49.323 12:47:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:49.323 12:47:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:49.323 12:47:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:19:49.323 12:47:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:19:49.323 12:47:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:49.323 12:47:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:49.323 12:47:48 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:49.323 12:47:48 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:49.323 12:47:48 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:49.323 12:47:48 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:49.323 12:47:48 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:49.323 12:47:48 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:49.323 12:47:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:49.323 12:47:48 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:49.323 12:47:48 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:49.323 12:47:48 -- paths/export.sh@5 -- # export PATH 00:19:49.323 12:47:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:49.323 12:47:48 -- nvmf/common.sh@47 -- # : 0 00:19:49.323 12:47:48 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:49.323 12:47:48 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:49.323 12:47:48 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:49.323 12:47:48 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:49.323 12:47:48 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:49.323 12:47:48 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:49.323 12:47:48 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:49.323 12:47:48 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:49.323 12:47:48 -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:19:49.323 12:47:48 -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:19:49.323 12:47:48 -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:19:49.323 12:47:48 -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:19:49.323 12:47:48 -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:19:49.323 12:47:48 -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:19:49.323 12:47:48 -- host/auth.sh@21 -- # keys=() 00:19:49.323 12:47:48 -- host/auth.sh@77 -- # nvmftestinit 00:19:49.324 12:47:48 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:19:49.324 12:47:48 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:49.324 12:47:48 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:49.324 12:47:48 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:49.324 12:47:48 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:49.324 12:47:48 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:49.324 12:47:48 -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:49.324 12:47:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:49.324 12:47:48 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:19:49.324 12:47:48 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:19:49.324 12:47:48 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:49.324 12:47:48 -- common/autotest_common.sh@10 -- # set +x 00:19:51.857 12:47:50 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:51.857 12:47:50 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:51.857 12:47:50 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:51.857 12:47:50 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:51.857 12:47:50 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:51.857 12:47:50 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:51.857 12:47:50 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:51.857 12:47:50 -- nvmf/common.sh@295 -- # net_devs=() 00:19:51.857 12:47:50 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:51.857 12:47:50 -- nvmf/common.sh@296 -- # e810=() 00:19:51.857 12:47:50 -- nvmf/common.sh@296 -- # local -ga e810 00:19:51.857 12:47:50 -- nvmf/common.sh@297 -- # x722=() 00:19:51.857 12:47:50 -- nvmf/common.sh@297 -- # local -ga x722 00:19:51.857 12:47:50 -- nvmf/common.sh@298 -- # mlx=() 00:19:51.857 12:47:50 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:51.857 12:47:50 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:51.857 12:47:50 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:51.857 12:47:50 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:51.857 12:47:50 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:51.857 12:47:50 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:51.857 12:47:50 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:51.857 12:47:50 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:51.857 12:47:50 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:51.857 12:47:50 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:51.857 12:47:50 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:51.857 12:47:50 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:51.857 12:47:50 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:51.857 12:47:50 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:51.857 12:47:50 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:51.857 12:47:50 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:51.857 12:47:50 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:51.857 12:47:50 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:51.857 12:47:50 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:51.857 12:47:50 -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:19:51.857 Found 0000:82:00.0 (0x8086 - 0x159b) 00:19:51.857 12:47:50 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:51.857 12:47:50 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:51.857 12:47:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:51.857 12:47:50 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:51.857 12:47:50 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:51.857 12:47:50 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:51.857 12:47:50 -- nvmf/common.sh@341 -- # echo 'Found 
0000:82:00.1 (0x8086 - 0x159b)' 00:19:51.857 Found 0000:82:00.1 (0x8086 - 0x159b) 00:19:51.857 12:47:50 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:51.857 12:47:50 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:51.857 12:47:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:51.857 12:47:50 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:51.857 12:47:50 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:51.857 12:47:50 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:51.857 12:47:50 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:51.857 12:47:50 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:51.857 12:47:50 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:51.857 12:47:50 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:51.857 12:47:50 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:51.857 12:47:50 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:51.857 12:47:50 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:19:51.857 Found net devices under 0000:82:00.0: cvl_0_0 00:19:51.857 12:47:50 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:51.857 12:47:50 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:51.857 12:47:50 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:51.857 12:47:50 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:51.857 12:47:50 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:51.857 12:47:50 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:19:51.857 Found net devices under 0000:82:00.1: cvl_0_1 00:19:51.857 12:47:50 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:51.857 12:47:50 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:19:51.857 12:47:50 -- nvmf/common.sh@403 -- # is_hw=yes 00:19:51.857 12:47:50 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:19:51.857 12:47:50 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:19:51.857 12:47:50 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:19:51.857 12:47:50 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:51.857 12:47:50 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:51.857 12:47:50 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:51.857 12:47:50 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:51.857 12:47:50 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:51.857 12:47:50 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:51.857 12:47:50 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:51.857 12:47:50 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:51.857 12:47:50 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:51.857 12:47:50 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:51.857 12:47:50 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:51.857 12:47:50 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:51.857 12:47:50 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:51.857 12:47:50 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:51.857 12:47:50 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:51.857 12:47:50 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:51.857 12:47:50 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:51.857 12:47:50 -- 
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:51.857 12:47:50 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:51.857 12:47:50 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:51.857 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:51.857 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.203 ms 00:19:51.857 00:19:51.857 --- 10.0.0.2 ping statistics --- 00:19:51.858 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:51.858 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:19:51.858 12:47:50 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:51.858 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:51.858 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.128 ms 00:19:51.858 00:19:51.858 --- 10.0.0.1 ping statistics --- 00:19:51.858 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:51.858 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:19:51.858 12:47:50 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:51.858 12:47:50 -- nvmf/common.sh@411 -- # return 0 00:19:51.858 12:47:50 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:19:51.858 12:47:50 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:51.858 12:47:50 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:19:51.858 12:47:50 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:19:51.858 12:47:50 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:51.858 12:47:50 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:19:51.858 12:47:50 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:19:51.858 12:47:50 -- host/auth.sh@78 -- # nvmfappstart -L nvme_auth 00:19:51.858 12:47:50 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:51.858 12:47:50 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:51.858 12:47:50 -- common/autotest_common.sh@10 -- # set +x 00:19:51.858 12:47:50 -- nvmf/common.sh@470 -- # nvmfpid=1247449 00:19:51.858 12:47:50 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:19:51.858 12:47:50 -- nvmf/common.sh@471 -- # waitforlisten 1247449 00:19:51.858 12:47:50 -- common/autotest_common.sh@817 -- # '[' -z 1247449 ']' 00:19:51.858 12:47:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:51.858 12:47:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:51.858 12:47:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
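For the auth test the harness rebuilt the same two-port rig used for the identify test earlier, then started the SPDK target inside the namespace. Condensed below; the until-loop is a hypothetical stand-in for the harness's waitforlisten helper, and the relative paths assume the workspace's spdk checkout:

  # One E810 port per side: cvl_0_0 lives in a netns (10.0.0.2), cvl_0_1
  # stays in the root namespace (10.0.0.1), so NVMe/TCP traffic crosses
  # the physical link rather than loopback.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # Launch the target in the namespace and wait for its RPC socket.
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
  done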
00:19:51.858 12:47:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:51.858 12:47:50 -- common/autotest_common.sh@10 -- # set +x 00:19:53.232 12:47:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:53.232 12:47:51 -- common/autotest_common.sh@850 -- # return 0 00:19:53.232 12:47:51 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:53.232 12:47:51 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:53.232 12:47:51 -- common/autotest_common.sh@10 -- # set +x 00:19:53.232 12:47:51 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:53.232 12:47:51 -- host/auth.sh@79 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:19:53.232 12:47:51 -- host/auth.sh@81 -- # gen_key null 32 00:19:53.232 12:47:51 -- host/auth.sh@53 -- # local digest len file key 00:19:53.232 12:47:51 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:53.232 12:47:51 -- host/auth.sh@54 -- # local -A digests 00:19:53.232 12:47:51 -- host/auth.sh@56 -- # digest=null 00:19:53.232 12:47:51 -- host/auth.sh@56 -- # len=32 00:19:53.232 12:47:51 -- host/auth.sh@57 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:53.232 12:47:51 -- host/auth.sh@57 -- # key=a274f8f43af8f04e36a8013da5a0f9b2 00:19:53.232 12:47:51 -- host/auth.sh@58 -- # mktemp -t spdk.key-null.XXX 00:19:53.232 12:47:51 -- host/auth.sh@58 -- # file=/tmp/spdk.key-null.0BX 00:19:53.232 12:47:51 -- host/auth.sh@59 -- # format_dhchap_key a274f8f43af8f04e36a8013da5a0f9b2 0 00:19:53.232 12:47:51 -- nvmf/common.sh@708 -- # format_key DHHC-1 a274f8f43af8f04e36a8013da5a0f9b2 0 00:19:53.232 12:47:51 -- nvmf/common.sh@691 -- # local prefix key digest 00:19:53.232 12:47:51 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:19:53.232 12:47:51 -- nvmf/common.sh@693 -- # key=a274f8f43af8f04e36a8013da5a0f9b2 00:19:53.232 12:47:51 -- nvmf/common.sh@693 -- # digest=0 00:19:53.232 12:47:51 -- nvmf/common.sh@694 -- # python - 00:19:53.232 12:47:51 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-null.0BX 00:19:53.232 12:47:51 -- host/auth.sh@62 -- # echo /tmp/spdk.key-null.0BX 00:19:53.232 12:47:51 -- host/auth.sh@81 -- # keys[0]=/tmp/spdk.key-null.0BX 00:19:53.232 12:47:51 -- host/auth.sh@82 -- # gen_key null 48 00:19:53.232 12:47:51 -- host/auth.sh@53 -- # local digest len file key 00:19:53.232 12:47:51 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:53.232 12:47:51 -- host/auth.sh@54 -- # local -A digests 00:19:53.232 12:47:51 -- host/auth.sh@56 -- # digest=null 00:19:53.232 12:47:51 -- host/auth.sh@56 -- # len=48 00:19:53.232 12:47:51 -- host/auth.sh@57 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:53.232 12:47:51 -- host/auth.sh@57 -- # key=c07dedc9c20b3ed05dce31581ca4195c9563684aacd67a25 00:19:53.232 12:47:51 -- host/auth.sh@58 -- # mktemp -t spdk.key-null.XXX 00:19:53.232 12:47:51 -- host/auth.sh@58 -- # file=/tmp/spdk.key-null.5Kh 00:19:53.232 12:47:51 -- host/auth.sh@59 -- # format_dhchap_key c07dedc9c20b3ed05dce31581ca4195c9563684aacd67a25 0 00:19:53.232 12:47:51 -- nvmf/common.sh@708 -- # format_key DHHC-1 c07dedc9c20b3ed05dce31581ca4195c9563684aacd67a25 0 00:19:53.232 12:47:51 -- nvmf/common.sh@691 -- # local prefix key digest 00:19:53.232 12:47:51 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:19:53.232 12:47:51 -- nvmf/common.sh@693 -- # key=c07dedc9c20b3ed05dce31581ca4195c9563684aacd67a25 00:19:53.232 12:47:51 -- nvmf/common.sh@693 -- # 
digest=0 00:19:53.232 12:47:51 -- nvmf/common.sh@694 -- # python - 00:19:53.232 12:47:52 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-null.5Kh 00:19:53.232 12:47:52 -- host/auth.sh@62 -- # echo /tmp/spdk.key-null.5Kh 00:19:53.232 12:47:52 -- host/auth.sh@82 -- # keys[1]=/tmp/spdk.key-null.5Kh 00:19:53.232 12:47:52 -- host/auth.sh@83 -- # gen_key sha256 32 00:19:53.232 12:47:52 -- host/auth.sh@53 -- # local digest len file key 00:19:53.232 12:47:52 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:53.232 12:47:52 -- host/auth.sh@54 -- # local -A digests 00:19:53.232 12:47:52 -- host/auth.sh@56 -- # digest=sha256 00:19:53.232 12:47:52 -- host/auth.sh@56 -- # len=32 00:19:53.232 12:47:52 -- host/auth.sh@57 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:53.232 12:47:52 -- host/auth.sh@57 -- # key=4428790190d0632f026a53acac2845a5 00:19:53.232 12:47:52 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha256.XXX 00:19:53.232 12:47:52 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha256.H5m 00:19:53.232 12:47:52 -- host/auth.sh@59 -- # format_dhchap_key 4428790190d0632f026a53acac2845a5 1 00:19:53.232 12:47:52 -- nvmf/common.sh@708 -- # format_key DHHC-1 4428790190d0632f026a53acac2845a5 1 00:19:53.232 12:47:52 -- nvmf/common.sh@691 -- # local prefix key digest 00:19:53.232 12:47:52 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:19:53.232 12:47:52 -- nvmf/common.sh@693 -- # key=4428790190d0632f026a53acac2845a5 00:19:53.232 12:47:52 -- nvmf/common.sh@693 -- # digest=1 00:19:53.232 12:47:52 -- nvmf/common.sh@694 -- # python - 00:19:53.232 12:47:52 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha256.H5m 00:19:53.232 12:47:52 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha256.H5m 00:19:53.233 12:47:52 -- host/auth.sh@83 -- # keys[2]=/tmp/spdk.key-sha256.H5m 00:19:53.233 12:47:52 -- host/auth.sh@84 -- # gen_key sha384 48 00:19:53.233 12:47:52 -- host/auth.sh@53 -- # local digest len file key 00:19:53.233 12:47:52 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:53.233 12:47:52 -- host/auth.sh@54 -- # local -A digests 00:19:53.233 12:47:52 -- host/auth.sh@56 -- # digest=sha384 00:19:53.233 12:47:52 -- host/auth.sh@56 -- # len=48 00:19:53.233 12:47:52 -- host/auth.sh@57 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:53.233 12:47:52 -- host/auth.sh@57 -- # key=cab3fde1b506ec688c0ad5608cee07343025bcf279e93a9c 00:19:53.233 12:47:52 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha384.XXX 00:19:53.233 12:47:52 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha384.Ljw 00:19:53.233 12:47:52 -- host/auth.sh@59 -- # format_dhchap_key cab3fde1b506ec688c0ad5608cee07343025bcf279e93a9c 2 00:19:53.233 12:47:52 -- nvmf/common.sh@708 -- # format_key DHHC-1 cab3fde1b506ec688c0ad5608cee07343025bcf279e93a9c 2 00:19:53.233 12:47:52 -- nvmf/common.sh@691 -- # local prefix key digest 00:19:53.233 12:47:52 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:19:53.233 12:47:52 -- nvmf/common.sh@693 -- # key=cab3fde1b506ec688c0ad5608cee07343025bcf279e93a9c 00:19:53.233 12:47:52 -- nvmf/common.sh@693 -- # digest=2 00:19:53.233 12:47:52 -- nvmf/common.sh@694 -- # python - 00:19:53.233 12:47:52 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha384.Ljw 00:19:53.233 12:47:52 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha384.Ljw 00:19:53.233 12:47:52 -- host/auth.sh@84 -- # keys[3]=/tmp/spdk.key-sha384.Ljw 00:19:53.233 12:47:52 -- host/auth.sh@85 -- # gen_key sha512 64 00:19:53.233 12:47:52 -- host/auth.sh@53 -- # local digest len file key 00:19:53.233 12:47:52 -- 
host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:53.233 12:47:52 -- host/auth.sh@54 -- # local -A digests 00:19:53.233 12:47:52 -- host/auth.sh@56 -- # digest=sha512 00:19:53.233 12:47:52 -- host/auth.sh@56 -- # len=64 00:19:53.233 12:47:52 -- host/auth.sh@57 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:53.233 12:47:52 -- host/auth.sh@57 -- # key=cf920045cb8cc1028c9ed404040b2ad465304376ee9a542235c58c723ce34c04 00:19:53.233 12:47:52 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha512.XXX 00:19:53.233 12:47:52 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha512.PQw 00:19:53.233 12:47:52 -- host/auth.sh@59 -- # format_dhchap_key cf920045cb8cc1028c9ed404040b2ad465304376ee9a542235c58c723ce34c04 3 00:19:53.233 12:47:52 -- nvmf/common.sh@708 -- # format_key DHHC-1 cf920045cb8cc1028c9ed404040b2ad465304376ee9a542235c58c723ce34c04 3 00:19:53.233 12:47:52 -- nvmf/common.sh@691 -- # local prefix key digest 00:19:53.233 12:47:52 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:19:53.233 12:47:52 -- nvmf/common.sh@693 -- # key=cf920045cb8cc1028c9ed404040b2ad465304376ee9a542235c58c723ce34c04 00:19:53.233 12:47:52 -- nvmf/common.sh@693 -- # digest=3 00:19:53.233 12:47:52 -- nvmf/common.sh@694 -- # python - 00:19:53.233 12:47:52 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha512.PQw 00:19:53.233 12:47:52 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha512.PQw 00:19:53.233 12:47:52 -- host/auth.sh@85 -- # keys[4]=/tmp/spdk.key-sha512.PQw 00:19:53.233 12:47:52 -- host/auth.sh@87 -- # waitforlisten 1247449 00:19:53.233 12:47:52 -- common/autotest_common.sh@817 -- # '[' -z 1247449 ']' 00:19:53.233 12:47:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:53.233 12:47:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:53.233 12:47:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:53.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
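The gen_key helper traced above pulls len/2 random bytes with xxd and then shells out to python to wrap the hex string in the DH-HMAC-CHAP on-wire key format. A rough stand-in for that python step (an illustration, not SPDK's exact helper): the middle field is the digest id (00 = unhashed secret, 01/02/03 = SHA-256/384/512), and the base64 payload is the ASCII secret followed by its little-endian CRC-32, which is why the 32-hex-character keys above produce 48-character payloads (36 bytes: 32 secret + 4 checksum).

key=$(xxd -p -c0 -l 16 /dev/urandom)   # 32 hex chars, as in 'gen_key null 32'
python3 - "$key" 0 <<'EOF'
import base64, sys, zlib
secret, digest = sys.argv[1].encode(), int(sys.argv[2])
crc = zlib.crc32(secret).to_bytes(4, "little")   # checksum appended before encoding
print(f"DHHC-1:{digest:02x}:{base64.b64encode(secret + crc).decode()}:")
EOF

The resulting string is written to a mktemp file and chmod 0600, then each file is registered over RPC with keyring_file_add_key as key0..key4, exactly as the trace shows.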
00:19:53.233 12:47:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:53.233 12:47:52 -- common/autotest_common.sh@10 -- # set +x 00:19:53.490 12:47:52 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:53.490 12:47:52 -- common/autotest_common.sh@850 -- # return 0 00:19:53.490 12:47:52 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:19:53.490 12:47:52 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.0BX 00:19:53.490 12:47:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:53.490 12:47:52 -- common/autotest_common.sh@10 -- # set +x 00:19:53.490 12:47:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:53.490 12:47:52 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:19:53.490 12:47:52 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.5Kh 00:19:53.490 12:47:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:53.490 12:47:52 -- common/autotest_common.sh@10 -- # set +x 00:19:53.490 12:47:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:53.490 12:47:52 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:19:53.490 12:47:52 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.H5m 00:19:53.490 12:47:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:53.490 12:47:52 -- common/autotest_common.sh@10 -- # set +x 00:19:53.490 12:47:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:53.490 12:47:52 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:19:53.490 12:47:52 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.Ljw 00:19:53.490 12:47:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:53.490 12:47:52 -- common/autotest_common.sh@10 -- # set +x 00:19:53.490 12:47:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:53.490 12:47:52 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:19:53.490 12:47:52 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.PQw 00:19:53.490 12:47:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:53.490 12:47:52 -- common/autotest_common.sh@10 -- # set +x 00:19:53.490 12:47:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:53.490 12:47:52 -- host/auth.sh@92 -- # nvmet_auth_init 00:19:53.490 12:47:52 -- host/auth.sh@35 -- # get_main_ns_ip 00:19:53.490 12:47:52 -- nvmf/common.sh@717 -- # local ip 00:19:53.490 12:47:52 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:53.490 12:47:52 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:53.490 12:47:52 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:53.491 12:47:52 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:53.491 12:47:52 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:53.491 12:47:52 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:53.491 12:47:52 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:53.491 12:47:52 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:53.491 12:47:52 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:53.491 12:47:52 -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:19:53.491 12:47:52 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:19:53.491 12:47:52 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:19:53.491 12:47:52 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:19:53.491 12:47:52 -- nvmf/common.sh@625 -- # 
kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:19:53.491 12:47:52 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:19:53.491 12:47:52 -- nvmf/common.sh@628 -- # local block nvme 00:19:53.491 12:47:52 -- nvmf/common.sh@630 -- # [[ ! -e /sys/module/nvmet ]] 00:19:53.491 12:47:52 -- nvmf/common.sh@631 -- # modprobe nvmet 00:19:53.491 12:47:52 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:19:53.491 12:47:52 -- nvmf/common.sh@636 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:19:54.863 Waiting for block devices as requested 00:19:54.863 0000:81:00.0 (8086 0a54): vfio-pci -> nvme 00:19:54.863 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:19:55.120 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:19:55.120 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:19:55.120 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:19:55.379 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:19:55.379 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:19:55.379 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:19:55.379 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:19:55.636 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:19:55.636 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:19:55.636 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:19:55.636 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:19:55.894 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:19:55.894 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:19:55.894 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:19:55.894 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:19:56.461 12:47:55 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:19:56.461 12:47:55 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:19:56.461 12:47:55 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:19:56.461 12:47:55 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:19:56.461 12:47:55 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:19:56.461 12:47:55 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:19:56.461 12:47:55 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:19:56.461 12:47:55 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:19:56.461 12:47:55 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:19:56.461 No valid GPT data, bailing 00:19:56.461 12:47:55 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:19:56.461 12:47:55 -- scripts/common.sh@391 -- # pt= 00:19:56.461 12:47:55 -- scripts/common.sh@392 -- # return 1 00:19:56.461 12:47:55 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:19:56.461 12:47:55 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme0n1 ]] 00:19:56.461 12:47:55 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:19:56.461 12:47:55 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:19:56.461 12:47:55 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:19:56.461 12:47:55 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:19:56.461 12:47:55 -- nvmf/common.sh@656 -- # echo 1 00:19:56.461 12:47:55 -- nvmf/common.sh@657 -- # echo /dev/nvme0n1 00:19:56.461 12:47:55 -- nvmf/common.sh@658 -- # echo 1 00:19:56.461 12:47:55 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:19:56.461 12:47:55 -- nvmf/common.sh@661 -- # echo tcp 00:19:56.461 12:47:55 -- 
nvmf/common.sh@662 -- # echo 4420 00:19:56.461 12:47:55 -- nvmf/common.sh@663 -- # echo ipv4 00:19:56.461 12:47:55 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:19:56.461 12:47:55 -- nvmf/common.sh@669 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -a 10.0.0.1 -t tcp -s 4420 00:19:56.461 00:19:56.461 Discovery Log Number of Records 2, Generation counter 2 00:19:56.461 =====Discovery Log Entry 0====== 00:19:56.461 trtype: tcp 00:19:56.461 adrfam: ipv4 00:19:56.461 subtype: current discovery subsystem 00:19:56.461 treq: not specified, sq flow control disable supported 00:19:56.461 portid: 1 00:19:56.461 trsvcid: 4420 00:19:56.461 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:19:56.461 traddr: 10.0.0.1 00:19:56.461 eflags: none 00:19:56.461 sectype: none 00:19:56.461 =====Discovery Log Entry 1====== 00:19:56.461 trtype: tcp 00:19:56.461 adrfam: ipv4 00:19:56.461 subtype: nvme subsystem 00:19:56.461 treq: not specified, sq flow control disable supported 00:19:56.461 portid: 1 00:19:56.461 trsvcid: 4420 00:19:56.461 subnqn: nqn.2024-02.io.spdk:cnode0 00:19:56.461 traddr: 10.0.0.1 00:19:56.461 eflags: none 00:19:56.461 sectype: none 00:19:56.461 12:47:55 -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:19:56.461 12:47:55 -- host/auth.sh@37 -- # echo 0 00:19:56.461 12:47:55 -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:19:56.461 12:47:55 -- host/auth.sh@95 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:19:56.461 12:47:55 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:56.461 12:47:55 -- host/auth.sh@44 -- # digest=sha256 00:19:56.461 12:47:55 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:56.461 12:47:55 -- host/auth.sh@44 -- # keyid=1 00:19:56.461 12:47:55 -- host/auth.sh@45 -- # key=DHHC-1:00:YzA3ZGVkYzljMjBiM2VkMDVkY2UzMTU4MWNhNDE5NWM5NTYzNjg0YWFjZDY3YTI1H8zpJw==: 00:19:56.461 12:47:55 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:56.461 12:47:55 -- host/auth.sh@48 -- # echo ffdhe2048 00:19:56.461 12:47:55 -- host/auth.sh@49 -- # echo DHHC-1:00:YzA3ZGVkYzljMjBiM2VkMDVkY2UzMTU4MWNhNDE5NWM5NTYzNjg0YWFjZDY3YTI1H8zpJw==: 00:19:56.461 12:47:55 -- host/auth.sh@100 -- # IFS=, 00:19:56.461 12:47:55 -- host/auth.sh@101 -- # printf %s sha256,sha384,sha512 00:19:56.461 12:47:55 -- host/auth.sh@100 -- # IFS=, 00:19:56.461 12:47:55 -- host/auth.sh@101 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:56.461 12:47:55 -- host/auth.sh@100 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:19:56.461 12:47:55 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:56.461 12:47:55 -- host/auth.sh@68 -- # digest=sha256,sha384,sha512 00:19:56.461 12:47:55 -- host/auth.sh@68 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:56.461 12:47:55 -- host/auth.sh@68 -- # keyid=1 00:19:56.461 12:47:55 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:56.461 12:47:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:56.461 12:47:55 -- common/autotest_common.sh@10 -- # set +x 00:19:56.719 12:47:55 -- 
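Condensing what configure_kernel_target and nvmet_auth_init just did: /dev/nvme0n1 is exported through the kernel nvmet driver on a TCP port, and the subsystem is locked down to the single host NQN, which is why the discovery log above lists exactly the discovery subsystem plus nqn.2024-02.io.spdk:cnode0. A sketch of the configfs writes; xtrace does not show redirection targets, so the attribute paths below are inferred from the standard nvmet layout:

cfs=/sys/kernel/config/nvmet
mkdir $cfs/subsystems/nqn.2024-02.io.spdk:cnode0
mkdir $cfs/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
mkdir $cfs/ports/1
echo /dev/nvme0n1 > $cfs/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1/device_path
echo 1 > $cfs/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1/enable
echo 10.0.0.1 > $cfs/ports/1/addr_traddr   # plus addr_trtype=tcp, addr_trsvcid=4420, addr_adrfam=ipv4
ln -s $cfs/subsystems/nqn.2024-02.io.spdk:cnode0 $cfs/ports/1/subsystems/
mkdir $cfs/hosts/nqn.2024-02.io.spdk:host0
echo 0 > $cfs/subsystems/nqn.2024-02.io.spdk:cnode0/attr_allow_any_host
ln -s $cfs/hosts/nqn.2024-02.io.spdk:host0 $cfs/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/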
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:56.719 12:47:55 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:56.719 12:47:55 -- nvmf/common.sh@717 -- # local ip 00:19:56.719 12:47:55 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:56.719 12:47:55 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:56.719 12:47:55 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:56.719 12:47:55 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:56.719 12:47:55 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:56.719 12:47:55 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:56.719 12:47:55 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:56.719 12:47:55 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:56.719 12:47:55 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:56.720 12:47:55 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:19:56.720 12:47:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:56.720 12:47:55 -- common/autotest_common.sh@10 -- # set +x 00:19:56.720 nvme0n1 00:19:56.720 12:47:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:56.720 12:47:55 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:56.720 12:47:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:56.720 12:47:55 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:56.720 12:47:55 -- common/autotest_common.sh@10 -- # set +x 00:19:56.720 12:47:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:56.720 12:47:55 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.720 12:47:55 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:56.720 12:47:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:56.720 12:47:55 -- common/autotest_common.sh@10 -- # set +x 00:19:56.720 12:47:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:56.720 12:47:55 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:19:56.720 12:47:55 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:19:56.720 12:47:55 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:56.720 12:47:55 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:19:56.720 12:47:55 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:56.720 12:47:55 -- host/auth.sh@44 -- # digest=sha256 00:19:56.720 12:47:55 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:56.720 12:47:55 -- host/auth.sh@44 -- # keyid=0 00:19:56.720 12:47:55 -- host/auth.sh@45 -- # key=DHHC-1:00:YTI3NGY4ZjQzYWY4ZjA0ZTM2YTgwMTNkYTVhMGY5YjK11WGv: 00:19:56.720 12:47:55 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:56.720 12:47:55 -- host/auth.sh@48 -- # echo ffdhe2048 00:19:56.720 12:47:55 -- host/auth.sh@49 -- # echo DHHC-1:00:YTI3NGY4ZjQzYWY4ZjA0ZTM2YTgwMTNkYTVhMGY5YjK11WGv: 00:19:56.720 12:47:55 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 0 00:19:56.720 12:47:55 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:56.720 12:47:55 -- host/auth.sh@68 -- # digest=sha256 00:19:56.720 12:47:55 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:19:56.720 12:47:55 -- host/auth.sh@68 -- # keyid=0 00:19:56.720 12:47:55 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:56.720 12:47:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:56.720 12:47:55 -- common/autotest_common.sh@10 -- # set +x 00:19:56.720 12:47:55 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:56.720 12:47:55 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:56.720 12:47:55 -- nvmf/common.sh@717 -- # local ip 00:19:56.720 12:47:55 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:56.720 12:47:55 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:56.720 12:47:55 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:56.720 12:47:55 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:56.720 12:47:55 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:56.720 12:47:55 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:56.720 12:47:55 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:56.720 12:47:55 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:56.720 12:47:55 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:56.720 12:47:55 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:19:56.720 12:47:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:56.720 12:47:55 -- common/autotest_common.sh@10 -- # set +x 00:19:56.978 nvme0n1 00:19:56.978 12:47:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:56.978 12:47:55 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:56.978 12:47:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:56.978 12:47:55 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:56.978 12:47:55 -- common/autotest_common.sh@10 -- # set +x 00:19:56.978 12:47:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:56.978 12:47:55 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.978 12:47:55 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:56.978 12:47:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:56.978 12:47:55 -- common/autotest_common.sh@10 -- # set +x 00:19:56.978 12:47:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:56.978 12:47:55 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:56.978 12:47:55 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:19:56.978 12:47:55 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:56.978 12:47:55 -- host/auth.sh@44 -- # digest=sha256 00:19:56.978 12:47:55 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:56.978 12:47:55 -- host/auth.sh@44 -- # keyid=1 00:19:56.978 12:47:55 -- host/auth.sh@45 -- # key=DHHC-1:00:YzA3ZGVkYzljMjBiM2VkMDVkY2UzMTU4MWNhNDE5NWM5NTYzNjg0YWFjZDY3YTI1H8zpJw==: 00:19:56.978 12:47:55 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:56.978 12:47:55 -- host/auth.sh@48 -- # echo ffdhe2048 00:19:56.978 12:47:55 -- host/auth.sh@49 -- # echo DHHC-1:00:YzA3ZGVkYzljMjBiM2VkMDVkY2UzMTU4MWNhNDE5NWM5NTYzNjg0YWFjZDY3YTI1H8zpJw==: 00:19:56.978 12:47:55 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 1 00:19:56.978 12:47:55 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:56.978 12:47:55 -- host/auth.sh@68 -- # digest=sha256 00:19:56.978 12:47:55 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:19:56.978 12:47:55 -- host/auth.sh@68 -- # keyid=1 00:19:56.978 12:47:55 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:56.978 12:47:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:56.978 12:47:55 -- common/autotest_common.sh@10 -- # set +x 00:19:56.978 12:47:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:56.978 12:47:55 -- host/auth.sh@70 -- # get_main_ns_ip 
00:19:56.978 12:47:55 -- nvmf/common.sh@717 -- # local ip 00:19:56.978 12:47:55 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:56.978 12:47:55 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:56.978 12:47:55 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:56.978 12:47:55 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:56.978 12:47:55 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:56.978 12:47:55 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:56.978 12:47:55 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:56.978 12:47:55 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:56.978 12:47:55 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:56.978 12:47:55 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:19:56.978 12:47:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:56.978 12:47:55 -- common/autotest_common.sh@10 -- # set +x 00:19:57.234 nvme0n1 00:19:57.234 12:47:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:57.234 12:47:56 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:57.234 12:47:56 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:57.234 12:47:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:57.234 12:47:56 -- common/autotest_common.sh@10 -- # set +x 00:19:57.234 12:47:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:57.234 12:47:56 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:57.234 12:47:56 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:57.234 12:47:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:57.234 12:47:56 -- common/autotest_common.sh@10 -- # set +x 00:19:57.234 12:47:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:57.234 12:47:56 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:57.234 12:47:56 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:19:57.234 12:47:56 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:57.234 12:47:56 -- host/auth.sh@44 -- # digest=sha256 00:19:57.234 12:47:56 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:57.234 12:47:56 -- host/auth.sh@44 -- # keyid=2 00:19:57.234 12:47:56 -- host/auth.sh@45 -- # key=DHHC-1:01:NDQyODc5MDE5MGQwNjMyZjAyNmE1M2FjYWMyODQ1YTVirL6m: 00:19:57.234 12:47:56 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:57.234 12:47:56 -- host/auth.sh@48 -- # echo ffdhe2048 00:19:57.234 12:47:56 -- host/auth.sh@49 -- # echo DHHC-1:01:NDQyODc5MDE5MGQwNjMyZjAyNmE1M2FjYWMyODQ1YTVirL6m: 00:19:57.234 12:47:56 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 2 00:19:57.234 12:47:56 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:57.234 12:47:56 -- host/auth.sh@68 -- # digest=sha256 00:19:57.234 12:47:56 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:19:57.234 12:47:56 -- host/auth.sh@68 -- # keyid=2 00:19:57.234 12:47:56 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:57.234 12:47:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:57.234 12:47:56 -- common/autotest_common.sh@10 -- # set +x 00:19:57.234 12:47:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:57.234 12:47:56 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:57.234 12:47:56 -- nvmf/common.sh@717 -- # local ip 00:19:57.234 12:47:56 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:57.234 12:47:56 -- nvmf/common.sh@718 
-- # local -A ip_candidates 00:19:57.234 12:47:56 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:57.234 12:47:56 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:57.234 12:47:56 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:57.234 12:47:56 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:57.234 12:47:56 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:57.234 12:47:56 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:57.234 12:47:56 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:57.234 12:47:56 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:57.234 12:47:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:57.234 12:47:56 -- common/autotest_common.sh@10 -- # set +x 00:19:57.492 nvme0n1 00:19:57.492 12:47:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:57.492 12:47:56 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:57.492 12:47:56 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:57.492 12:47:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:57.492 12:47:56 -- common/autotest_common.sh@10 -- # set +x 00:19:57.492 12:47:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:57.492 12:47:56 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:57.492 12:47:56 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:57.492 12:47:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:57.492 12:47:56 -- common/autotest_common.sh@10 -- # set +x 00:19:57.492 12:47:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:57.492 12:47:56 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:57.492 12:47:56 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:19:57.492 12:47:56 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:57.493 12:47:56 -- host/auth.sh@44 -- # digest=sha256 00:19:57.493 12:47:56 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:57.493 12:47:56 -- host/auth.sh@44 -- # keyid=3 00:19:57.493 12:47:56 -- host/auth.sh@45 -- # key=DHHC-1:02:Y2FiM2ZkZTFiNTA2ZWM2ODhjMGFkNTYwOGNlZTA3MzQzMDI1YmNmMjc5ZTkzYTlji3mYsw==: 00:19:57.493 12:47:56 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:57.493 12:47:56 -- host/auth.sh@48 -- # echo ffdhe2048 00:19:57.493 12:47:56 -- host/auth.sh@49 -- # echo DHHC-1:02:Y2FiM2ZkZTFiNTA2ZWM2ODhjMGFkNTYwOGNlZTA3MzQzMDI1YmNmMjc5ZTkzYTlji3mYsw==: 00:19:57.493 12:47:56 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 3 00:19:57.493 12:47:56 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:57.493 12:47:56 -- host/auth.sh@68 -- # digest=sha256 00:19:57.493 12:47:56 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:19:57.493 12:47:56 -- host/auth.sh@68 -- # keyid=3 00:19:57.493 12:47:56 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:57.493 12:47:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:57.493 12:47:56 -- common/autotest_common.sh@10 -- # set +x 00:19:57.493 12:47:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:57.493 12:47:56 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:57.493 12:47:56 -- nvmf/common.sh@717 -- # local ip 00:19:57.493 12:47:56 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:57.493 12:47:56 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:57.493 12:47:56 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 
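Each nvmet_auth_set_key call in this loop provisions the target half of one digest/dhgroup/key combination; the three quoted echo lines in its trace are writes into the host's DH-CHAP attributes. A sketch, with attribute names taken from the kernel nvmet configfs auth interface (the redirections themselves are hidden by xtrace, so treat the exact paths as an assumption):

host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 'hmac(sha256)' > $host/dhchap_hash       # digest to negotiate
echo ffdhe2048 > $host/dhchap_dhgroup         # DH group for the exchange
echo 'DHHC-1:...' > $host/dhchap_key          # placeholder; the real value is one of keys[0..4]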
00:19:57.493 12:47:56 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:57.493 12:47:56 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:57.493 12:47:56 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:57.493 12:47:56 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:57.493 12:47:56 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:57.493 12:47:56 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:57.493 12:47:56 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:19:57.493 12:47:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:57.493 12:47:56 -- common/autotest_common.sh@10 -- # set +x 00:19:57.751 nvme0n1 00:19:57.751 12:47:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:57.751 12:47:56 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:57.751 12:47:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:57.751 12:47:56 -- common/autotest_common.sh@10 -- # set +x 00:19:57.751 12:47:56 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:57.751 12:47:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:57.751 12:47:56 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:57.751 12:47:56 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:57.751 12:47:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:57.751 12:47:56 -- common/autotest_common.sh@10 -- # set +x 00:19:57.751 12:47:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:57.751 12:47:56 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:57.751 12:47:56 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:19:57.751 12:47:56 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:57.751 12:47:56 -- host/auth.sh@44 -- # digest=sha256 00:19:57.751 12:47:56 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:57.751 12:47:56 -- host/auth.sh@44 -- # keyid=4 00:19:57.751 12:47:56 -- host/auth.sh@45 -- # key=DHHC-1:03:Y2Y5MjAwNDVjYjhjYzEwMjhjOWVkNDA0MDQwYjJhZDQ2NTMwNDM3NmVlOWE1NDIyMzVjNThjNzIzY2UzNGMwNOLe90I=: 00:19:57.751 12:47:56 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:57.751 12:47:56 -- host/auth.sh@48 -- # echo ffdhe2048 00:19:57.751 12:47:56 -- host/auth.sh@49 -- # echo DHHC-1:03:Y2Y5MjAwNDVjYjhjYzEwMjhjOWVkNDA0MDQwYjJhZDQ2NTMwNDM3NmVlOWE1NDIyMzVjNThjNzIzY2UzNGMwNOLe90I=: 00:19:57.751 12:47:56 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 4 00:19:57.751 12:47:56 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:57.751 12:47:56 -- host/auth.sh@68 -- # digest=sha256 00:19:57.751 12:47:56 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:19:57.751 12:47:56 -- host/auth.sh@68 -- # keyid=4 00:19:57.751 12:47:56 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:57.751 12:47:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:57.751 12:47:56 -- common/autotest_common.sh@10 -- # set +x 00:19:57.751 12:47:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:57.751 12:47:56 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:57.752 12:47:56 -- nvmf/common.sh@717 -- # local ip 00:19:57.752 12:47:56 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:57.752 12:47:56 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:57.752 12:47:56 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:57.752 12:47:56 -- nvmf/common.sh@721 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:57.752 12:47:56 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:57.752 12:47:56 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:57.752 12:47:56 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:57.752 12:47:56 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:57.752 12:47:56 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:57.752 12:47:56 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:57.752 12:47:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:57.752 12:47:56 -- common/autotest_common.sh@10 -- # set +x 00:19:58.010 nvme0n1 00:19:58.010 12:47:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:58.010 12:47:56 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:58.010 12:47:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:58.010 12:47:56 -- common/autotest_common.sh@10 -- # set +x 00:19:58.010 12:47:56 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:58.010 12:47:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:58.010 12:47:56 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.010 12:47:56 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:58.010 12:47:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:58.010 12:47:56 -- common/autotest_common.sh@10 -- # set +x 00:19:58.010 12:47:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:58.010 12:47:56 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:19:58.010 12:47:56 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:58.010 12:47:56 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:19:58.010 12:47:56 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:58.010 12:47:56 -- host/auth.sh@44 -- # digest=sha256 00:19:58.010 12:47:56 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:58.010 12:47:56 -- host/auth.sh@44 -- # keyid=0 00:19:58.010 12:47:56 -- host/auth.sh@45 -- # key=DHHC-1:00:YTI3NGY4ZjQzYWY4ZjA0ZTM2YTgwMTNkYTVhMGY5YjK11WGv: 00:19:58.010 12:47:56 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:58.010 12:47:56 -- host/auth.sh@48 -- # echo ffdhe3072 00:19:58.010 12:47:56 -- host/auth.sh@49 -- # echo DHHC-1:00:YTI3NGY4ZjQzYWY4ZjA0ZTM2YTgwMTNkYTVhMGY5YjK11WGv: 00:19:58.010 12:47:56 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 0 00:19:58.010 12:47:56 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:58.010 12:47:56 -- host/auth.sh@68 -- # digest=sha256 00:19:58.010 12:47:56 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:19:58.010 12:47:56 -- host/auth.sh@68 -- # keyid=0 00:19:58.010 12:47:56 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:58.010 12:47:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:58.010 12:47:56 -- common/autotest_common.sh@10 -- # set +x 00:19:58.010 12:47:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:58.010 12:47:56 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:58.010 12:47:56 -- nvmf/common.sh@717 -- # local ip 00:19:58.010 12:47:56 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:58.010 12:47:56 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:58.010 12:47:56 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:58.010 12:47:56 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:58.010 12:47:56 -- nvmf/common.sh@723 -- # 
[[ -z tcp ]] 00:19:58.010 12:47:56 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:58.010 12:47:56 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:58.010 12:47:56 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:58.010 12:47:56 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:58.010 12:47:56 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:19:58.010 12:47:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:58.010 12:47:56 -- common/autotest_common.sh@10 -- # set +x 00:19:58.269 nvme0n1 00:19:58.269 12:47:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:58.269 12:47:57 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:58.269 12:47:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:58.269 12:47:57 -- common/autotest_common.sh@10 -- # set +x 00:19:58.269 12:47:57 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:58.269 12:47:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:58.269 12:47:57 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.269 12:47:57 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:58.269 12:47:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:58.269 12:47:57 -- common/autotest_common.sh@10 -- # set +x 00:19:58.269 12:47:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:58.269 12:47:57 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:58.269 12:47:57 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:19:58.269 12:47:57 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:58.269 12:47:57 -- host/auth.sh@44 -- # digest=sha256 00:19:58.269 12:47:57 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:58.269 12:47:57 -- host/auth.sh@44 -- # keyid=1 00:19:58.269 12:47:57 -- host/auth.sh@45 -- # key=DHHC-1:00:YzA3ZGVkYzljMjBiM2VkMDVkY2UzMTU4MWNhNDE5NWM5NTYzNjg0YWFjZDY3YTI1H8zpJw==: 00:19:58.269 12:47:57 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:58.269 12:47:57 -- host/auth.sh@48 -- # echo ffdhe3072 00:19:58.269 12:47:57 -- host/auth.sh@49 -- # echo DHHC-1:00:YzA3ZGVkYzljMjBiM2VkMDVkY2UzMTU4MWNhNDE5NWM5NTYzNjg0YWFjZDY3YTI1H8zpJw==: 00:19:58.269 12:47:57 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 1 00:19:58.269 12:47:57 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:58.269 12:47:57 -- host/auth.sh@68 -- # digest=sha256 00:19:58.269 12:47:57 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:19:58.269 12:47:57 -- host/auth.sh@68 -- # keyid=1 00:19:58.269 12:47:57 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:58.269 12:47:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:58.269 12:47:57 -- common/autotest_common.sh@10 -- # set +x 00:19:58.269 12:47:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:58.269 12:47:57 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:58.269 12:47:57 -- nvmf/common.sh@717 -- # local ip 00:19:58.269 12:47:57 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:58.269 12:47:57 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:58.269 12:47:57 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:58.269 12:47:57 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:58.269 12:47:57 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:58.269 12:47:57 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:58.269 12:47:57 -- 
nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:58.269 12:47:57 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:58.269 12:47:57 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:58.269 12:47:57 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:19:58.269 12:47:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:58.269 12:47:57 -- common/autotest_common.sh@10 -- # set +x 00:19:58.529 nvme0n1 00:19:58.529 12:47:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:58.529 12:47:57 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:58.529 12:47:57 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:58.529 12:47:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:58.529 12:47:57 -- common/autotest_common.sh@10 -- # set +x 00:19:58.529 12:47:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:58.529 12:47:57 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.529 12:47:57 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:58.529 12:47:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:58.529 12:47:57 -- common/autotest_common.sh@10 -- # set +x 00:19:58.529 12:47:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:58.529 12:47:57 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:58.529 12:47:57 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:19:58.529 12:47:57 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:58.529 12:47:57 -- host/auth.sh@44 -- # digest=sha256 00:19:58.529 12:47:57 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:58.529 12:47:57 -- host/auth.sh@44 -- # keyid=2 00:19:58.529 12:47:57 -- host/auth.sh@45 -- # key=DHHC-1:01:NDQyODc5MDE5MGQwNjMyZjAyNmE1M2FjYWMyODQ1YTVirL6m: 00:19:58.529 12:47:57 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:58.529 12:47:57 -- host/auth.sh@48 -- # echo ffdhe3072 00:19:58.529 12:47:57 -- host/auth.sh@49 -- # echo DHHC-1:01:NDQyODc5MDE5MGQwNjMyZjAyNmE1M2FjYWMyODQ1YTVirL6m: 00:19:58.529 12:47:57 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 2 00:19:58.529 12:47:57 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:58.529 12:47:57 -- host/auth.sh@68 -- # digest=sha256 00:19:58.529 12:47:57 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:19:58.529 12:47:57 -- host/auth.sh@68 -- # keyid=2 00:19:58.529 12:47:57 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:58.529 12:47:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:58.529 12:47:57 -- common/autotest_common.sh@10 -- # set +x 00:19:58.529 12:47:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:58.529 12:47:57 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:58.529 12:47:57 -- nvmf/common.sh@717 -- # local ip 00:19:58.529 12:47:57 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:58.529 12:47:57 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:58.529 12:47:57 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:58.529 12:47:57 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:58.529 12:47:57 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:58.529 12:47:57 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:58.529 12:47:57 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:58.529 12:47:57 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:58.529 12:47:57 -- nvmf/common.sh@731 -- # echo 
10.0.0.1 00:19:58.529 12:47:57 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:58.529 12:47:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:58.529 12:47:57 -- common/autotest_common.sh@10 -- # set +x 00:19:58.789 nvme0n1 00:19:58.789 12:47:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:58.789 12:47:57 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:58.789 12:47:57 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:58.789 12:47:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:58.789 12:47:57 -- common/autotest_common.sh@10 -- # set +x 00:19:58.789 12:47:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:58.789 12:47:57 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.789 12:47:57 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:58.789 12:47:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:58.789 12:47:57 -- common/autotest_common.sh@10 -- # set +x 00:19:58.789 12:47:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:58.789 12:47:57 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:58.789 12:47:57 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:19:58.789 12:47:57 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:58.789 12:47:57 -- host/auth.sh@44 -- # digest=sha256 00:19:58.789 12:47:57 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:58.789 12:47:57 -- host/auth.sh@44 -- # keyid=3 00:19:58.789 12:47:57 -- host/auth.sh@45 -- # key=DHHC-1:02:Y2FiM2ZkZTFiNTA2ZWM2ODhjMGFkNTYwOGNlZTA3MzQzMDI1YmNmMjc5ZTkzYTlji3mYsw==: 00:19:58.789 12:47:57 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:58.789 12:47:57 -- host/auth.sh@48 -- # echo ffdhe3072 00:19:58.789 12:47:57 -- host/auth.sh@49 -- # echo DHHC-1:02:Y2FiM2ZkZTFiNTA2ZWM2ODhjMGFkNTYwOGNlZTA3MzQzMDI1YmNmMjc5ZTkzYTlji3mYsw==: 00:19:58.789 12:47:57 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 3 00:19:58.789 12:47:57 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:58.789 12:47:57 -- host/auth.sh@68 -- # digest=sha256 00:19:58.789 12:47:57 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:19:58.789 12:47:57 -- host/auth.sh@68 -- # keyid=3 00:19:58.790 12:47:57 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:58.790 12:47:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:58.790 12:47:57 -- common/autotest_common.sh@10 -- # set +x 00:19:58.790 12:47:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:58.790 12:47:57 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:58.790 12:47:57 -- nvmf/common.sh@717 -- # local ip 00:19:58.790 12:47:57 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:58.790 12:47:57 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:58.790 12:47:57 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:58.790 12:47:57 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:58.790 12:47:57 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:58.790 12:47:57 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:58.790 12:47:57 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:58.790 12:47:57 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:58.790 12:47:57 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:58.790 12:47:57 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:19:58.790 12:47:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:58.790 12:47:57 -- common/autotest_common.sh@10 -- # set +x 00:19:59.061 nvme0n1 00:19:59.061 12:47:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:59.061 12:47:57 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:59.061 12:47:57 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:59.061 12:47:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:59.061 12:47:57 -- common/autotest_common.sh@10 -- # set +x 00:19:59.061 12:47:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:59.061 12:47:57 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:59.061 12:47:57 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:59.061 12:47:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:59.061 12:47:57 -- common/autotest_common.sh@10 -- # set +x 00:19:59.061 12:47:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:59.061 12:47:57 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:59.061 12:47:57 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:19:59.061 12:47:57 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:59.061 12:47:57 -- host/auth.sh@44 -- # digest=sha256 00:19:59.061 12:47:57 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:59.061 12:47:57 -- host/auth.sh@44 -- # keyid=4 00:19:59.061 12:47:57 -- host/auth.sh@45 -- # key=DHHC-1:03:Y2Y5MjAwNDVjYjhjYzEwMjhjOWVkNDA0MDQwYjJhZDQ2NTMwNDM3NmVlOWE1NDIyMzVjNThjNzIzY2UzNGMwNOLe90I=: 00:19:59.061 12:47:57 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:59.061 12:47:57 -- host/auth.sh@48 -- # echo ffdhe3072 00:19:59.061 12:47:57 -- host/auth.sh@49 -- # echo DHHC-1:03:Y2Y5MjAwNDVjYjhjYzEwMjhjOWVkNDA0MDQwYjJhZDQ2NTMwNDM3NmVlOWE1NDIyMzVjNThjNzIzY2UzNGMwNOLe90I=: 00:19:59.061 12:47:57 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 4 00:19:59.061 12:47:57 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:59.061 12:47:57 -- host/auth.sh@68 -- # digest=sha256 00:19:59.061 12:47:57 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:19:59.061 12:47:57 -- host/auth.sh@68 -- # keyid=4 00:19:59.061 12:47:57 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:59.061 12:47:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:59.061 12:47:57 -- common/autotest_common.sh@10 -- # set +x 00:19:59.061 12:47:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:59.061 12:47:57 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:59.061 12:47:57 -- nvmf/common.sh@717 -- # local ip 00:19:59.061 12:47:57 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:59.061 12:47:57 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:59.061 12:47:57 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:59.061 12:47:57 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:59.061 12:47:57 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:59.061 12:47:57 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:59.061 12:47:57 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:59.061 12:47:57 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:59.061 12:47:57 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:59.061 12:47:57 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 
--dhchap-key key4 00:19:59.061 12:47:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:59.061 12:47:57 -- common/autotest_common.sh@10 -- # set +x 00:19:59.319 nvme0n1 00:19:59.319 12:47:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:59.319 12:47:58 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:59.319 12:47:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:59.319 12:47:58 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:59.319 12:47:58 -- common/autotest_common.sh@10 -- # set +x 00:19:59.319 12:47:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:59.319 12:47:58 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:59.319 12:47:58 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:59.319 12:47:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:59.319 12:47:58 -- common/autotest_common.sh@10 -- # set +x 00:19:59.319 12:47:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:59.319 12:47:58 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:19:59.319 12:47:58 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:59.319 12:47:58 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:19:59.319 12:47:58 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:59.319 12:47:58 -- host/auth.sh@44 -- # digest=sha256 00:19:59.319 12:47:58 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:59.319 12:47:58 -- host/auth.sh@44 -- # keyid=0 00:19:59.319 12:47:58 -- host/auth.sh@45 -- # key=DHHC-1:00:YTI3NGY4ZjQzYWY4ZjA0ZTM2YTgwMTNkYTVhMGY5YjK11WGv: 00:19:59.319 12:47:58 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:59.319 12:47:58 -- host/auth.sh@48 -- # echo ffdhe4096 00:19:59.319 12:47:58 -- host/auth.sh@49 -- # echo DHHC-1:00:YTI3NGY4ZjQzYWY4ZjA0ZTM2YTgwMTNkYTVhMGY5YjK11WGv: 00:19:59.319 12:47:58 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 0 00:19:59.319 12:47:58 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:59.319 12:47:58 -- host/auth.sh@68 -- # digest=sha256 00:19:59.319 12:47:58 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:19:59.319 12:47:58 -- host/auth.sh@68 -- # keyid=0 00:19:59.319 12:47:58 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:59.319 12:47:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:59.319 12:47:58 -- common/autotest_common.sh@10 -- # set +x 00:19:59.319 12:47:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:59.319 12:47:58 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:59.319 12:47:58 -- nvmf/common.sh@717 -- # local ip 00:19:59.319 12:47:58 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:59.319 12:47:58 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:59.319 12:47:58 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:59.319 12:47:58 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:59.319 12:47:58 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:59.319 12:47:58 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:59.319 12:47:58 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:59.319 12:47:58 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:59.319 12:47:58 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:59.319 12:47:58 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:19:59.319 12:47:58 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:19:59.319 12:47:58 -- common/autotest_common.sh@10 -- # set +x 00:19:59.576 nvme0n1 00:19:59.576 12:47:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:59.576 12:47:58 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:59.576 12:47:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:59.576 12:47:58 -- common/autotest_common.sh@10 -- # set +x 00:19:59.576 12:47:58 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:59.576 12:47:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:59.576 12:47:58 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:59.576 12:47:58 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:59.576 12:47:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:59.576 12:47:58 -- common/autotest_common.sh@10 -- # set +x 00:19:59.834 12:47:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:59.834 12:47:58 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:59.834 12:47:58 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:19:59.834 12:47:58 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:59.834 12:47:58 -- host/auth.sh@44 -- # digest=sha256 00:19:59.834 12:47:58 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:59.834 12:47:58 -- host/auth.sh@44 -- # keyid=1 00:19:59.834 12:47:58 -- host/auth.sh@45 -- # key=DHHC-1:00:YzA3ZGVkYzljMjBiM2VkMDVkY2UzMTU4MWNhNDE5NWM5NTYzNjg0YWFjZDY3YTI1H8zpJw==: 00:19:59.834 12:47:58 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:59.834 12:47:58 -- host/auth.sh@48 -- # echo ffdhe4096 00:19:59.834 12:47:58 -- host/auth.sh@49 -- # echo DHHC-1:00:YzA3ZGVkYzljMjBiM2VkMDVkY2UzMTU4MWNhNDE5NWM5NTYzNjg0YWFjZDY3YTI1H8zpJw==: 00:19:59.834 12:47:58 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 1 00:19:59.834 12:47:58 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:59.834 12:47:58 -- host/auth.sh@68 -- # digest=sha256 00:19:59.834 12:47:58 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:19:59.834 12:47:58 -- host/auth.sh@68 -- # keyid=1 00:19:59.834 12:47:58 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:59.834 12:47:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:59.834 12:47:58 -- common/autotest_common.sh@10 -- # set +x 00:19:59.834 12:47:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:59.834 12:47:58 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:59.834 12:47:58 -- nvmf/common.sh@717 -- # local ip 00:19:59.834 12:47:58 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:59.834 12:47:58 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:59.834 12:47:58 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:59.834 12:47:58 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:59.834 12:47:58 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:59.834 12:47:58 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:59.834 12:47:58 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:59.834 12:47:58 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:59.834 12:47:58 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:59.834 12:47:58 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:19:59.834 12:47:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:59.834 12:47:58 -- common/autotest_common.sh@10 -- # set +x 00:20:00.092 nvme0n1 00:20:00.092 
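The initiator side of each iteration, connect_authenticate, mirrors the same combination over SPDK's RPC socket: it first restricts the bdev_nvme layer to one digest and one DH group, then attaches with the matching key slot. In rpc.py terms (same arguments as the rpc_cmd traces above):

scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1

The --dhchap-key names (key0..key4) refer to the keyring entries registered earlier with keyring_file_add_key, not to the raw key files.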
12:47:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:00.092 12:47:58 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:00.092 12:47:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:00.092 12:47:58 -- common/autotest_common.sh@10 -- # set +x 00:20:00.092 12:47:58 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:00.092 12:47:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:00.092 12:47:59 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:00.092 12:47:59 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:00.092 12:47:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:00.092 12:47:59 -- common/autotest_common.sh@10 -- # set +x 00:20:00.092 12:47:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:00.092 12:47:59 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:00.092 12:47:59 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:20:00.092 12:47:59 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:00.092 12:47:59 -- host/auth.sh@44 -- # digest=sha256 00:20:00.092 12:47:59 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:00.092 12:47:59 -- host/auth.sh@44 -- # keyid=2 00:20:00.092 12:47:59 -- host/auth.sh@45 -- # key=DHHC-1:01:NDQyODc5MDE5MGQwNjMyZjAyNmE1M2FjYWMyODQ1YTVirL6m: 00:20:00.092 12:47:59 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:20:00.092 12:47:59 -- host/auth.sh@48 -- # echo ffdhe4096 00:20:00.092 12:47:59 -- host/auth.sh@49 -- # echo DHHC-1:01:NDQyODc5MDE5MGQwNjMyZjAyNmE1M2FjYWMyODQ1YTVirL6m: 00:20:00.092 12:47:59 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 2 00:20:00.092 12:47:59 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:00.092 12:47:59 -- host/auth.sh@68 -- # digest=sha256 00:20:00.092 12:47:59 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:20:00.092 12:47:59 -- host/auth.sh@68 -- # keyid=2 00:20:00.092 12:47:59 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:00.092 12:47:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:00.092 12:47:59 -- common/autotest_common.sh@10 -- # set +x 00:20:00.092 12:47:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:00.092 12:47:59 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:00.092 12:47:59 -- nvmf/common.sh@717 -- # local ip 00:20:00.092 12:47:59 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:00.092 12:47:59 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:00.092 12:47:59 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:00.092 12:47:59 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:00.092 12:47:59 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:00.092 12:47:59 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:00.092 12:47:59 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:00.093 12:47:59 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:00.093 12:47:59 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:00.093 12:47:59 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:00.093 12:47:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:00.093 12:47:59 -- common/autotest_common.sh@10 -- # set +x 00:20:00.351 nvme0n1 00:20:00.351 12:47:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:00.351 12:47:59 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:00.351 12:47:59 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:20:00.351 12:47:59 -- common/autotest_common.sh@10 -- # set +x 00:20:00.351 12:47:59 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:00.351 12:47:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:00.351 12:47:59 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:00.351 12:47:59 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:00.351 12:47:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:00.351 12:47:59 -- common/autotest_common.sh@10 -- # set +x 00:20:00.351 12:47:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:00.351 12:47:59 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:00.351 12:47:59 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:20:00.351 12:47:59 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:00.351 12:47:59 -- host/auth.sh@44 -- # digest=sha256 00:20:00.351 12:47:59 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:00.351 12:47:59 -- host/auth.sh@44 -- # keyid=3 00:20:00.351 12:47:59 -- host/auth.sh@45 -- # key=DHHC-1:02:Y2FiM2ZkZTFiNTA2ZWM2ODhjMGFkNTYwOGNlZTA3MzQzMDI1YmNmMjc5ZTkzYTlji3mYsw==: 00:20:00.351 12:47:59 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:20:00.351 12:47:59 -- host/auth.sh@48 -- # echo ffdhe4096 00:20:00.351 12:47:59 -- host/auth.sh@49 -- # echo DHHC-1:02:Y2FiM2ZkZTFiNTA2ZWM2ODhjMGFkNTYwOGNlZTA3MzQzMDI1YmNmMjc5ZTkzYTlji3mYsw==: 00:20:00.351 12:47:59 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 3 00:20:00.351 12:47:59 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:00.351 12:47:59 -- host/auth.sh@68 -- # digest=sha256 00:20:00.351 12:47:59 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:20:00.351 12:47:59 -- host/auth.sh@68 -- # keyid=3 00:20:00.351 12:47:59 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:00.351 12:47:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:00.351 12:47:59 -- common/autotest_common.sh@10 -- # set +x 00:20:00.609 12:47:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:00.609 12:47:59 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:00.609 12:47:59 -- nvmf/common.sh@717 -- # local ip 00:20:00.609 12:47:59 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:00.609 12:47:59 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:00.609 12:47:59 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:00.609 12:47:59 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:00.609 12:47:59 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:00.609 12:47:59 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:00.609 12:47:59 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:00.609 12:47:59 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:00.609 12:47:59 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:00.609 12:47:59 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:20:00.609 12:47:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:00.609 12:47:59 -- common/autotest_common.sh@10 -- # set +x 00:20:00.867 nvme0n1 00:20:00.867 12:47:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:00.867 12:47:59 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:00.867 12:47:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:00.867 12:47:59 -- host/auth.sh@73 -- # jq -r '.[].name' 
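
The entries above are the tail of one connect_authenticate pass: after the authenticated attach, the script lists the controllers over RPC, checks that exactly the expected name came back, and detaches again before the next digest/dhgroup/key combination. The odd-looking [[ nvme0 == \n\v\m\e\0 ]] is an xtrace artifact: bash escapes the quoted right-hand side of == character by character when printing the trace, which is what forces a literal comparison rather than a glob match. A minimal sketch of that verification step, assuming SPDK's scripts/rpc.py drives the same RPC socket that the harness's rpc_cmd wraps:

    # List the attached bdev controllers and pull their names, as in the
    # rpc_cmd bdev_nvme_get_controllers / jq -r '.[].name' entries above.
    name=$(rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
    # The quoted right-hand side is what xtrace renders as \n\v\m\e\0:
    # a literal string match, not a pattern.
    [[ $name == "nvme0" ]]
    # Tear the controller down so the next iteration starts clean.
    rpc.py bdev_nvme_detach_controller nvme0
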
00:20:00.867 12:47:59 -- common/autotest_common.sh@10 -- # set +x 00:20:00.867 12:47:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:00.867 12:47:59 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:00.867 12:47:59 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:00.867 12:47:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:00.867 12:47:59 -- common/autotest_common.sh@10 -- # set +x 00:20:00.867 12:47:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:00.867 12:47:59 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:00.867 12:47:59 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:20:00.867 12:47:59 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:00.867 12:47:59 -- host/auth.sh@44 -- # digest=sha256 00:20:00.867 12:47:59 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:00.867 12:47:59 -- host/auth.sh@44 -- # keyid=4 00:20:00.867 12:47:59 -- host/auth.sh@45 -- # key=DHHC-1:03:Y2Y5MjAwNDVjYjhjYzEwMjhjOWVkNDA0MDQwYjJhZDQ2NTMwNDM3NmVlOWE1NDIyMzVjNThjNzIzY2UzNGMwNOLe90I=: 00:20:00.867 12:47:59 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:20:00.867 12:47:59 -- host/auth.sh@48 -- # echo ffdhe4096 00:20:00.867 12:47:59 -- host/auth.sh@49 -- # echo DHHC-1:03:Y2Y5MjAwNDVjYjhjYzEwMjhjOWVkNDA0MDQwYjJhZDQ2NTMwNDM3NmVlOWE1NDIyMzVjNThjNzIzY2UzNGMwNOLe90I=: 00:20:00.867 12:47:59 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 4 00:20:00.867 12:47:59 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:00.867 12:47:59 -- host/auth.sh@68 -- # digest=sha256 00:20:00.867 12:47:59 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:20:00.867 12:47:59 -- host/auth.sh@68 -- # keyid=4 00:20:00.867 12:47:59 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:00.867 12:47:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:00.867 12:47:59 -- common/autotest_common.sh@10 -- # set +x 00:20:00.867 12:47:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:00.867 12:47:59 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:00.867 12:47:59 -- nvmf/common.sh@717 -- # local ip 00:20:00.867 12:47:59 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:00.867 12:47:59 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:00.867 12:47:59 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:00.867 12:47:59 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:00.867 12:47:59 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:00.867 12:47:59 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:00.867 12:47:59 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:00.867 12:47:59 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:00.867 12:47:59 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:00.867 12:47:59 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:00.867 12:47:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:00.867 12:47:59 -- common/autotest_common.sh@10 -- # set +x 00:20:01.126 nvme0n1 00:20:01.126 12:48:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:01.126 12:48:00 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:01.126 12:48:00 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:01.126 12:48:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:01.126 12:48:00 -- common/autotest_common.sh@10 -- # set +x 00:20:01.126 
12:48:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:01.126 12:48:00 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:01.126 12:48:00 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:01.126 12:48:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:01.126 12:48:00 -- common/autotest_common.sh@10 -- # set +x 00:20:01.126 12:48:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:01.126 12:48:00 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:20:01.126 12:48:00 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:01.126 12:48:00 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:20:01.126 12:48:00 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:01.126 12:48:00 -- host/auth.sh@44 -- # digest=sha256 00:20:01.126 12:48:00 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:01.126 12:48:00 -- host/auth.sh@44 -- # keyid=0 00:20:01.126 12:48:00 -- host/auth.sh@45 -- # key=DHHC-1:00:YTI3NGY4ZjQzYWY4ZjA0ZTM2YTgwMTNkYTVhMGY5YjK11WGv: 00:20:01.126 12:48:00 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:20:01.126 12:48:00 -- host/auth.sh@48 -- # echo ffdhe6144 00:20:01.126 12:48:00 -- host/auth.sh@49 -- # echo DHHC-1:00:YTI3NGY4ZjQzYWY4ZjA0ZTM2YTgwMTNkYTVhMGY5YjK11WGv: 00:20:01.126 12:48:00 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 0 00:20:01.126 12:48:00 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:01.126 12:48:00 -- host/auth.sh@68 -- # digest=sha256 00:20:01.126 12:48:00 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:20:01.126 12:48:00 -- host/auth.sh@68 -- # keyid=0 00:20:01.126 12:48:00 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:01.126 12:48:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:01.126 12:48:00 -- common/autotest_common.sh@10 -- # set +x 00:20:01.126 12:48:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:01.126 12:48:00 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:01.126 12:48:00 -- nvmf/common.sh@717 -- # local ip 00:20:01.126 12:48:00 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:01.126 12:48:00 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:01.126 12:48:00 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:01.126 12:48:00 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:01.126 12:48:00 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:01.126 12:48:00 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:01.126 12:48:00 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:01.126 12:48:00 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:01.126 12:48:00 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:01.126 12:48:00 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:20:01.126 12:48:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:01.126 12:48:00 -- common/autotest_common.sh@10 -- # set +x 00:20:01.692 nvme0n1 00:20:01.692 12:48:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:01.692 12:48:00 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:01.692 12:48:00 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:01.692 12:48:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:01.692 12:48:00 -- common/autotest_common.sh@10 -- # set +x 00:20:01.692 12:48:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:01.692 12:48:00 -- 
host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:01.692 12:48:00 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:01.692 12:48:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:01.692 12:48:00 -- common/autotest_common.sh@10 -- # set +x 00:20:01.951 12:48:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:01.951 12:48:00 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:01.951 12:48:00 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:20:01.951 12:48:00 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:01.951 12:48:00 -- host/auth.sh@44 -- # digest=sha256 00:20:01.951 12:48:00 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:01.951 12:48:00 -- host/auth.sh@44 -- # keyid=1 00:20:01.951 12:48:00 -- host/auth.sh@45 -- # key=DHHC-1:00:YzA3ZGVkYzljMjBiM2VkMDVkY2UzMTU4MWNhNDE5NWM5NTYzNjg0YWFjZDY3YTI1H8zpJw==: 00:20:01.951 12:48:00 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:20:01.951 12:48:00 -- host/auth.sh@48 -- # echo ffdhe6144 00:20:01.951 12:48:00 -- host/auth.sh@49 -- # echo DHHC-1:00:YzA3ZGVkYzljMjBiM2VkMDVkY2UzMTU4MWNhNDE5NWM5NTYzNjg0YWFjZDY3YTI1H8zpJw==: 00:20:01.951 12:48:00 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 1 00:20:01.951 12:48:00 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:01.951 12:48:00 -- host/auth.sh@68 -- # digest=sha256 00:20:01.951 12:48:00 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:20:01.951 12:48:00 -- host/auth.sh@68 -- # keyid=1 00:20:01.952 12:48:00 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:01.952 12:48:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:01.952 12:48:00 -- common/autotest_common.sh@10 -- # set +x 00:20:01.952 12:48:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:01.952 12:48:00 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:01.952 12:48:00 -- nvmf/common.sh@717 -- # local ip 00:20:01.952 12:48:00 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:01.952 12:48:00 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:01.952 12:48:00 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:01.952 12:48:00 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:01.952 12:48:00 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:01.952 12:48:00 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:01.952 12:48:00 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:01.952 12:48:00 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:01.952 12:48:00 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:01.952 12:48:00 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:20:01.952 12:48:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:01.952 12:48:00 -- common/autotest_common.sh@10 -- # set +x 00:20:02.518 nvme0n1 00:20:02.518 12:48:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:02.518 12:48:01 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:02.518 12:48:01 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:02.518 12:48:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:02.518 12:48:01 -- common/autotest_common.sh@10 -- # set +x 00:20:02.518 12:48:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:02.518 12:48:01 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.518 12:48:01 -- host/auth.sh@74 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:20:02.518 12:48:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:02.518 12:48:01 -- common/autotest_common.sh@10 -- # set +x 00:20:02.518 12:48:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:02.518 12:48:01 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:02.518 12:48:01 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:20:02.518 12:48:01 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:02.518 12:48:01 -- host/auth.sh@44 -- # digest=sha256 00:20:02.518 12:48:01 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:02.518 12:48:01 -- host/auth.sh@44 -- # keyid=2 00:20:02.518 12:48:01 -- host/auth.sh@45 -- # key=DHHC-1:01:NDQyODc5MDE5MGQwNjMyZjAyNmE1M2FjYWMyODQ1YTVirL6m: 00:20:02.518 12:48:01 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:20:02.518 12:48:01 -- host/auth.sh@48 -- # echo ffdhe6144 00:20:02.518 12:48:01 -- host/auth.sh@49 -- # echo DHHC-1:01:NDQyODc5MDE5MGQwNjMyZjAyNmE1M2FjYWMyODQ1YTVirL6m: 00:20:02.518 12:48:01 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 2 00:20:02.518 12:48:01 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:02.518 12:48:01 -- host/auth.sh@68 -- # digest=sha256 00:20:02.518 12:48:01 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:20:02.518 12:48:01 -- host/auth.sh@68 -- # keyid=2 00:20:02.518 12:48:01 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:02.518 12:48:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:02.518 12:48:01 -- common/autotest_common.sh@10 -- # set +x 00:20:02.518 12:48:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:02.518 12:48:01 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:02.518 12:48:01 -- nvmf/common.sh@717 -- # local ip 00:20:02.518 12:48:01 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:02.518 12:48:01 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:02.518 12:48:01 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:02.518 12:48:01 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:02.518 12:48:01 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:02.518 12:48:01 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:02.518 12:48:01 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:02.518 12:48:01 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:02.518 12:48:01 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:02.518 12:48:01 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:02.518 12:48:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:02.518 12:48:01 -- common/autotest_common.sh@10 -- # set +x 00:20:03.084 nvme0n1 00:20:03.084 12:48:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:03.084 12:48:01 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:03.084 12:48:01 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:03.084 12:48:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:03.084 12:48:01 -- common/autotest_common.sh@10 -- # set +x 00:20:03.084 12:48:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:03.084 12:48:01 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:03.084 12:48:01 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:03.084 12:48:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:03.084 12:48:01 -- common/autotest_common.sh@10 -- # 
set +x 00:20:03.084 12:48:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:03.084 12:48:02 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:03.084 12:48:02 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:20:03.084 12:48:02 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:03.084 12:48:02 -- host/auth.sh@44 -- # digest=sha256 00:20:03.084 12:48:02 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:03.084 12:48:02 -- host/auth.sh@44 -- # keyid=3 00:20:03.084 12:48:02 -- host/auth.sh@45 -- # key=DHHC-1:02:Y2FiM2ZkZTFiNTA2ZWM2ODhjMGFkNTYwOGNlZTA3MzQzMDI1YmNmMjc5ZTkzYTlji3mYsw==: 00:20:03.084 12:48:02 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:20:03.084 12:48:02 -- host/auth.sh@48 -- # echo ffdhe6144 00:20:03.084 12:48:02 -- host/auth.sh@49 -- # echo DHHC-1:02:Y2FiM2ZkZTFiNTA2ZWM2ODhjMGFkNTYwOGNlZTA3MzQzMDI1YmNmMjc5ZTkzYTlji3mYsw==: 00:20:03.084 12:48:02 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 3 00:20:03.084 12:48:02 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:03.084 12:48:02 -- host/auth.sh@68 -- # digest=sha256 00:20:03.084 12:48:02 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:20:03.084 12:48:02 -- host/auth.sh@68 -- # keyid=3 00:20:03.084 12:48:02 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:03.084 12:48:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:03.084 12:48:02 -- common/autotest_common.sh@10 -- # set +x 00:20:03.084 12:48:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:03.084 12:48:02 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:03.084 12:48:02 -- nvmf/common.sh@717 -- # local ip 00:20:03.084 12:48:02 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:03.084 12:48:02 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:03.084 12:48:02 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:03.084 12:48:02 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:03.084 12:48:02 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:03.084 12:48:02 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:03.084 12:48:02 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:03.084 12:48:02 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:03.084 12:48:02 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:03.084 12:48:02 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:20:03.084 12:48:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:03.084 12:48:02 -- common/autotest_common.sh@10 -- # set +x 00:20:03.650 nvme0n1 00:20:03.650 12:48:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:03.650 12:48:02 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:03.650 12:48:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:03.650 12:48:02 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:03.650 12:48:02 -- common/autotest_common.sh@10 -- # set +x 00:20:03.650 12:48:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:03.650 12:48:02 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:03.650 12:48:02 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:03.650 12:48:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:03.650 12:48:02 -- common/autotest_common.sh@10 -- # set +x 00:20:03.650 12:48:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:03.650 12:48:02 -- 
host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:03.650 12:48:02 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:20:03.650 12:48:02 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:03.650 12:48:02 -- host/auth.sh@44 -- # digest=sha256 00:20:03.650 12:48:02 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:03.650 12:48:02 -- host/auth.sh@44 -- # keyid=4 00:20:03.650 12:48:02 -- host/auth.sh@45 -- # key=DHHC-1:03:Y2Y5MjAwNDVjYjhjYzEwMjhjOWVkNDA0MDQwYjJhZDQ2NTMwNDM3NmVlOWE1NDIyMzVjNThjNzIzY2UzNGMwNOLe90I=: 00:20:03.650 12:48:02 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:20:03.650 12:48:02 -- host/auth.sh@48 -- # echo ffdhe6144 00:20:03.650 12:48:02 -- host/auth.sh@49 -- # echo DHHC-1:03:Y2Y5MjAwNDVjYjhjYzEwMjhjOWVkNDA0MDQwYjJhZDQ2NTMwNDM3NmVlOWE1NDIyMzVjNThjNzIzY2UzNGMwNOLe90I=: 00:20:03.650 12:48:02 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 4 00:20:03.650 12:48:02 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:03.650 12:48:02 -- host/auth.sh@68 -- # digest=sha256 00:20:03.650 12:48:02 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:20:03.650 12:48:02 -- host/auth.sh@68 -- # keyid=4 00:20:03.650 12:48:02 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:03.650 12:48:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:03.650 12:48:02 -- common/autotest_common.sh@10 -- # set +x 00:20:03.650 12:48:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:03.650 12:48:02 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:03.650 12:48:02 -- nvmf/common.sh@717 -- # local ip 00:20:03.650 12:48:02 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:03.650 12:48:02 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:03.650 12:48:02 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:03.650 12:48:02 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:03.650 12:48:02 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:03.650 12:48:02 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:03.650 12:48:02 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:03.650 12:48:02 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:03.650 12:48:02 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:03.650 12:48:02 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:03.650 12:48:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:03.650 12:48:02 -- common/autotest_common.sh@10 -- # set +x 00:20:04.216 nvme0n1 00:20:04.216 12:48:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:04.216 12:48:03 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:04.216 12:48:03 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:04.216 12:48:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:04.216 12:48:03 -- common/autotest_common.sh@10 -- # set +x 00:20:04.216 12:48:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:04.216 12:48:03 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:04.216 12:48:03 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:04.216 12:48:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:04.216 12:48:03 -- common/autotest_common.sh@10 -- # set +x 00:20:04.216 12:48:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:04.216 12:48:03 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:20:04.216 12:48:03 -- 
host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:04.216 12:48:03 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:20:04.216 12:48:03 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:04.216 12:48:03 -- host/auth.sh@44 -- # digest=sha256 00:20:04.216 12:48:03 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:04.216 12:48:03 -- host/auth.sh@44 -- # keyid=0 00:20:04.216 12:48:03 -- host/auth.sh@45 -- # key=DHHC-1:00:YTI3NGY4ZjQzYWY4ZjA0ZTM2YTgwMTNkYTVhMGY5YjK11WGv: 00:20:04.216 12:48:03 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:20:04.216 12:48:03 -- host/auth.sh@48 -- # echo ffdhe8192 00:20:04.216 12:48:03 -- host/auth.sh@49 -- # echo DHHC-1:00:YTI3NGY4ZjQzYWY4ZjA0ZTM2YTgwMTNkYTVhMGY5YjK11WGv: 00:20:04.216 12:48:03 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 0 00:20:04.216 12:48:03 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:04.216 12:48:03 -- host/auth.sh@68 -- # digest=sha256 00:20:04.216 12:48:03 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:20:04.216 12:48:03 -- host/auth.sh@68 -- # keyid=0 00:20:04.216 12:48:03 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:04.216 12:48:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:04.216 12:48:03 -- common/autotest_common.sh@10 -- # set +x 00:20:04.216 12:48:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:04.474 12:48:03 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:04.474 12:48:03 -- nvmf/common.sh@717 -- # local ip 00:20:04.474 12:48:03 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:04.474 12:48:03 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:04.474 12:48:03 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:04.474 12:48:03 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:04.474 12:48:03 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:04.474 12:48:03 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:04.474 12:48:03 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:04.474 12:48:03 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:04.474 12:48:03 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:04.474 12:48:03 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:20:04.474 12:48:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:04.474 12:48:03 -- common/autotest_common.sh@10 -- # set +x 00:20:05.406 nvme0n1 00:20:05.406 12:48:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:05.406 12:48:04 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:05.406 12:48:04 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:05.406 12:48:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:05.406 12:48:04 -- common/autotest_common.sh@10 -- # set +x 00:20:05.406 12:48:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:05.406 12:48:04 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:05.406 12:48:04 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:05.406 12:48:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:05.406 12:48:04 -- common/autotest_common.sh@10 -- # set +x 00:20:05.406 12:48:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:05.406 12:48:04 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:05.406 12:48:04 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:20:05.406 12:48:04 -- 
host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:05.406 12:48:04 -- host/auth.sh@44 -- # digest=sha256 00:20:05.406 12:48:04 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:05.406 12:48:04 -- host/auth.sh@44 -- # keyid=1 00:20:05.406 12:48:04 -- host/auth.sh@45 -- # key=DHHC-1:00:YzA3ZGVkYzljMjBiM2VkMDVkY2UzMTU4MWNhNDE5NWM5NTYzNjg0YWFjZDY3YTI1H8zpJw==: 00:20:05.406 12:48:04 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:20:05.406 12:48:04 -- host/auth.sh@48 -- # echo ffdhe8192 00:20:05.406 12:48:04 -- host/auth.sh@49 -- # echo DHHC-1:00:YzA3ZGVkYzljMjBiM2VkMDVkY2UzMTU4MWNhNDE5NWM5NTYzNjg0YWFjZDY3YTI1H8zpJw==: 00:20:05.406 12:48:04 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 1 00:20:05.406 12:48:04 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:05.406 12:48:04 -- host/auth.sh@68 -- # digest=sha256 00:20:05.406 12:48:04 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:20:05.406 12:48:04 -- host/auth.sh@68 -- # keyid=1 00:20:05.406 12:48:04 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:05.406 12:48:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:05.406 12:48:04 -- common/autotest_common.sh@10 -- # set +x 00:20:05.406 12:48:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:05.406 12:48:04 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:05.406 12:48:04 -- nvmf/common.sh@717 -- # local ip 00:20:05.406 12:48:04 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:05.406 12:48:04 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:05.406 12:48:04 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:05.406 12:48:04 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:05.406 12:48:04 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:05.406 12:48:04 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:05.406 12:48:04 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:05.406 12:48:04 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:05.406 12:48:04 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:05.406 12:48:04 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:20:05.406 12:48:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:05.406 12:48:04 -- common/autotest_common.sh@10 -- # set +x 00:20:06.340 nvme0n1 00:20:06.340 12:48:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:06.340 12:48:05 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:06.340 12:48:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:06.340 12:48:05 -- common/autotest_common.sh@10 -- # set +x 00:20:06.340 12:48:05 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:06.598 12:48:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:06.598 12:48:05 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:06.598 12:48:05 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:06.598 12:48:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:06.598 12:48:05 -- common/autotest_common.sh@10 -- # set +x 00:20:06.598 12:48:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:06.598 12:48:05 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:06.598 12:48:05 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:20:06.598 12:48:05 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:06.598 12:48:05 -- host/auth.sh@44 -- # digest=sha256 
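
The host/auth.sh@107, @108, and @109 markers that keep reappearing in this trace are three nested loops: every DH-CHAP digest is paired with every DH group and every key index, and each combination gets its own nvmet_auth_set_key plus connect_authenticate round trip. The host-side half is the part worth noting: bdev_nvme_set_options pins the initiator to exactly one digest and one DH group before the attach, so a login that succeeds proves that specific pairing negotiated end to end. A sketch of that host-side core, with the address, port, and NQNs copied from the log; rpc.py stands in for the harness's rpc_cmd, and key1 names a key object registered earlier in the run, outside this excerpt:

    # Offer exactly one digest/dhgroup pair for DH-HMAC-CHAP negotiation.
    rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
    # Attach with the matching secret; if the target's key for this host
    # does not line up, authentication fails and the attach errors out.
    rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1
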
00:20:06.598 12:48:05 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:06.598 12:48:05 -- host/auth.sh@44 -- # keyid=2 00:20:06.598 12:48:05 -- host/auth.sh@45 -- # key=DHHC-1:01:NDQyODc5MDE5MGQwNjMyZjAyNmE1M2FjYWMyODQ1YTVirL6m: 00:20:06.598 12:48:05 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:20:06.598 12:48:05 -- host/auth.sh@48 -- # echo ffdhe8192 00:20:06.598 12:48:05 -- host/auth.sh@49 -- # echo DHHC-1:01:NDQyODc5MDE5MGQwNjMyZjAyNmE1M2FjYWMyODQ1YTVirL6m: 00:20:06.598 12:48:05 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 2 00:20:06.598 12:48:05 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:06.598 12:48:05 -- host/auth.sh@68 -- # digest=sha256 00:20:06.598 12:48:05 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:20:06.598 12:48:05 -- host/auth.sh@68 -- # keyid=2 00:20:06.598 12:48:05 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:06.598 12:48:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:06.598 12:48:05 -- common/autotest_common.sh@10 -- # set +x 00:20:06.598 12:48:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:06.598 12:48:05 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:06.598 12:48:05 -- nvmf/common.sh@717 -- # local ip 00:20:06.598 12:48:05 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:06.598 12:48:05 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:06.598 12:48:05 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:06.598 12:48:05 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:06.598 12:48:05 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:06.598 12:48:05 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:06.598 12:48:05 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:06.598 12:48:05 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:06.598 12:48:05 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:06.598 12:48:05 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:06.598 12:48:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:06.598 12:48:05 -- common/autotest_common.sh@10 -- # set +x 00:20:07.575 nvme0n1 00:20:07.575 12:48:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:07.575 12:48:06 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:07.575 12:48:06 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:07.575 12:48:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:07.575 12:48:06 -- common/autotest_common.sh@10 -- # set +x 00:20:07.575 12:48:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:07.575 12:48:06 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:07.575 12:48:06 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:07.575 12:48:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:07.575 12:48:06 -- common/autotest_common.sh@10 -- # set +x 00:20:07.575 12:48:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:07.575 12:48:06 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:07.575 12:48:06 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:20:07.575 12:48:06 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:07.575 12:48:06 -- host/auth.sh@44 -- # digest=sha256 00:20:07.575 12:48:06 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:07.575 12:48:06 -- host/auth.sh@44 -- # keyid=3 00:20:07.575 12:48:06 -- host/auth.sh@45 -- # 
key=DHHC-1:02:Y2FiM2ZkZTFiNTA2ZWM2ODhjMGFkNTYwOGNlZTA3MzQzMDI1YmNmMjc5ZTkzYTlji3mYsw==: 00:20:07.575 12:48:06 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:20:07.575 12:48:06 -- host/auth.sh@48 -- # echo ffdhe8192 00:20:07.575 12:48:06 -- host/auth.sh@49 -- # echo DHHC-1:02:Y2FiM2ZkZTFiNTA2ZWM2ODhjMGFkNTYwOGNlZTA3MzQzMDI1YmNmMjc5ZTkzYTlji3mYsw==: 00:20:07.575 12:48:06 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 3 00:20:07.575 12:48:06 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:07.575 12:48:06 -- host/auth.sh@68 -- # digest=sha256 00:20:07.575 12:48:06 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:20:07.575 12:48:06 -- host/auth.sh@68 -- # keyid=3 00:20:07.575 12:48:06 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:07.575 12:48:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:07.575 12:48:06 -- common/autotest_common.sh@10 -- # set +x 00:20:07.575 12:48:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:07.575 12:48:06 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:07.575 12:48:06 -- nvmf/common.sh@717 -- # local ip 00:20:07.575 12:48:06 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:07.575 12:48:06 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:07.575 12:48:06 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:07.575 12:48:06 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:07.575 12:48:06 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:07.575 12:48:06 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:07.575 12:48:06 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:07.575 12:48:06 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:07.575 12:48:06 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:07.575 12:48:06 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:20:07.575 12:48:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:07.575 12:48:06 -- common/autotest_common.sh@10 -- # set +x 00:20:08.509 nvme0n1 00:20:08.509 12:48:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:08.509 12:48:07 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:08.509 12:48:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:08.509 12:48:07 -- common/autotest_common.sh@10 -- # set +x 00:20:08.509 12:48:07 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:08.509 12:48:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:08.509 12:48:07 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:08.509 12:48:07 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:08.509 12:48:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:08.509 12:48:07 -- common/autotest_common.sh@10 -- # set +x 00:20:08.766 12:48:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:08.766 12:48:07 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:08.766 12:48:07 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:20:08.766 12:48:07 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:08.766 12:48:07 -- host/auth.sh@44 -- # digest=sha256 00:20:08.766 12:48:07 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:08.766 12:48:07 -- host/auth.sh@44 -- # keyid=4 00:20:08.766 12:48:07 -- host/auth.sh@45 -- # key=DHHC-1:03:Y2Y5MjAwNDVjYjhjYzEwMjhjOWVkNDA0MDQwYjJhZDQ2NTMwNDM3NmVlOWE1NDIyMzVjNThjNzIzY2UzNGMwNOLe90I=: 00:20:08.766 
12:48:07 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:20:08.766 12:48:07 -- host/auth.sh@48 -- # echo ffdhe8192 00:20:08.766 12:48:07 -- host/auth.sh@49 -- # echo DHHC-1:03:Y2Y5MjAwNDVjYjhjYzEwMjhjOWVkNDA0MDQwYjJhZDQ2NTMwNDM3NmVlOWE1NDIyMzVjNThjNzIzY2UzNGMwNOLe90I=: 00:20:08.766 12:48:07 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 4 00:20:08.766 12:48:07 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:08.766 12:48:07 -- host/auth.sh@68 -- # digest=sha256 00:20:08.766 12:48:07 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:20:08.766 12:48:07 -- host/auth.sh@68 -- # keyid=4 00:20:08.766 12:48:07 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:08.766 12:48:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:08.766 12:48:07 -- common/autotest_common.sh@10 -- # set +x 00:20:08.766 12:48:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:08.766 12:48:07 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:08.766 12:48:07 -- nvmf/common.sh@717 -- # local ip 00:20:08.766 12:48:07 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:08.766 12:48:07 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:08.766 12:48:07 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:08.766 12:48:07 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:08.766 12:48:07 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:08.766 12:48:07 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:08.766 12:48:07 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:08.766 12:48:07 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:08.766 12:48:07 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:08.766 12:48:07 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:08.766 12:48:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:08.766 12:48:07 -- common/autotest_common.sh@10 -- # set +x 00:20:09.699 nvme0n1 00:20:09.699 12:48:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:09.699 12:48:08 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:09.699 12:48:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:09.699 12:48:08 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:09.699 12:48:08 -- common/autotest_common.sh@10 -- # set +x 00:20:09.699 12:48:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:09.699 12:48:08 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:09.699 12:48:08 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:09.699 12:48:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:09.699 12:48:08 -- common/autotest_common.sh@10 -- # set +x 00:20:09.699 12:48:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:09.699 12:48:08 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:20:09.699 12:48:08 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:20:09.699 12:48:08 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:09.699 12:48:08 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:20:09.699 12:48:08 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:09.699 12:48:08 -- host/auth.sh@44 -- # digest=sha384 00:20:09.699 12:48:08 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:09.699 12:48:08 -- host/auth.sh@44 -- # keyid=0 00:20:09.699 12:48:08 -- host/auth.sh@45 -- # 
key=DHHC-1:00:YTI3NGY4ZjQzYWY4ZjA0ZTM2YTgwMTNkYTVhMGY5YjK11WGv: 00:20:09.699 12:48:08 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:09.699 12:48:08 -- host/auth.sh@48 -- # echo ffdhe2048 00:20:09.699 12:48:08 -- host/auth.sh@49 -- # echo DHHC-1:00:YTI3NGY4ZjQzYWY4ZjA0ZTM2YTgwMTNkYTVhMGY5YjK11WGv: 00:20:09.699 12:48:08 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 0 00:20:09.699 12:48:08 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:09.699 12:48:08 -- host/auth.sh@68 -- # digest=sha384 00:20:09.699 12:48:08 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:20:09.699 12:48:08 -- host/auth.sh@68 -- # keyid=0 00:20:09.699 12:48:08 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:09.699 12:48:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:09.699 12:48:08 -- common/autotest_common.sh@10 -- # set +x 00:20:09.699 12:48:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:09.699 12:48:08 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:09.699 12:48:08 -- nvmf/common.sh@717 -- # local ip 00:20:09.699 12:48:08 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:09.699 12:48:08 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:09.699 12:48:08 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:09.699 12:48:08 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:09.699 12:48:08 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:09.699 12:48:08 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:09.699 12:48:08 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:09.699 12:48:08 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:09.699 12:48:08 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:09.699 12:48:08 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:20:09.699 12:48:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:09.699 12:48:08 -- common/autotest_common.sh@10 -- # set +x 00:20:09.956 nvme0n1 00:20:09.956 12:48:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:09.956 12:48:08 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:09.956 12:48:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:09.956 12:48:08 -- common/autotest_common.sh@10 -- # set +x 00:20:09.956 12:48:08 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:09.956 12:48:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:09.956 12:48:08 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:09.956 12:48:08 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:09.956 12:48:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:09.956 12:48:08 -- common/autotest_common.sh@10 -- # set +x 00:20:09.956 12:48:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:09.956 12:48:08 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:09.956 12:48:08 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:20:09.956 12:48:08 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:09.956 12:48:08 -- host/auth.sh@44 -- # digest=sha384 00:20:09.956 12:48:08 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:09.956 12:48:08 -- host/auth.sh@44 -- # keyid=1 00:20:09.956 12:48:08 -- host/auth.sh@45 -- # key=DHHC-1:00:YzA3ZGVkYzljMjBiM2VkMDVkY2UzMTU4MWNhNDE5NWM5NTYzNjg0YWFjZDY3YTI1H8zpJw==: 00:20:09.956 12:48:08 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:09.956 
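
Each nvmet_auth_set_key call in this trace ends in three bare echo entries because bash xtrace does not print redirections, so the destinations stay invisible. The secrets being echoed follow the NVMe DH-HMAC-CHAP key format, DHHC-1:tt:<base64>:, where tt selects the key transform (00 means the secret is used as is; 01, 02, and 03 mean it was transformed with SHA-256, SHA-384, and SHA-512 respectively) and the base64 payload carries the secret with a CRC-32 appended. The writes themselves most plausibly land in the kernel nvmet configfs attributes for this host; a sketch of that assumed function body, with paths that are an inference, not something shown anywhere in this log:

    # Assumed destinations of the three echoes (xtrace hides redirections):
    # the per-host DH-HMAC-CHAP attributes in nvmet configfs.
    host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo "hmac($digest)" > "$host_dir/dhchap_hash"      # e.g. 'hmac(sha384)'
    echo "$dhgroup"      > "$host_dir/dhchap_dhgroup"   # e.g. ffdhe2048
    echo "$key"          > "$host_dir/dhchap_key"       # the DHHC-1 string
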
12:48:08 -- host/auth.sh@48 -- # echo ffdhe2048 00:20:09.956 12:48:08 -- host/auth.sh@49 -- # echo DHHC-1:00:YzA3ZGVkYzljMjBiM2VkMDVkY2UzMTU4MWNhNDE5NWM5NTYzNjg0YWFjZDY3YTI1H8zpJw==: 00:20:09.956 12:48:08 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 1 00:20:09.956 12:48:08 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:09.956 12:48:08 -- host/auth.sh@68 -- # digest=sha384 00:20:09.956 12:48:08 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:20:09.956 12:48:08 -- host/auth.sh@68 -- # keyid=1 00:20:09.956 12:48:08 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:09.956 12:48:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:09.956 12:48:08 -- common/autotest_common.sh@10 -- # set +x 00:20:09.956 12:48:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:09.956 12:48:08 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:09.956 12:48:08 -- nvmf/common.sh@717 -- # local ip 00:20:09.956 12:48:08 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:09.956 12:48:08 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:09.956 12:48:08 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:09.957 12:48:08 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:09.957 12:48:08 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:09.957 12:48:08 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:09.957 12:48:08 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:09.957 12:48:08 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:09.957 12:48:08 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:09.957 12:48:08 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:20:09.957 12:48:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:09.957 12:48:08 -- common/autotest_common.sh@10 -- # set +x 00:20:09.957 nvme0n1 00:20:09.957 12:48:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:09.957 12:48:08 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:09.957 12:48:08 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:09.957 12:48:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:09.957 12:48:08 -- common/autotest_common.sh@10 -- # set +x 00:20:09.957 12:48:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:10.214 12:48:09 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:10.214 12:48:09 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:10.214 12:48:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:10.214 12:48:09 -- common/autotest_common.sh@10 -- # set +x 00:20:10.215 12:48:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:10.215 12:48:09 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:10.215 12:48:09 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:20:10.215 12:48:09 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:10.215 12:48:09 -- host/auth.sh@44 -- # digest=sha384 00:20:10.215 12:48:09 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:10.215 12:48:09 -- host/auth.sh@44 -- # keyid=2 00:20:10.215 12:48:09 -- host/auth.sh@45 -- # key=DHHC-1:01:NDQyODc5MDE5MGQwNjMyZjAyNmE1M2FjYWMyODQ1YTVirL6m: 00:20:10.215 12:48:09 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:10.215 12:48:09 -- host/auth.sh@48 -- # echo ffdhe2048 00:20:10.215 12:48:09 -- host/auth.sh@49 -- # echo 
DHHC-1:01:NDQyODc5MDE5MGQwNjMyZjAyNmE1M2FjYWMyODQ1YTVirL6m: 00:20:10.215 12:48:09 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 2 00:20:10.215 12:48:09 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:10.215 12:48:09 -- host/auth.sh@68 -- # digest=sha384 00:20:10.215 12:48:09 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:20:10.215 12:48:09 -- host/auth.sh@68 -- # keyid=2 00:20:10.215 12:48:09 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:10.215 12:48:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:10.215 12:48:09 -- common/autotest_common.sh@10 -- # set +x 00:20:10.215 12:48:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:10.215 12:48:09 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:10.215 12:48:09 -- nvmf/common.sh@717 -- # local ip 00:20:10.215 12:48:09 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:10.215 12:48:09 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:10.215 12:48:09 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:10.215 12:48:09 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:10.215 12:48:09 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:10.215 12:48:09 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:10.215 12:48:09 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:10.215 12:48:09 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:10.215 12:48:09 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:10.215 12:48:09 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:10.215 12:48:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:10.215 12:48:09 -- common/autotest_common.sh@10 -- # set +x 00:20:10.215 nvme0n1 00:20:10.215 12:48:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:10.215 12:48:09 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:10.215 12:48:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:10.215 12:48:09 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:10.215 12:48:09 -- common/autotest_common.sh@10 -- # set +x 00:20:10.215 12:48:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:10.215 12:48:09 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:10.215 12:48:09 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:10.215 12:48:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:10.215 12:48:09 -- common/autotest_common.sh@10 -- # set +x 00:20:10.215 12:48:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:10.215 12:48:09 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:10.215 12:48:09 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:20:10.215 12:48:09 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:10.215 12:48:09 -- host/auth.sh@44 -- # digest=sha384 00:20:10.215 12:48:09 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:10.215 12:48:09 -- host/auth.sh@44 -- # keyid=3 00:20:10.215 12:48:09 -- host/auth.sh@45 -- # key=DHHC-1:02:Y2FiM2ZkZTFiNTA2ZWM2ODhjMGFkNTYwOGNlZTA3MzQzMDI1YmNmMjc5ZTkzYTlji3mYsw==: 00:20:10.215 12:48:09 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:10.215 12:48:09 -- host/auth.sh@48 -- # echo ffdhe2048 00:20:10.215 12:48:09 -- host/auth.sh@49 -- # echo DHHC-1:02:Y2FiM2ZkZTFiNTA2ZWM2ODhjMGFkNTYwOGNlZTA3MzQzMDI1YmNmMjc5ZTkzYTlji3mYsw==: 00:20:10.215 12:48:09 -- host/auth.sh@111 -- # 
connect_authenticate sha384 ffdhe2048 3 00:20:10.215 12:48:09 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:10.215 12:48:09 -- host/auth.sh@68 -- # digest=sha384 00:20:10.215 12:48:09 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:20:10.215 12:48:09 -- host/auth.sh@68 -- # keyid=3 00:20:10.215 12:48:09 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:10.215 12:48:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:10.215 12:48:09 -- common/autotest_common.sh@10 -- # set +x 00:20:10.215 12:48:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:10.215 12:48:09 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:10.215 12:48:09 -- nvmf/common.sh@717 -- # local ip 00:20:10.215 12:48:09 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:10.215 12:48:09 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:10.215 12:48:09 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:10.215 12:48:09 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:10.215 12:48:09 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:10.215 12:48:09 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:10.215 12:48:09 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:10.215 12:48:09 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:10.215 12:48:09 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:10.215 12:48:09 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:20:10.215 12:48:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:10.215 12:48:09 -- common/autotest_common.sh@10 -- # set +x 00:20:10.473 nvme0n1 00:20:10.473 12:48:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:10.473 12:48:09 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:10.473 12:48:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:10.473 12:48:09 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:10.473 12:48:09 -- common/autotest_common.sh@10 -- # set +x 00:20:10.473 12:48:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:10.473 12:48:09 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:10.473 12:48:09 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:10.473 12:48:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:10.473 12:48:09 -- common/autotest_common.sh@10 -- # set +x 00:20:10.473 12:48:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:10.473 12:48:09 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:10.473 12:48:09 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:20:10.473 12:48:09 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:10.473 12:48:09 -- host/auth.sh@44 -- # digest=sha384 00:20:10.473 12:48:09 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:10.473 12:48:09 -- host/auth.sh@44 -- # keyid=4 00:20:10.473 12:48:09 -- host/auth.sh@45 -- # key=DHHC-1:03:Y2Y5MjAwNDVjYjhjYzEwMjhjOWVkNDA0MDQwYjJhZDQ2NTMwNDM3NmVlOWE1NDIyMzVjNThjNzIzY2UzNGMwNOLe90I=: 00:20:10.473 12:48:09 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:10.473 12:48:09 -- host/auth.sh@48 -- # echo ffdhe2048 00:20:10.473 12:48:09 -- host/auth.sh@49 -- # echo DHHC-1:03:Y2Y5MjAwNDVjYjhjYzEwMjhjOWVkNDA0MDQwYjJhZDQ2NTMwNDM3NmVlOWE1NDIyMzVjNThjNzIzY2UzNGMwNOLe90I=: 00:20:10.473 12:48:09 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 4 00:20:10.473 12:48:09 -- host/auth.sh@66 
-- # local digest dhgroup keyid 00:20:10.473 12:48:09 -- host/auth.sh@68 -- # digest=sha384 00:20:10.473 12:48:09 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:20:10.473 12:48:09 -- host/auth.sh@68 -- # keyid=4 00:20:10.473 12:48:09 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:10.473 12:48:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:10.473 12:48:09 -- common/autotest_common.sh@10 -- # set +x 00:20:10.473 12:48:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:10.473 12:48:09 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:10.473 12:48:09 -- nvmf/common.sh@717 -- # local ip 00:20:10.473 12:48:09 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:10.473 12:48:09 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:10.473 12:48:09 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:10.473 12:48:09 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:10.473 12:48:09 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:10.473 12:48:09 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:10.473 12:48:09 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:10.473 12:48:09 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:10.473 12:48:09 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:10.473 12:48:09 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:10.473 12:48:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:10.473 12:48:09 -- common/autotest_common.sh@10 -- # set +x 00:20:10.731 nvme0n1 00:20:10.731 12:48:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:10.731 12:48:09 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:10.731 12:48:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:10.731 12:48:09 -- common/autotest_common.sh@10 -- # set +x 00:20:10.731 12:48:09 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:10.731 12:48:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:10.731 12:48:09 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:10.731 12:48:09 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:10.731 12:48:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:10.731 12:48:09 -- common/autotest_common.sh@10 -- # set +x 00:20:10.731 12:48:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:10.731 12:48:09 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:20:10.731 12:48:09 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:10.731 12:48:09 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:20:10.731 12:48:09 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:10.731 12:48:09 -- host/auth.sh@44 -- # digest=sha384 00:20:10.731 12:48:09 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:10.731 12:48:09 -- host/auth.sh@44 -- # keyid=0 00:20:10.731 12:48:09 -- host/auth.sh@45 -- # key=DHHC-1:00:YTI3NGY4ZjQzYWY4ZjA0ZTM2YTgwMTNkYTVhMGY5YjK11WGv: 00:20:10.731 12:48:09 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:10.731 12:48:09 -- host/auth.sh@48 -- # echo ffdhe3072 00:20:10.731 12:48:09 -- host/auth.sh@49 -- # echo DHHC-1:00:YTI3NGY4ZjQzYWY4ZjA0ZTM2YTgwMTNkYTVhMGY5YjK11WGv: 00:20:10.731 12:48:09 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 0 00:20:10.731 12:48:09 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:10.731 12:48:09 -- host/auth.sh@68 -- # 
digest=sha384 00:20:10.731 12:48:09 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:20:10.731 12:48:09 -- host/auth.sh@68 -- # keyid=0 00:20:10.731 12:48:09 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:10.731 12:48:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:10.731 12:48:09 -- common/autotest_common.sh@10 -- # set +x 00:20:10.731 12:48:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:10.731 12:48:09 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:10.731 12:48:09 -- nvmf/common.sh@717 -- # local ip 00:20:10.731 12:48:09 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:10.731 12:48:09 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:10.731 12:48:09 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:10.731 12:48:09 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:10.731 12:48:09 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:10.731 12:48:09 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:10.731 12:48:09 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:10.731 12:48:09 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:10.731 12:48:09 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:10.731 12:48:09 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:20:10.731 12:48:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:10.731 12:48:09 -- common/autotest_common.sh@10 -- # set +x 00:20:10.990 nvme0n1 00:20:10.990 12:48:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:10.990 12:48:09 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:10.990 12:48:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:10.990 12:48:09 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:10.990 12:48:09 -- common/autotest_common.sh@10 -- # set +x 00:20:10.990 12:48:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:10.990 12:48:09 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:10.990 12:48:09 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:10.990 12:48:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:10.990 12:48:09 -- common/autotest_common.sh@10 -- # set +x 00:20:10.990 12:48:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:10.990 12:48:09 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:10.990 12:48:09 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:20:10.990 12:48:09 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:10.990 12:48:09 -- host/auth.sh@44 -- # digest=sha384 00:20:10.990 12:48:09 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:10.990 12:48:09 -- host/auth.sh@44 -- # keyid=1 00:20:10.990 12:48:09 -- host/auth.sh@45 -- # key=DHHC-1:00:YzA3ZGVkYzljMjBiM2VkMDVkY2UzMTU4MWNhNDE5NWM5NTYzNjg0YWFjZDY3YTI1H8zpJw==: 00:20:10.990 12:48:09 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:10.990 12:48:09 -- host/auth.sh@48 -- # echo ffdhe3072 00:20:10.990 12:48:09 -- host/auth.sh@49 -- # echo DHHC-1:00:YzA3ZGVkYzljMjBiM2VkMDVkY2UzMTU4MWNhNDE5NWM5NTYzNjg0YWFjZDY3YTI1H8zpJw==: 00:20:10.990 12:48:09 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 1 00:20:10.990 12:48:09 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:10.990 12:48:09 -- host/auth.sh@68 -- # digest=sha384 00:20:10.990 12:48:09 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:20:10.990 12:48:09 -- host/auth.sh@68 
-- # keyid=1 00:20:10.990 12:48:09 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:10.990 12:48:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:10.990 12:48:09 -- common/autotest_common.sh@10 -- # set +x 00:20:10.990 12:48:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:10.990 12:48:09 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:10.990 12:48:09 -- nvmf/common.sh@717 -- # local ip 00:20:10.990 12:48:09 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:10.990 12:48:09 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:10.990 12:48:09 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:10.990 12:48:09 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:10.990 12:48:09 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:10.990 12:48:09 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:10.990 12:48:09 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:10.990 12:48:09 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:10.990 12:48:09 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:10.990 12:48:09 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:20:10.990 12:48:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:10.990 12:48:09 -- common/autotest_common.sh@10 -- # set +x 00:20:11.247 nvme0n1 00:20:11.247 12:48:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:11.247 12:48:10 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:11.247 12:48:10 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:11.247 12:48:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:11.247 12:48:10 -- common/autotest_common.sh@10 -- # set +x 00:20:11.247 12:48:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:11.247 12:48:10 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:11.247 12:48:10 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:11.247 12:48:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:11.247 12:48:10 -- common/autotest_common.sh@10 -- # set +x 00:20:11.248 12:48:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:11.248 12:48:10 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:11.248 12:48:10 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:20:11.248 12:48:10 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:11.248 12:48:10 -- host/auth.sh@44 -- # digest=sha384 00:20:11.248 12:48:10 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:11.248 12:48:10 -- host/auth.sh@44 -- # keyid=2 00:20:11.248 12:48:10 -- host/auth.sh@45 -- # key=DHHC-1:01:NDQyODc5MDE5MGQwNjMyZjAyNmE1M2FjYWMyODQ1YTVirL6m: 00:20:11.248 12:48:10 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:11.248 12:48:10 -- host/auth.sh@48 -- # echo ffdhe3072 00:20:11.248 12:48:10 -- host/auth.sh@49 -- # echo DHHC-1:01:NDQyODc5MDE5MGQwNjMyZjAyNmE1M2FjYWMyODQ1YTVirL6m: 00:20:11.248 12:48:10 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 2 00:20:11.248 12:48:10 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:11.248 12:48:10 -- host/auth.sh@68 -- # digest=sha384 00:20:11.248 12:48:10 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:20:11.248 12:48:10 -- host/auth.sh@68 -- # keyid=2 00:20:11.248 12:48:10 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:11.248 12:48:10 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:20:11.248 12:48:10 -- common/autotest_common.sh@10 -- # set +x 00:20:11.248 12:48:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:11.248 12:48:10 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:11.248 12:48:10 -- nvmf/common.sh@717 -- # local ip 00:20:11.248 12:48:10 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:11.248 12:48:10 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:11.248 12:48:10 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:11.248 12:48:10 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:11.248 12:48:10 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:11.248 12:48:10 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:11.248 12:48:10 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:11.248 12:48:10 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:11.248 12:48:10 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:11.248 12:48:10 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:11.248 12:48:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:11.248 12:48:10 -- common/autotest_common.sh@10 -- # set +x 00:20:11.505 nvme0n1 00:20:11.505 12:48:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:11.505 12:48:10 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:11.505 12:48:10 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:11.505 12:48:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:11.505 12:48:10 -- common/autotest_common.sh@10 -- # set +x 00:20:11.505 12:48:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:11.505 12:48:10 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:11.506 12:48:10 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:11.506 12:48:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:11.506 12:48:10 -- common/autotest_common.sh@10 -- # set +x 00:20:11.506 12:48:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:11.506 12:48:10 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:11.506 12:48:10 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:20:11.506 12:48:10 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:11.506 12:48:10 -- host/auth.sh@44 -- # digest=sha384 00:20:11.506 12:48:10 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:11.506 12:48:10 -- host/auth.sh@44 -- # keyid=3 00:20:11.506 12:48:10 -- host/auth.sh@45 -- # key=DHHC-1:02:Y2FiM2ZkZTFiNTA2ZWM2ODhjMGFkNTYwOGNlZTA3MzQzMDI1YmNmMjc5ZTkzYTlji3mYsw==: 00:20:11.506 12:48:10 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:11.506 12:48:10 -- host/auth.sh@48 -- # echo ffdhe3072 00:20:11.506 12:48:10 -- host/auth.sh@49 -- # echo DHHC-1:02:Y2FiM2ZkZTFiNTA2ZWM2ODhjMGFkNTYwOGNlZTA3MzQzMDI1YmNmMjc5ZTkzYTlji3mYsw==: 00:20:11.506 12:48:10 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 3 00:20:11.506 12:48:10 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:11.506 12:48:10 -- host/auth.sh@68 -- # digest=sha384 00:20:11.506 12:48:10 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:20:11.506 12:48:10 -- host/auth.sh@68 -- # keyid=3 00:20:11.506 12:48:10 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:11.506 12:48:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:11.506 12:48:10 -- common/autotest_common.sh@10 -- # set +x 
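The trace above repeats one round trip per digest/dhgroup/keyid combination. As a reading aid, here is a minimal bash sketch of what each connect_authenticate iteration amounts to, assuming rpc_cmd is the usual autotest wrapper around SPDK's scripts/rpc.py; every RPC name and flag below is taken verbatim from the trace:

connect_authenticate() {
    local digest=$1 dhgroup=$2 keyid=$3
    # Restrict the initiator to one digest and one DH group for this pass.
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    # Attach over TCP; this only succeeds if DH-HMAC-CHAP completes.
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key$keyid"
    # Verify the controller actually materialized, then tear it down.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
}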
00:20:11.506 12:48:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:11.506 12:48:10 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:11.506 12:48:10 -- nvmf/common.sh@717 -- # local ip 00:20:11.506 12:48:10 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:11.506 12:48:10 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:11.506 12:48:10 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:11.506 12:48:10 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:11.506 12:48:10 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:11.506 12:48:10 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:11.506 12:48:10 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:11.506 12:48:10 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:11.506 12:48:10 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:11.506 12:48:10 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:20:11.506 12:48:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:11.506 12:48:10 -- common/autotest_common.sh@10 -- # set +x 00:20:11.763 nvme0n1 00:20:11.763 12:48:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:11.763 12:48:10 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:11.763 12:48:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:11.763 12:48:10 -- common/autotest_common.sh@10 -- # set +x 00:20:11.763 12:48:10 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:11.763 12:48:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:11.763 12:48:10 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:11.763 12:48:10 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:11.763 12:48:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:11.763 12:48:10 -- common/autotest_common.sh@10 -- # set +x 00:20:11.763 12:48:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:11.763 12:48:10 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:11.763 12:48:10 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:20:11.763 12:48:10 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:11.764 12:48:10 -- host/auth.sh@44 -- # digest=sha384 00:20:11.764 12:48:10 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:11.764 12:48:10 -- host/auth.sh@44 -- # keyid=4 00:20:11.764 12:48:10 -- host/auth.sh@45 -- # key=DHHC-1:03:Y2Y5MjAwNDVjYjhjYzEwMjhjOWVkNDA0MDQwYjJhZDQ2NTMwNDM3NmVlOWE1NDIyMzVjNThjNzIzY2UzNGMwNOLe90I=: 00:20:11.764 12:48:10 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:11.764 12:48:10 -- host/auth.sh@48 -- # echo ffdhe3072 00:20:11.764 12:48:10 -- host/auth.sh@49 -- # echo DHHC-1:03:Y2Y5MjAwNDVjYjhjYzEwMjhjOWVkNDA0MDQwYjJhZDQ2NTMwNDM3NmVlOWE1NDIyMzVjNThjNzIzY2UzNGMwNOLe90I=: 00:20:11.764 12:48:10 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 4 00:20:11.764 12:48:10 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:11.764 12:48:10 -- host/auth.sh@68 -- # digest=sha384 00:20:11.764 12:48:10 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:20:11.764 12:48:10 -- host/auth.sh@68 -- # keyid=4 00:20:11.764 12:48:10 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:11.764 12:48:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:11.764 12:48:10 -- common/autotest_common.sh@10 -- # set +x 00:20:11.764 12:48:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
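The host/auth.sh@42-@49 frames above are the target-side half: nvmet_auth_set_key echoes the digest, the DH group, and a DHHC-1 secret for the allowed host. The trace shows only the echoed values, not where they land; a sketch under the assumption that they are redirected into the kernel nvmet configfs attributes of the host entry:

nvmet_auth_set_key() {
    local digest=$1 dhgroup=$2 keyid=$3 key=$4
    # Path and attribute names are assumptions, not visible in the trace.
    local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo "hmac($digest)" > "$host/dhchap_hash"     # 'hmac(sha384)' in this run
    echo "$dhgroup"      > "$host/dhchap_dhgroup"  # e.g. ffdhe3072
    echo "$key"          > "$host/dhchap_key"      # the DHHC-1:... string
}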
00:20:11.764 12:48:10 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:11.764 12:48:10 -- nvmf/common.sh@717 -- # local ip 00:20:11.764 12:48:10 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:11.764 12:48:10 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:11.764 12:48:10 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:11.764 12:48:10 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:11.764 12:48:10 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:11.764 12:48:10 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:11.764 12:48:10 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:11.764 12:48:10 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:11.764 12:48:10 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:11.764 12:48:10 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:11.764 12:48:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:11.764 12:48:10 -- common/autotest_common.sh@10 -- # set +x 00:20:12.021 nvme0n1 00:20:12.021 12:48:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:12.021 12:48:10 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:12.021 12:48:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:12.022 12:48:10 -- common/autotest_common.sh@10 -- # set +x 00:20:12.022 12:48:10 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:12.022 12:48:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:12.022 12:48:11 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:12.022 12:48:11 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:12.022 12:48:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:12.022 12:48:11 -- common/autotest_common.sh@10 -- # set +x 00:20:12.022 12:48:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:12.022 12:48:11 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:20:12.022 12:48:11 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:12.022 12:48:11 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:20:12.022 12:48:11 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:12.022 12:48:11 -- host/auth.sh@44 -- # digest=sha384 00:20:12.022 12:48:11 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:12.022 12:48:11 -- host/auth.sh@44 -- # keyid=0 00:20:12.022 12:48:11 -- host/auth.sh@45 -- # key=DHHC-1:00:YTI3NGY4ZjQzYWY4ZjA0ZTM2YTgwMTNkYTVhMGY5YjK11WGv: 00:20:12.022 12:48:11 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:12.022 12:48:11 -- host/auth.sh@48 -- # echo ffdhe4096 00:20:12.022 12:48:11 -- host/auth.sh@49 -- # echo DHHC-1:00:YTI3NGY4ZjQzYWY4ZjA0ZTM2YTgwMTNkYTVhMGY5YjK11WGv: 00:20:12.022 12:48:11 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 0 00:20:12.022 12:48:11 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:12.022 12:48:11 -- host/auth.sh@68 -- # digest=sha384 00:20:12.022 12:48:11 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:20:12.022 12:48:11 -- host/auth.sh@68 -- # keyid=0 00:20:12.022 12:48:11 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:12.022 12:48:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:12.022 12:48:11 -- common/autotest_common.sh@10 -- # set +x 00:20:12.022 12:48:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:12.022 12:48:11 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:12.022 12:48:11 -- 
nvmf/common.sh@717 -- # local ip 00:20:12.022 12:48:11 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:12.022 12:48:11 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:12.022 12:48:11 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:12.022 12:48:11 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:12.022 12:48:11 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:12.022 12:48:11 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:12.022 12:48:11 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:12.022 12:48:11 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:12.022 12:48:11 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:12.022 12:48:11 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:20:12.022 12:48:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:12.022 12:48:11 -- common/autotest_common.sh@10 -- # set +x 00:20:12.280 nvme0n1 00:20:12.280 12:48:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:12.280 12:48:11 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:12.280 12:48:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:12.280 12:48:11 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:12.280 12:48:11 -- common/autotest_common.sh@10 -- # set +x 00:20:12.280 12:48:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:12.538 12:48:11 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:12.538 12:48:11 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:12.538 12:48:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:12.538 12:48:11 -- common/autotest_common.sh@10 -- # set +x 00:20:12.538 12:48:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:12.538 12:48:11 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:12.538 12:48:11 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:20:12.538 12:48:11 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:12.538 12:48:11 -- host/auth.sh@44 -- # digest=sha384 00:20:12.538 12:48:11 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:12.538 12:48:11 -- host/auth.sh@44 -- # keyid=1 00:20:12.538 12:48:11 -- host/auth.sh@45 -- # key=DHHC-1:00:YzA3ZGVkYzljMjBiM2VkMDVkY2UzMTU4MWNhNDE5NWM5NTYzNjg0YWFjZDY3YTI1H8zpJw==: 00:20:12.538 12:48:11 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:12.538 12:48:11 -- host/auth.sh@48 -- # echo ffdhe4096 00:20:12.538 12:48:11 -- host/auth.sh@49 -- # echo DHHC-1:00:YzA3ZGVkYzljMjBiM2VkMDVkY2UzMTU4MWNhNDE5NWM5NTYzNjg0YWFjZDY3YTI1H8zpJw==: 00:20:12.538 12:48:11 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 1 00:20:12.538 12:48:11 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:12.538 12:48:11 -- host/auth.sh@68 -- # digest=sha384 00:20:12.538 12:48:11 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:20:12.538 12:48:11 -- host/auth.sh@68 -- # keyid=1 00:20:12.538 12:48:11 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:12.538 12:48:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:12.538 12:48:11 -- common/autotest_common.sh@10 -- # set +x 00:20:12.538 12:48:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:12.538 12:48:11 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:12.538 12:48:11 -- nvmf/common.sh@717 -- # local ip 00:20:12.538 12:48:11 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:12.538 12:48:11 
-- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:12.538 12:48:11 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:12.538 12:48:11 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:12.538 12:48:11 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:12.538 12:48:11 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:12.538 12:48:11 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:12.538 12:48:11 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:12.538 12:48:11 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:12.538 12:48:11 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:20:12.538 12:48:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:12.538 12:48:11 -- common/autotest_common.sh@10 -- # set +x 00:20:12.796 nvme0n1 00:20:12.796 12:48:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:12.796 12:48:11 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:12.796 12:48:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:12.796 12:48:11 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:12.796 12:48:11 -- common/autotest_common.sh@10 -- # set +x 00:20:12.796 12:48:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:12.796 12:48:11 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:12.796 12:48:11 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:12.796 12:48:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:12.796 12:48:11 -- common/autotest_common.sh@10 -- # set +x 00:20:12.796 12:48:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:12.796 12:48:11 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:12.796 12:48:11 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:20:12.796 12:48:11 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:12.796 12:48:11 -- host/auth.sh@44 -- # digest=sha384 00:20:12.796 12:48:11 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:12.796 12:48:11 -- host/auth.sh@44 -- # keyid=2 00:20:12.796 12:48:11 -- host/auth.sh@45 -- # key=DHHC-1:01:NDQyODc5MDE5MGQwNjMyZjAyNmE1M2FjYWMyODQ1YTVirL6m: 00:20:12.796 12:48:11 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:12.796 12:48:11 -- host/auth.sh@48 -- # echo ffdhe4096 00:20:12.796 12:48:11 -- host/auth.sh@49 -- # echo DHHC-1:01:NDQyODc5MDE5MGQwNjMyZjAyNmE1M2FjYWMyODQ1YTVirL6m: 00:20:12.796 12:48:11 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 2 00:20:12.796 12:48:11 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:12.796 12:48:11 -- host/auth.sh@68 -- # digest=sha384 00:20:12.796 12:48:11 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:20:12.796 12:48:11 -- host/auth.sh@68 -- # keyid=2 00:20:12.796 12:48:11 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:12.796 12:48:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:12.796 12:48:11 -- common/autotest_common.sh@10 -- # set +x 00:20:12.796 12:48:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:12.796 12:48:11 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:12.796 12:48:11 -- nvmf/common.sh@717 -- # local ip 00:20:12.796 12:48:11 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:12.796 12:48:11 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:12.796 12:48:11 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:12.796 12:48:11 -- 
nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:12.796 12:48:11 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:12.796 12:48:11 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:12.796 12:48:11 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:12.796 12:48:11 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:12.796 12:48:11 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:12.796 12:48:11 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:12.796 12:48:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:12.796 12:48:11 -- common/autotest_common.sh@10 -- # set +x 00:20:13.054 nvme0n1 00:20:13.054 12:48:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:13.054 12:48:12 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:13.054 12:48:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:13.054 12:48:12 -- common/autotest_common.sh@10 -- # set +x 00:20:13.054 12:48:12 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:13.054 12:48:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:13.054 12:48:12 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:13.054 12:48:12 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:13.054 12:48:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:13.054 12:48:12 -- common/autotest_common.sh@10 -- # set +x 00:20:13.311 12:48:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:13.311 12:48:12 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:13.311 12:48:12 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:20:13.311 12:48:12 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:13.311 12:48:12 -- host/auth.sh@44 -- # digest=sha384 00:20:13.311 12:48:12 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:13.311 12:48:12 -- host/auth.sh@44 -- # keyid=3 00:20:13.311 12:48:12 -- host/auth.sh@45 -- # key=DHHC-1:02:Y2FiM2ZkZTFiNTA2ZWM2ODhjMGFkNTYwOGNlZTA3MzQzMDI1YmNmMjc5ZTkzYTlji3mYsw==: 00:20:13.311 12:48:12 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:13.311 12:48:12 -- host/auth.sh@48 -- # echo ffdhe4096 00:20:13.311 12:48:12 -- host/auth.sh@49 -- # echo DHHC-1:02:Y2FiM2ZkZTFiNTA2ZWM2ODhjMGFkNTYwOGNlZTA3MzQzMDI1YmNmMjc5ZTkzYTlji3mYsw==: 00:20:13.311 12:48:12 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 3 00:20:13.311 12:48:12 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:13.311 12:48:12 -- host/auth.sh@68 -- # digest=sha384 00:20:13.311 12:48:12 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:20:13.312 12:48:12 -- host/auth.sh@68 -- # keyid=3 00:20:13.312 12:48:12 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:13.312 12:48:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:13.312 12:48:12 -- common/autotest_common.sh@10 -- # set +x 00:20:13.312 12:48:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:13.312 12:48:12 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:13.312 12:48:12 -- nvmf/common.sh@717 -- # local ip 00:20:13.312 12:48:12 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:13.312 12:48:12 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:13.312 12:48:12 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:13.312 12:48:12 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:13.312 12:48:12 -- nvmf/common.sh@723 -- # [[ -z 
tcp ]] 00:20:13.312 12:48:12 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:13.312 12:48:12 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:13.312 12:48:12 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:13.312 12:48:12 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:13.312 12:48:12 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:20:13.312 12:48:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:13.312 12:48:12 -- common/autotest_common.sh@10 -- # set +x 00:20:13.570 nvme0n1 00:20:13.570 12:48:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:13.570 12:48:12 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:13.570 12:48:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:13.570 12:48:12 -- common/autotest_common.sh@10 -- # set +x 00:20:13.570 12:48:12 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:13.570 12:48:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:13.570 12:48:12 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:13.570 12:48:12 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:13.570 12:48:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:13.570 12:48:12 -- common/autotest_common.sh@10 -- # set +x 00:20:13.570 12:48:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:13.570 12:48:12 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:13.570 12:48:12 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:20:13.570 12:48:12 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:13.570 12:48:12 -- host/auth.sh@44 -- # digest=sha384 00:20:13.570 12:48:12 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:13.570 12:48:12 -- host/auth.sh@44 -- # keyid=4 00:20:13.570 12:48:12 -- host/auth.sh@45 -- # key=DHHC-1:03:Y2Y5MjAwNDVjYjhjYzEwMjhjOWVkNDA0MDQwYjJhZDQ2NTMwNDM3NmVlOWE1NDIyMzVjNThjNzIzY2UzNGMwNOLe90I=: 00:20:13.570 12:48:12 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:13.570 12:48:12 -- host/auth.sh@48 -- # echo ffdhe4096 00:20:13.570 12:48:12 -- host/auth.sh@49 -- # echo DHHC-1:03:Y2Y5MjAwNDVjYjhjYzEwMjhjOWVkNDA0MDQwYjJhZDQ2NTMwNDM3NmVlOWE1NDIyMzVjNThjNzIzY2UzNGMwNOLe90I=: 00:20:13.570 12:48:12 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 4 00:20:13.570 12:48:12 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:13.570 12:48:12 -- host/auth.sh@68 -- # digest=sha384 00:20:13.570 12:48:12 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:20:13.570 12:48:12 -- host/auth.sh@68 -- # keyid=4 00:20:13.570 12:48:12 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:13.570 12:48:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:13.570 12:48:12 -- common/autotest_common.sh@10 -- # set +x 00:20:13.570 12:48:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:13.570 12:48:12 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:13.570 12:48:12 -- nvmf/common.sh@717 -- # local ip 00:20:13.570 12:48:12 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:13.570 12:48:12 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:13.570 12:48:12 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:13.570 12:48:12 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:13.570 12:48:12 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:13.570 12:48:12 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP 
]] 00:20:13.570 12:48:12 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:13.570 12:48:12 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:13.570 12:48:12 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:13.570 12:48:12 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:13.570 12:48:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:13.570 12:48:12 -- common/autotest_common.sh@10 -- # set +x 00:20:13.828 nvme0n1 00:20:13.828 12:48:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:13.828 12:48:12 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:13.828 12:48:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:13.828 12:48:12 -- common/autotest_common.sh@10 -- # set +x 00:20:13.828 12:48:12 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:14.086 12:48:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:14.086 12:48:12 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:14.086 12:48:12 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:14.086 12:48:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:14.086 12:48:12 -- common/autotest_common.sh@10 -- # set +x 00:20:14.086 12:48:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:14.086 12:48:12 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:20:14.086 12:48:12 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:14.086 12:48:12 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:20:14.086 12:48:12 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:14.086 12:48:12 -- host/auth.sh@44 -- # digest=sha384 00:20:14.086 12:48:12 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:14.086 12:48:12 -- host/auth.sh@44 -- # keyid=0 00:20:14.086 12:48:12 -- host/auth.sh@45 -- # key=DHHC-1:00:YTI3NGY4ZjQzYWY4ZjA0ZTM2YTgwMTNkYTVhMGY5YjK11WGv: 00:20:14.086 12:48:12 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:14.086 12:48:12 -- host/auth.sh@48 -- # echo ffdhe6144 00:20:14.086 12:48:12 -- host/auth.sh@49 -- # echo DHHC-1:00:YTI3NGY4ZjQzYWY4ZjA0ZTM2YTgwMTNkYTVhMGY5YjK11WGv: 00:20:14.086 12:48:12 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 0 00:20:14.086 12:48:12 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:14.086 12:48:12 -- host/auth.sh@68 -- # digest=sha384 00:20:14.086 12:48:12 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:20:14.086 12:48:12 -- host/auth.sh@68 -- # keyid=0 00:20:14.086 12:48:12 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:14.086 12:48:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:14.086 12:48:12 -- common/autotest_common.sh@10 -- # set +x 00:20:14.086 12:48:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:14.086 12:48:12 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:14.086 12:48:12 -- nvmf/common.sh@717 -- # local ip 00:20:14.086 12:48:12 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:14.086 12:48:12 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:14.086 12:48:12 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:14.086 12:48:12 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:14.086 12:48:12 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:14.086 12:48:12 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:14.086 12:48:12 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:14.086 
12:48:12 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:14.086 12:48:12 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:14.086 12:48:12 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:20:14.086 12:48:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:14.086 12:48:12 -- common/autotest_common.sh@10 -- # set +x 00:20:14.652 nvme0n1 00:20:14.652 12:48:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:14.652 12:48:13 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:14.652 12:48:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:14.652 12:48:13 -- common/autotest_common.sh@10 -- # set +x 00:20:14.652 12:48:13 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:14.652 12:48:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:14.652 12:48:13 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:14.652 12:48:13 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:14.652 12:48:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:14.652 12:48:13 -- common/autotest_common.sh@10 -- # set +x 00:20:14.652 12:48:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:14.652 12:48:13 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:14.652 12:48:13 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:20:14.652 12:48:13 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:14.652 12:48:13 -- host/auth.sh@44 -- # digest=sha384 00:20:14.652 12:48:13 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:14.652 12:48:13 -- host/auth.sh@44 -- # keyid=1 00:20:14.652 12:48:13 -- host/auth.sh@45 -- # key=DHHC-1:00:YzA3ZGVkYzljMjBiM2VkMDVkY2UzMTU4MWNhNDE5NWM5NTYzNjg0YWFjZDY3YTI1H8zpJw==: 00:20:14.652 12:48:13 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:14.652 12:48:13 -- host/auth.sh@48 -- # echo ffdhe6144 00:20:14.652 12:48:13 -- host/auth.sh@49 -- # echo DHHC-1:00:YzA3ZGVkYzljMjBiM2VkMDVkY2UzMTU4MWNhNDE5NWM5NTYzNjg0YWFjZDY3YTI1H8zpJw==: 00:20:14.652 12:48:13 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 1 00:20:14.652 12:48:13 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:14.652 12:48:13 -- host/auth.sh@68 -- # digest=sha384 00:20:14.652 12:48:13 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:20:14.652 12:48:13 -- host/auth.sh@68 -- # keyid=1 00:20:14.652 12:48:13 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:14.652 12:48:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:14.652 12:48:13 -- common/autotest_common.sh@10 -- # set +x 00:20:14.652 12:48:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:14.652 12:48:13 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:14.652 12:48:13 -- nvmf/common.sh@717 -- # local ip 00:20:14.652 12:48:13 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:14.652 12:48:13 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:14.652 12:48:13 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:14.652 12:48:13 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:14.652 12:48:13 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:14.652 12:48:13 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:14.652 12:48:13 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:14.652 12:48:13 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:14.652 12:48:13 -- nvmf/common.sh@731 -- # echo 10.0.0.1 
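The nvmf/common.sh@717-@731 frames that precede every attach are get_main_ns_ip resolving which address to dial. Reconstructed from the trace (the transport variable's name is a guess; only its value, tcp, is visible):

get_main_ns_ip() {
    local ip
    local -A ip_candidates=(
        ["rdma"]=NVMF_FIRST_TARGET_IP
        ["tcp"]=NVMF_INITIATOR_IP
    )
    [[ -z $TEST_TRANSPORT ]] && return 1   # assumed name; 'tcp' here
    ip=${ip_candidates[$TEST_TRANSPORT]}
    [[ -z $ip || -z ${!ip} ]] && return 1  # indirect expansion of the env var
    echo "${!ip}"                          # prints 10.0.0.1 in this run
}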
00:20:14.652 12:48:13 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:20:14.652 12:48:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:14.652 12:48:13 -- common/autotest_common.sh@10 -- # set +x 00:20:15.217 nvme0n1 00:20:15.217 12:48:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:15.217 12:48:14 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:15.217 12:48:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:15.217 12:48:14 -- common/autotest_common.sh@10 -- # set +x 00:20:15.217 12:48:14 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:15.217 12:48:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:15.217 12:48:14 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:15.217 12:48:14 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:15.217 12:48:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:15.217 12:48:14 -- common/autotest_common.sh@10 -- # set +x 00:20:15.217 12:48:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:15.217 12:48:14 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:15.217 12:48:14 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:20:15.217 12:48:14 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:15.217 12:48:14 -- host/auth.sh@44 -- # digest=sha384 00:20:15.217 12:48:14 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:15.217 12:48:14 -- host/auth.sh@44 -- # keyid=2 00:20:15.217 12:48:14 -- host/auth.sh@45 -- # key=DHHC-1:01:NDQyODc5MDE5MGQwNjMyZjAyNmE1M2FjYWMyODQ1YTVirL6m: 00:20:15.217 12:48:14 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:15.217 12:48:14 -- host/auth.sh@48 -- # echo ffdhe6144 00:20:15.217 12:48:14 -- host/auth.sh@49 -- # echo DHHC-1:01:NDQyODc5MDE5MGQwNjMyZjAyNmE1M2FjYWMyODQ1YTVirL6m: 00:20:15.217 12:48:14 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 2 00:20:15.217 12:48:14 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:15.217 12:48:14 -- host/auth.sh@68 -- # digest=sha384 00:20:15.217 12:48:14 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:20:15.217 12:48:14 -- host/auth.sh@68 -- # keyid=2 00:20:15.217 12:48:14 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:15.217 12:48:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:15.217 12:48:14 -- common/autotest_common.sh@10 -- # set +x 00:20:15.217 12:48:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:15.217 12:48:14 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:15.217 12:48:14 -- nvmf/common.sh@717 -- # local ip 00:20:15.217 12:48:14 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:15.217 12:48:14 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:15.217 12:48:14 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:15.217 12:48:14 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:15.217 12:48:14 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:15.217 12:48:14 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:15.217 12:48:14 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:15.217 12:48:14 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:15.217 12:48:14 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:15.217 12:48:14 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:15.217 12:48:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:15.217 12:48:14 -- common/autotest_common.sh@10 -- # set +x 00:20:15.785 nvme0n1 00:20:15.785 12:48:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:15.785 12:48:14 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:15.785 12:48:14 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:15.785 12:48:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:15.785 12:48:14 -- common/autotest_common.sh@10 -- # set +x 00:20:15.785 12:48:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:16.057 12:48:14 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:16.057 12:48:14 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:16.057 12:48:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:16.057 12:48:14 -- common/autotest_common.sh@10 -- # set +x 00:20:16.057 12:48:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:16.057 12:48:14 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:16.057 12:48:14 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:20:16.057 12:48:14 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:16.057 12:48:14 -- host/auth.sh@44 -- # digest=sha384 00:20:16.057 12:48:14 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:16.057 12:48:14 -- host/auth.sh@44 -- # keyid=3 00:20:16.057 12:48:14 -- host/auth.sh@45 -- # key=DHHC-1:02:Y2FiM2ZkZTFiNTA2ZWM2ODhjMGFkNTYwOGNlZTA3MzQzMDI1YmNmMjc5ZTkzYTlji3mYsw==: 00:20:16.057 12:48:14 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:16.057 12:48:14 -- host/auth.sh@48 -- # echo ffdhe6144 00:20:16.058 12:48:14 -- host/auth.sh@49 -- # echo DHHC-1:02:Y2FiM2ZkZTFiNTA2ZWM2ODhjMGFkNTYwOGNlZTA3MzQzMDI1YmNmMjc5ZTkzYTlji3mYsw==: 00:20:16.058 12:48:14 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 3 00:20:16.058 12:48:14 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:16.058 12:48:14 -- host/auth.sh@68 -- # digest=sha384 00:20:16.058 12:48:14 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:20:16.058 12:48:14 -- host/auth.sh@68 -- # keyid=3 00:20:16.058 12:48:14 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:16.058 12:48:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:16.058 12:48:14 -- common/autotest_common.sh@10 -- # set +x 00:20:16.058 12:48:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:16.058 12:48:14 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:16.058 12:48:14 -- nvmf/common.sh@717 -- # local ip 00:20:16.058 12:48:14 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:16.058 12:48:14 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:16.058 12:48:14 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:16.058 12:48:14 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:16.058 12:48:14 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:16.058 12:48:14 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:16.058 12:48:14 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:16.058 12:48:14 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:16.058 12:48:14 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:16.058 12:48:14 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:20:16.058 12:48:14 -- common/autotest_common.sh@549 -- # xtrace_disable 
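A note on the DHHC-1 strings echoed throughout: per the NVMe in-band authentication secret format (the same one nvme-cli's gen-dhchap-key emits), they read DHHC-1:<t>:<base64>:, where <t> selects the secret transformation (00 = none, 01/02/03 = SHA-256/384/512) and the base64 payload is the secret followed by a 4-byte CRC-32 of the secret. A quick check against key 0 from this run:

key='DHHC-1:00:YTI3NGY4ZjQzYWY4ZjA0ZTM2YTgwMTNkYTVhMGY5YjK11WGv:'
b64=${key#DHHC-1:*:}                # drop the 'DHHC-1:<t>:' prefix
b64=${b64%:}                        # and the trailing colon
echo -n "$b64" | base64 -d | wc -c  # 36 bytes: a 32-byte secret plus CRC-32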
00:20:16.058 12:48:14 -- common/autotest_common.sh@10 -- # set +x 00:20:16.623 nvme0n1 00:20:16.623 12:48:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:16.623 12:48:15 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:16.623 12:48:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:16.623 12:48:15 -- common/autotest_common.sh@10 -- # set +x 00:20:16.623 12:48:15 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:16.623 12:48:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:16.623 12:48:15 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:16.623 12:48:15 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:16.623 12:48:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:16.623 12:48:15 -- common/autotest_common.sh@10 -- # set +x 00:20:16.623 12:48:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:16.623 12:48:15 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:16.623 12:48:15 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:20:16.623 12:48:15 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:16.623 12:48:15 -- host/auth.sh@44 -- # digest=sha384 00:20:16.623 12:48:15 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:16.623 12:48:15 -- host/auth.sh@44 -- # keyid=4 00:20:16.623 12:48:15 -- host/auth.sh@45 -- # key=DHHC-1:03:Y2Y5MjAwNDVjYjhjYzEwMjhjOWVkNDA0MDQwYjJhZDQ2NTMwNDM3NmVlOWE1NDIyMzVjNThjNzIzY2UzNGMwNOLe90I=: 00:20:16.623 12:48:15 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:16.623 12:48:15 -- host/auth.sh@48 -- # echo ffdhe6144 00:20:16.623 12:48:15 -- host/auth.sh@49 -- # echo DHHC-1:03:Y2Y5MjAwNDVjYjhjYzEwMjhjOWVkNDA0MDQwYjJhZDQ2NTMwNDM3NmVlOWE1NDIyMzVjNThjNzIzY2UzNGMwNOLe90I=: 00:20:16.623 12:48:15 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 4 00:20:16.624 12:48:15 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:16.624 12:48:15 -- host/auth.sh@68 -- # digest=sha384 00:20:16.624 12:48:15 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:20:16.624 12:48:15 -- host/auth.sh@68 -- # keyid=4 00:20:16.624 12:48:15 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:16.624 12:48:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:16.624 12:48:15 -- common/autotest_common.sh@10 -- # set +x 00:20:16.624 12:48:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:16.624 12:48:15 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:16.624 12:48:15 -- nvmf/common.sh@717 -- # local ip 00:20:16.624 12:48:15 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:16.624 12:48:15 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:16.624 12:48:15 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:16.624 12:48:15 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:16.624 12:48:15 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:16.624 12:48:15 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:16.624 12:48:15 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:16.624 12:48:15 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:16.624 12:48:15 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:16.624 12:48:15 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:16.624 12:48:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:16.624 12:48:15 -- common/autotest_common.sh@10 -- # set +x 00:20:17.189 
nvme0n1 00:20:17.189 12:48:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:17.189 12:48:16 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:17.189 12:48:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:17.189 12:48:16 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:17.189 12:48:16 -- common/autotest_common.sh@10 -- # set +x 00:20:17.189 12:48:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:17.189 12:48:16 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:17.189 12:48:16 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:17.189 12:48:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:17.189 12:48:16 -- common/autotest_common.sh@10 -- # set +x 00:20:17.189 12:48:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:17.189 12:48:16 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:20:17.189 12:48:16 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:17.189 12:48:16 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:20:17.189 12:48:16 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:17.189 12:48:16 -- host/auth.sh@44 -- # digest=sha384 00:20:17.189 12:48:16 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:17.189 12:48:16 -- host/auth.sh@44 -- # keyid=0 00:20:17.189 12:48:16 -- host/auth.sh@45 -- # key=DHHC-1:00:YTI3NGY4ZjQzYWY4ZjA0ZTM2YTgwMTNkYTVhMGY5YjK11WGv: 00:20:17.189 12:48:16 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:17.189 12:48:16 -- host/auth.sh@48 -- # echo ffdhe8192 00:20:17.189 12:48:16 -- host/auth.sh@49 -- # echo DHHC-1:00:YTI3NGY4ZjQzYWY4ZjA0ZTM2YTgwMTNkYTVhMGY5YjK11WGv: 00:20:17.189 12:48:16 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 0 00:20:17.189 12:48:16 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:17.189 12:48:16 -- host/auth.sh@68 -- # digest=sha384 00:20:17.189 12:48:16 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:20:17.189 12:48:16 -- host/auth.sh@68 -- # keyid=0 00:20:17.189 12:48:16 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:17.189 12:48:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:17.189 12:48:16 -- common/autotest_common.sh@10 -- # set +x 00:20:17.189 12:48:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:17.189 12:48:16 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:17.189 12:48:16 -- nvmf/common.sh@717 -- # local ip 00:20:17.189 12:48:16 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:17.189 12:48:16 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:17.189 12:48:16 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:17.189 12:48:16 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:17.189 12:48:16 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:17.189 12:48:16 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:17.189 12:48:16 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:17.189 12:48:16 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:17.189 12:48:16 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:17.189 12:48:16 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:20:17.189 12:48:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:17.189 12:48:16 -- common/autotest_common.sh@10 -- # set +x 00:20:18.122 nvme0n1 00:20:18.122 12:48:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
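With ffdhe8192 the sweep reaches the last DH group in this run. The @108-@111 frames are the driver loops; a sketch of the matrix as observed in this section (sha384 is the digest in flight, and keys[0..4] hold the five DHHC-1 strings):

dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
for dhgroup in "${dhgroups[@]}"; do        # host/auth.sh@108
    for keyid in "${!keys[@]}"; do         # host/auth.sh@109
        nvmet_auth_set_key sha384 "$dhgroup" "$keyid" "${keys[$keyid]}"  # @110
        connect_authenticate sha384 "$dhgroup" "$keyid"                  # @111
    done
done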
00:20:18.122 12:48:17 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:20:18.122 12:48:17 -- host/auth.sh@73 -- # jq -r '.[].name'
00:20:18.122 12:48:17 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:18.122 12:48:17 -- common/autotest_common.sh@10 -- # set +x
00:20:18.122 12:48:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:18.380 12:48:17 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:18.380 12:48:17 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:20:18.380 12:48:17 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:18.380 12:48:17 -- common/autotest_common.sh@10 -- # set +x
00:20:18.380 12:48:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:18.380 12:48:17 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:20:18.380 12:48:17 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 1
00:20:18.380 12:48:17 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:20:18.380 12:48:17 -- host/auth.sh@44 -- # digest=sha384
00:20:18.380 12:48:17 -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:20:18.380 12:48:17 -- host/auth.sh@44 -- # keyid=1
00:20:18.380 12:48:17 -- host/auth.sh@45 -- # key=DHHC-1:00:YzA3ZGVkYzljMjBiM2VkMDVkY2UzMTU4MWNhNDE5NWM5NTYzNjg0YWFjZDY3YTI1H8zpJw==:
00:20:18.380 12:48:17 -- host/auth.sh@47 -- # echo 'hmac(sha384)'
00:20:18.380 12:48:17 -- host/auth.sh@48 -- # echo ffdhe8192
00:20:18.380 12:48:17 -- host/auth.sh@49 -- # echo DHHC-1:00:YzA3ZGVkYzljMjBiM2VkMDVkY2UzMTU4MWNhNDE5NWM5NTYzNjg0YWFjZDY3YTI1H8zpJw==:
00:20:18.380 12:48:17 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 1
00:20:18.380 12:48:17 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:20:18.380 12:48:17 -- host/auth.sh@68 -- # digest=sha384
00:20:18.380 12:48:17 -- host/auth.sh@68 -- # dhgroup=ffdhe8192
00:20:18.380 12:48:17 -- host/auth.sh@68 -- # keyid=1
00:20:18.380 12:48:17 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:20:18.380 12:48:17 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:18.380 12:48:17 -- common/autotest_common.sh@10 -- # set +x
00:20:18.380 12:48:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:18.380 12:48:17 -- host/auth.sh@70 -- # get_main_ns_ip
00:20:18.380 12:48:17 -- nvmf/common.sh@717 -- # local ip
00:20:18.380 12:48:17 -- nvmf/common.sh@718 -- # ip_candidates=()
00:20:18.380 12:48:17 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:20:18.380 12:48:17 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:20:18.380 12:48:17 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:20:18.380 12:48:17 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:20:18.380 12:48:17 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:20:18.380 12:48:17 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:20:18.380 12:48:17 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:20:18.380 12:48:17 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:20:18.380 12:48:17 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1
00:20:18.380 12:48:17 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:18.380 12:48:17 -- common/autotest_common.sh@10 -- # set +x
00:20:19.313 nvme0n1
00:20:19.313 12:48:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:19.313 12:48:18 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:20:19.313 12:48:18 -- host/auth.sh@73 -- # jq -r '.[].name'
00:20:19.313 12:48:18 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:19.313 12:48:18 -- common/autotest_common.sh@10 -- # set +x
00:20:19.313 12:48:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:19.313 12:48:18 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:19.313 12:48:18 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:20:19.313 12:48:18 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:19.313 12:48:18 -- common/autotest_common.sh@10 -- # set +x
00:20:19.313 12:48:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:19.313 12:48:18 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:20:19.314 12:48:18 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 2
00:20:19.314 12:48:18 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:20:19.314 12:48:18 -- host/auth.sh@44 -- # digest=sha384
00:20:19.314 12:48:18 -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:20:19.314 12:48:18 -- host/auth.sh@44 -- # keyid=2
00:20:19.314 12:48:18 -- host/auth.sh@45 -- # key=DHHC-1:01:NDQyODc5MDE5MGQwNjMyZjAyNmE1M2FjYWMyODQ1YTVirL6m:
00:20:19.314 12:48:18 -- host/auth.sh@47 -- # echo 'hmac(sha384)'
00:20:19.314 12:48:18 -- host/auth.sh@48 -- # echo ffdhe8192
00:20:19.314 12:48:18 -- host/auth.sh@49 -- # echo DHHC-1:01:NDQyODc5MDE5MGQwNjMyZjAyNmE1M2FjYWMyODQ1YTVirL6m:
00:20:19.314 12:48:18 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 2
00:20:19.314 12:48:18 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:20:19.314 12:48:18 -- host/auth.sh@68 -- # digest=sha384
00:20:19.314 12:48:18 -- host/auth.sh@68 -- # dhgroup=ffdhe8192
00:20:19.314 12:48:18 -- host/auth.sh@68 -- # keyid=2
00:20:19.314 12:48:18 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:20:19.314 12:48:18 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:19.314 12:48:18 -- common/autotest_common.sh@10 -- # set +x
00:20:19.314 12:48:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:19.314 12:48:18 -- host/auth.sh@70 -- # get_main_ns_ip
00:20:19.314 12:48:18 -- nvmf/common.sh@717 -- # local ip
00:20:19.314 12:48:18 -- nvmf/common.sh@718 -- # ip_candidates=()
00:20:19.314 12:48:18 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:20:19.314 12:48:18 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:20:19.314 12:48:18 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:20:19.314 12:48:18 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:20:19.314 12:48:18 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:20:19.314 12:48:18 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:20:19.314 12:48:18 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:20:19.314 12:48:18 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:20:19.314 12:48:18 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:20:19.314 12:48:18 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:19.314 12:48:18 -- common/autotest_common.sh@10 -- # set +x
00:20:20.249 nvme0n1
00:20:20.249 12:48:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:20.249 12:48:19 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:20:20.249 12:48:19 -- host/auth.sh@73 -- # jq -r '.[].name'
00:20:20.249 12:48:19 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:20.249 12:48:19 -- common/autotest_common.sh@10 -- # set +x
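The block above is one full pass of the test's inner loop: nvmet_auth_set_key programs the kernel target with a digest, an FFDHE group and one of the five DHHC-1 secrets, and connect_authenticate then proves that an initiator holding the matching key can attach. The xtrace only shows the three echo statements of nvmet_auth_set_key (host/auth.sh lines 47-49), not their redirect targets; the sketch below fills those in with the standard kernel nvmet configfs attributes, which is an assumption about this build of host/auth.sh rather than its literal source:

    # Hypothetical reconstruction of nvmet_auth_set_key (host/auth.sh lines 42-49).
    # The configfs paths are assumed -- redirections are never echoed by xtrace.
    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local key=${keys[$keyid]}                       # e.g. DHHC-1:00:...
        local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
        echo "hmac($digest)" > "$host/dhchap_hash"      # trace line @47
        echo "$dhgroup" > "$host/dhchap_dhgroup"        # trace line @48
        echo "$key" > "$host/dhchap_key"                # trace line @49
    }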
00:20:20.249 12:48:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:20.249 12:48:19 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:20.249 12:48:19 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:20:20.249 12:48:19 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:20.249 12:48:19 -- common/autotest_common.sh@10 -- # set +x
00:20:20.508 12:48:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:20.508 12:48:19 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:20:20.508 12:48:19 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 3
00:20:20.508 12:48:19 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:20:20.508 12:48:19 -- host/auth.sh@44 -- # digest=sha384
00:20:20.508 12:48:19 -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:20:20.508 12:48:19 -- host/auth.sh@44 -- # keyid=3
00:20:20.508 12:48:19 -- host/auth.sh@45 -- # key=DHHC-1:02:Y2FiM2ZkZTFiNTA2ZWM2ODhjMGFkNTYwOGNlZTA3MzQzMDI1YmNmMjc5ZTkzYTlji3mYsw==:
00:20:20.508 12:48:19 -- host/auth.sh@47 -- # echo 'hmac(sha384)'
00:20:20.508 12:48:19 -- host/auth.sh@48 -- # echo ffdhe8192
00:20:20.508 12:48:19 -- host/auth.sh@49 -- # echo DHHC-1:02:Y2FiM2ZkZTFiNTA2ZWM2ODhjMGFkNTYwOGNlZTA3MzQzMDI1YmNmMjc5ZTkzYTlji3mYsw==:
00:20:20.508 12:48:19 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 3
00:20:20.508 12:48:19 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:20:20.508 12:48:19 -- host/auth.sh@68 -- # digest=sha384
00:20:20.508 12:48:19 -- host/auth.sh@68 -- # dhgroup=ffdhe8192
00:20:20.508 12:48:19 -- host/auth.sh@68 -- # keyid=3
00:20:20.508 12:48:19 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:20:20.508 12:48:19 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:20.508 12:48:19 -- common/autotest_common.sh@10 -- # set +x
00:20:20.508 12:48:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:20.508 12:48:19 -- host/auth.sh@70 -- # get_main_ns_ip
00:20:20.508 12:48:19 -- nvmf/common.sh@717 -- # local ip
00:20:20.508 12:48:19 -- nvmf/common.sh@718 -- # ip_candidates=()
00:20:20.508 12:48:19 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:20:20.508 12:48:19 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:20:20.508 12:48:19 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:20:20.508 12:48:19 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:20:20.508 12:48:19 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:20:20.508 12:48:19 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:20:20.508 12:48:19 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:20:20.508 12:48:19 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:20:20.508 12:48:19 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3
00:20:20.508 12:48:19 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:20.508 12:48:19 -- common/autotest_common.sh@10 -- # set +x
00:20:21.441 nvme0n1
00:20:21.441 12:48:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:21.441 12:48:20 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:20:21.441 12:48:20 -- host/auth.sh@73 -- # jq -r '.[].name'
00:20:21.441 12:48:20 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:21.441 12:48:20 -- common/autotest_common.sh@10 -- # set +x
00:20:21.441 12:48:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:21.441 12:48:20 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:21.441 12:48:20 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:20:21.441 12:48:20 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:21.441 12:48:20 -- common/autotest_common.sh@10 -- # set +x
00:20:21.441 12:48:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:21.441 12:48:20 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:20:21.441 12:48:20 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 4
00:20:21.441 12:48:20 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:20:21.441 12:48:20 -- host/auth.sh@44 -- # digest=sha384
00:20:21.441 12:48:20 -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:20:21.441 12:48:20 -- host/auth.sh@44 -- # keyid=4
00:20:21.441 12:48:20 -- host/auth.sh@45 -- # key=DHHC-1:03:Y2Y5MjAwNDVjYjhjYzEwMjhjOWVkNDA0MDQwYjJhZDQ2NTMwNDM3NmVlOWE1NDIyMzVjNThjNzIzY2UzNGMwNOLe90I=:
00:20:21.441 12:48:20 -- host/auth.sh@47 -- # echo 'hmac(sha384)'
00:20:21.441 12:48:20 -- host/auth.sh@48 -- # echo ffdhe8192
00:20:21.441 12:48:20 -- host/auth.sh@49 -- # echo DHHC-1:03:Y2Y5MjAwNDVjYjhjYzEwMjhjOWVkNDA0MDQwYjJhZDQ2NTMwNDM3NmVlOWE1NDIyMzVjNThjNzIzY2UzNGMwNOLe90I=:
00:20:21.441 12:48:20 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 4
00:20:21.441 12:48:20 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:20:21.441 12:48:20 -- host/auth.sh@68 -- # digest=sha384
00:20:21.441 12:48:20 -- host/auth.sh@68 -- # dhgroup=ffdhe8192
00:20:21.441 12:48:20 -- host/auth.sh@68 -- # keyid=4
00:20:21.441 12:48:20 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:20:21.441 12:48:20 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:21.441 12:48:20 -- common/autotest_common.sh@10 -- # set +x
00:20:21.441 12:48:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:21.441 12:48:20 -- host/auth.sh@70 -- # get_main_ns_ip
00:20:21.441 12:48:20 -- nvmf/common.sh@717 -- # local ip
00:20:21.441 12:48:20 -- nvmf/common.sh@718 -- # ip_candidates=()
00:20:21.441 12:48:20 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:20:21.441 12:48:20 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:20:21.441 12:48:20 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:20:21.441 12:48:20 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:20:21.441 12:48:20 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:20:21.441 12:48:20 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:20:21.441 12:48:20 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:20:21.441 12:48:20 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:20:21.441 12:48:20 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:20:21.441 12:48:20 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:21.441 12:48:20 -- common/autotest_common.sh@10 -- # set +x
00:20:22.813 nvme0n1
00:20:22.813 12:48:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:22.813 12:48:21 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:20:22.813 12:48:21 -- host/auth.sh@73 -- # jq -r '.[].name'
00:20:22.813 12:48:21 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:22.813 12:48:21 -- common/autotest_common.sh@10 -- # set +x
00:20:22.813 12:48:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:22.813 12:48:21 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:22.813 12:48:21 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:20:22.813 12:48:21 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:22.813 12:48:21 -- common/autotest_common.sh@10 -- # set +x
00:20:22.813 12:48:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:22.813 12:48:21 -- host/auth.sh@107 -- # for digest in "${digests[@]}"
00:20:22.813 12:48:21 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}"
00:20:22.813 12:48:21 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:20:22.813 12:48:21 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 0
00:20:22.813 12:48:21 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:20:22.813 12:48:21 -- host/auth.sh@44 -- # digest=sha512
00:20:22.813 12:48:21 -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:20:22.813 12:48:21 -- host/auth.sh@44 -- # keyid=0
00:20:22.813 12:48:21 -- host/auth.sh@45 -- # key=DHHC-1:00:YTI3NGY4ZjQzYWY4ZjA0ZTM2YTgwMTNkYTVhMGY5YjK11WGv:
00:20:22.813 12:48:21 -- host/auth.sh@47 -- # echo 'hmac(sha512)'
00:20:22.813 12:48:21 -- host/auth.sh@48 -- # echo ffdhe2048
00:20:22.813 12:48:21 -- host/auth.sh@49 -- # echo DHHC-1:00:YTI3NGY4ZjQzYWY4ZjA0ZTM2YTgwMTNkYTVhMGY5YjK11WGv:
00:20:22.813 12:48:21 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 0
00:20:22.813 12:48:21 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:20:22.813 12:48:21 -- host/auth.sh@68 -- # digest=sha512
00:20:22.813 12:48:21 -- host/auth.sh@68 -- # dhgroup=ffdhe2048
00:20:22.813 12:48:21 -- host/auth.sh@68 -- # keyid=0
00:20:22.813 12:48:21 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:20:22.813 12:48:21 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:22.813 12:48:21 -- common/autotest_common.sh@10 -- # set +x
00:20:22.813 12:48:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:22.813 12:48:21 -- host/auth.sh@70 -- # get_main_ns_ip
00:20:22.813 12:48:21 -- nvmf/common.sh@717 -- # local ip
00:20:22.813 12:48:21 -- nvmf/common.sh@718 -- # ip_candidates=()
00:20:22.813 12:48:21 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:20:22.813 12:48:21 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:20:22.813 12:48:21 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:20:22.813 12:48:21 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:20:22.813 12:48:21 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:20:22.813 12:48:21 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:20:22.813 12:48:21 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:20:22.813 12:48:21 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:20:22.813 12:48:21 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0
00:20:22.813 12:48:21 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:22.813 12:48:21 -- common/autotest_common.sh@10 -- # set +x
00:20:22.813 nvme0n1
00:20:22.813 12:48:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:22.813 12:48:21 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:20:22.813 12:48:21 -- host/auth.sh@73 -- # jq -r '.[].name'
00:20:22.813 12:48:21 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:22.813 12:48:21 -- common/autotest_common.sh@10 -- # set +x
00:20:22.813 12:48:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:22.813 12:48:21 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:22.813 12:48:21 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:20:22.813 12:48:21 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:22.813 12:48:21 -- common/autotest_common.sh@10 -- # set +x
00:20:22.813 12:48:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:22.813 12:48:21 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:20:22.813 12:48:21 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 1
00:20:22.813 12:48:21 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:20:22.813 12:48:21 -- host/auth.sh@44 -- # digest=sha512
00:20:22.813 12:48:21 -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:20:22.813 12:48:21 -- host/auth.sh@44 -- # keyid=1
00:20:22.813 12:48:21 -- host/auth.sh@45 -- # key=DHHC-1:00:YzA3ZGVkYzljMjBiM2VkMDVkY2UzMTU4MWNhNDE5NWM5NTYzNjg0YWFjZDY3YTI1H8zpJw==:
00:20:22.813 12:48:21 -- host/auth.sh@47 -- # echo 'hmac(sha512)'
00:20:22.813 12:48:21 -- host/auth.sh@48 -- # echo ffdhe2048
00:20:22.813 12:48:21 -- host/auth.sh@49 -- # echo DHHC-1:00:YzA3ZGVkYzljMjBiM2VkMDVkY2UzMTU4MWNhNDE5NWM5NTYzNjg0YWFjZDY3YTI1H8zpJw==:
00:20:22.813 12:48:21 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 1
00:20:22.813 12:48:21 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:20:22.813 12:48:21 -- host/auth.sh@68 -- # digest=sha512
00:20:22.813 12:48:21 -- host/auth.sh@68 -- # dhgroup=ffdhe2048
00:20:22.813 12:48:21 -- host/auth.sh@68 -- # keyid=1
00:20:22.813 12:48:21 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:20:22.813 12:48:21 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:22.813 12:48:21 -- common/autotest_common.sh@10 -- # set +x
00:20:22.813 12:48:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:22.813 12:48:21 -- host/auth.sh@70 -- # get_main_ns_ip
00:20:22.813 12:48:21 -- nvmf/common.sh@717 -- # local ip
00:20:22.813 12:48:21 -- nvmf/common.sh@718 -- # ip_candidates=()
00:20:22.813 12:48:21 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:20:22.813 12:48:21 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:20:22.813 12:48:21 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:20:22.813 12:48:21 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:20:22.813 12:48:21 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:20:22.813 12:48:21 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:20:22.813 12:48:21 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:20:22.813 12:48:21 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:20:22.813 12:48:21 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1
00:20:22.813 12:48:21 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:22.813 12:48:21 -- common/autotest_common.sh@10 -- # set +x
00:20:23.071 nvme0n1
00:20:23.071 12:48:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:23.071 12:48:21 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:20:23.071 12:48:21 -- host/auth.sh@73 -- # jq -r '.[].name'
00:20:23.071 12:48:21 -- common/autotest_common.sh@549 -- # xtrace_disable
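Each connect_authenticate call in this log follows the same host-side recipe: restrict the initiator to one digest/DH-group pair via bdev_nvme_set_options, attach with the key under test (the bare nvme0n1 lines are the namespace appearing on success), verify the controller name, and detach. Condensed into a sketch, with get_main_ns_ip reduced to the 10.0.0.1 value it resolves on this tcp run; this is a simplification of the traced function, not its full source:

    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key$keyid"
        # the iteration passes only if the attached controller is really nvme0
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    }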
00:20:23.071 12:48:21 -- common/autotest_common.sh@10 -- # set +x
00:20:23.071 12:48:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:23.071 12:48:22 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:23.071 12:48:22 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:20:23.071 12:48:22 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:23.071 12:48:22 -- common/autotest_common.sh@10 -- # set +x
00:20:23.071 12:48:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:23.071 12:48:22 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:20:23.071 12:48:22 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 2
00:20:23.071 12:48:22 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:20:23.071 12:48:22 -- host/auth.sh@44 -- # digest=sha512
00:20:23.071 12:48:22 -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:20:23.071 12:48:22 -- host/auth.sh@44 -- # keyid=2
00:20:23.071 12:48:22 -- host/auth.sh@45 -- # key=DHHC-1:01:NDQyODc5MDE5MGQwNjMyZjAyNmE1M2FjYWMyODQ1YTVirL6m:
00:20:23.071 12:48:22 -- host/auth.sh@47 -- # echo 'hmac(sha512)'
00:20:23.071 12:48:22 -- host/auth.sh@48 -- # echo ffdhe2048
00:20:23.071 12:48:22 -- host/auth.sh@49 -- # echo DHHC-1:01:NDQyODc5MDE5MGQwNjMyZjAyNmE1M2FjYWMyODQ1YTVirL6m:
00:20:23.071 12:48:22 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 2
00:20:23.071 12:48:22 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:20:23.071 12:48:22 -- host/auth.sh@68 -- # digest=sha512
00:20:23.071 12:48:22 -- host/auth.sh@68 -- # dhgroup=ffdhe2048
00:20:23.071 12:48:22 -- host/auth.sh@68 -- # keyid=2
00:20:23.071 12:48:22 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:20:23.071 12:48:22 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:23.071 12:48:22 -- common/autotest_common.sh@10 -- # set +x
00:20:23.071 12:48:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:23.071 12:48:22 -- host/auth.sh@70 -- # get_main_ns_ip
00:20:23.071 12:48:22 -- nvmf/common.sh@717 -- # local ip
00:20:23.071 12:48:22 -- nvmf/common.sh@718 -- # ip_candidates=()
00:20:23.071 12:48:22 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:20:23.071 12:48:22 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:20:23.071 12:48:22 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:20:23.071 12:48:22 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:20:23.071 12:48:22 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:20:23.071 12:48:22 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:20:23.071 12:48:22 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:20:23.071 12:48:22 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:20:23.071 12:48:22 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:20:23.071 12:48:22 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:23.071 12:48:22 -- common/autotest_common.sh@10 -- # set +x
00:20:23.329 nvme0n1
00:20:23.329 12:48:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:23.329 12:48:22 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:20:23.329 12:48:22 -- host/auth.sh@73 -- # jq -r '.[].name'
00:20:23.329 12:48:22 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:23.329 12:48:22 -- common/autotest_common.sh@10 -- # set +x
00:20:23.329 12:48:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:23.329 12:48:22 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:23.329 12:48:22 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:20:23.329 12:48:22 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:23.329 12:48:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:23.329 12:48:22 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:20:23.329 12:48:22 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 3
00:20:23.329 12:48:22 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:20:23.329 12:48:22 -- host/auth.sh@44 -- # digest=sha512
00:20:23.329 12:48:22 -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:20:23.329 12:48:22 -- host/auth.sh@44 -- # keyid=3
00:20:23.329 12:48:22 -- host/auth.sh@45 -- # key=DHHC-1:02:Y2FiM2ZkZTFiNTA2ZWM2ODhjMGFkNTYwOGNlZTA3MzQzMDI1YmNmMjc5ZTkzYTlji3mYsw==:
00:20:23.329 12:48:22 -- host/auth.sh@47 -- # echo 'hmac(sha512)'
00:20:23.329 12:48:22 -- host/auth.sh@48 -- # echo ffdhe2048
00:20:23.329 12:48:22 -- host/auth.sh@49 -- # echo DHHC-1:02:Y2FiM2ZkZTFiNTA2ZWM2ODhjMGFkNTYwOGNlZTA3MzQzMDI1YmNmMjc5ZTkzYTlji3mYsw==:
00:20:23.329 12:48:22 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 3
00:20:23.329 12:48:22 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:20:23.329 12:48:22 -- host/auth.sh@68 -- # digest=sha512
00:20:23.329 12:48:22 -- host/auth.sh@68 -- # dhgroup=ffdhe2048
00:20:23.329 12:48:22 -- host/auth.sh@68 -- # keyid=3
00:20:23.329 12:48:22 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:20:23.329 12:48:22 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:23.329 12:48:22 -- common/autotest_common.sh@10 -- # set +x
00:20:23.329 12:48:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:23.329 12:48:22 -- host/auth.sh@70 -- # get_main_ns_ip
00:20:23.329 12:48:22 -- nvmf/common.sh@717 -- # local ip
00:20:23.329 12:48:22 -- nvmf/common.sh@718 -- # ip_candidates=()
00:20:23.329 12:48:22 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:20:23.329 12:48:22 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:20:23.329 12:48:22 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:20:23.329 12:48:22 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:20:23.329 12:48:22 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:20:23.329 12:48:22 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:20:23.329 12:48:22 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:20:23.329 12:48:22 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:20:23.329 12:48:22 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3
00:20:23.329 12:48:22 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:23.329 12:48:22 -- common/autotest_common.sh@10 -- # set +x
00:20:23.587 nvme0n1
00:20:23.587 12:48:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:23.587 12:48:22 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:20:23.587 12:48:22 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:23.587 12:48:22 -- common/autotest_common.sh@10 -- # set +x
00:20:23.587 12:48:22 -- host/auth.sh@73 -- # jq -r '.[].name'
00:20:23.587 12:48:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:23.587 12:48:22 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:23.587 12:48:22 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:20:23.587 12:48:22 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:23.587 12:48:22 -- common/autotest_common.sh@10 -- # set +x
00:20:23.587 12:48:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:23.587 12:48:22 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:20:23.587 12:48:22 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 4
00:20:23.587 12:48:22 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:20:23.587 12:48:22 -- host/auth.sh@44 -- # digest=sha512
00:20:23.587 12:48:22 -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:20:23.587 12:48:22 -- host/auth.sh@44 -- # keyid=4
00:20:23.587 12:48:22 -- host/auth.sh@45 -- # key=DHHC-1:03:Y2Y5MjAwNDVjYjhjYzEwMjhjOWVkNDA0MDQwYjJhZDQ2NTMwNDM3NmVlOWE1NDIyMzVjNThjNzIzY2UzNGMwNOLe90I=:
00:20:23.587 12:48:22 -- host/auth.sh@47 -- # echo 'hmac(sha512)'
00:20:23.587 12:48:22 -- host/auth.sh@48 -- # echo ffdhe2048
00:20:23.587 12:48:22 -- host/auth.sh@49 -- # echo DHHC-1:03:Y2Y5MjAwNDVjYjhjYzEwMjhjOWVkNDA0MDQwYjJhZDQ2NTMwNDM3NmVlOWE1NDIyMzVjNThjNzIzY2UzNGMwNOLe90I=:
00:20:23.587 12:48:22 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 4
00:20:23.587 12:48:22 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:20:23.587 12:48:22 -- host/auth.sh@68 -- # digest=sha512
00:20:23.587 12:48:22 -- host/auth.sh@68 -- # dhgroup=ffdhe2048
00:20:23.587 12:48:22 -- host/auth.sh@68 -- # keyid=4
00:20:23.587 12:48:22 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:20:23.587 12:48:22 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:23.587 12:48:22 -- common/autotest_common.sh@10 -- # set +x
00:20:23.587 12:48:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:23.587 12:48:22 -- host/auth.sh@70 -- # get_main_ns_ip
00:20:23.587 12:48:22 -- nvmf/common.sh@717 -- # local ip
00:20:23.587 12:48:22 -- nvmf/common.sh@718 -- # ip_candidates=()
00:20:23.587 12:48:22 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:20:23.587 12:48:22 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:20:23.587 12:48:22 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:20:23.587 12:48:22 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:20:23.587 12:48:22 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:20:23.587 12:48:22 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:20:23.587 12:48:22 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:20:23.587 12:48:22 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:20:23.587 12:48:22 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:20:23.587 12:48:22 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:23.587 12:48:22 -- common/autotest_common.sh@10 -- # set +x
00:20:23.587 nvme0n1
00:20:23.587 12:48:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:23.845 12:48:22 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:20:23.845 12:48:22 -- host/auth.sh@73 -- # jq -r '.[].name'
00:20:23.845 12:48:22 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:23.845 12:48:22 -- common/autotest_common.sh@10 -- # set +x
00:20:23.845 12:48:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:23.845 12:48:22 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:23.845 12:48:22 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:20:23.845 12:48:22 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:23.845 12:48:22 -- common/autotest_common.sh@10 -- # set +x
00:20:23.845 12:48:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:23.845 12:48:22 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}"
00:20:23.845 12:48:22 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:20:23.845 12:48:22 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 0
00:20:23.845 12:48:22 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:20:23.845 12:48:22 -- host/auth.sh@44 -- # digest=sha512
00:20:23.845 12:48:22 -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:20:23.845 12:48:22 -- host/auth.sh@44 -- # keyid=0
00:20:23.845 12:48:22 -- host/auth.sh@45 -- # key=DHHC-1:00:YTI3NGY4ZjQzYWY4ZjA0ZTM2YTgwMTNkYTVhMGY5YjK11WGv:
00:20:23.845 12:48:22 -- host/auth.sh@47 -- # echo 'hmac(sha512)'
00:20:23.845 12:48:22 -- host/auth.sh@48 -- # echo ffdhe3072
00:20:23.845 12:48:22 -- host/auth.sh@49 -- # echo DHHC-1:00:YTI3NGY4ZjQzYWY4ZjA0ZTM2YTgwMTNkYTVhMGY5YjK11WGv:
00:20:23.845 12:48:22 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 0
00:20:23.845 12:48:22 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:20:23.845 12:48:22 -- host/auth.sh@68 -- # digest=sha512
00:20:23.845 12:48:22 -- host/auth.sh@68 -- # dhgroup=ffdhe3072
00:20:23.845 12:48:22 -- host/auth.sh@68 -- # keyid=0
00:20:23.845 12:48:22 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:20:23.845 12:48:22 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:23.845 12:48:22 -- common/autotest_common.sh@10 -- # set +x
00:20:23.845 12:48:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:23.845 12:48:22 -- host/auth.sh@70 -- # get_main_ns_ip
00:20:23.845 12:48:22 -- nvmf/common.sh@717 -- # local ip
00:20:23.845 12:48:22 -- nvmf/common.sh@718 -- # ip_candidates=()
00:20:23.845 12:48:22 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:20:23.845 12:48:22 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:20:23.845 12:48:22 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:20:23.845 12:48:22 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:20:23.845 12:48:22 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:20:23.845 12:48:22 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:20:23.845 12:48:22 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:20:23.845 12:48:22 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:20:23.845 12:48:22 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0
00:20:23.845 12:48:22 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:23.845 12:48:22 -- common/autotest_common.sh@10 -- # set +x
00:20:24.102 nvme0n1
00:20:24.102 12:48:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:24.102 12:48:22 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:20:24.102 12:48:22 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:24.102 12:48:22 -- host/auth.sh@73 -- # jq -r '.[].name'
00:20:24.102 12:48:22 -- common/autotest_common.sh@10 -- # set +x
00:20:24.102 12:48:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:24.102 12:48:22 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:24.102 12:48:22 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:20:24.102 12:48:22 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:24.102 12:48:22 -- common/autotest_common.sh@10 -- # set +x
00:20:24.102 12:48:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:24.102 12:48:22 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:20:24.102 12:48:22 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 1
00:20:24.102 12:48:22 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:20:24.102 12:48:22 -- host/auth.sh@44 -- # digest=sha512
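A note on the five secrets cycling through this section: DHHC-1 keys carry a two-digit transformation tag after the first colon (00 = the secret is used as-is, 01/02/03 = the secret is transformed with SHA-256/384/512 during the handshake, per the NVMe DH-HMAC-CHAP specification), and the base64 payload is the raw secret with a 4-byte CRC-32 appended, as produced by tools like nvme gen-dhchap-key. That structure can be sanity-checked from the shell (size check only; the CRC arithmetic itself is omitted here):

    key='DHHC-1:00:YzA3ZGVkYzljMjBiM2VkMDVkY2UzMTU4MWNhNDE5NWM5NTYzNjg0YWFjZDY3YTI1H8zpJw==:'
    payload=${key#DHHC-1:*:}    # strip the prefix and transformation tag
    payload=${payload%:}        # strip the trailing colon
    echo -n "$payload" | base64 -d | wc -c   # 52 bytes: a 48-byte secret plus CRC-32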
00:20:24.102 12:48:22 -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:20:24.102 12:48:22 -- host/auth.sh@44 -- # keyid=1
00:20:24.102 12:48:22 -- host/auth.sh@45 -- # key=DHHC-1:00:YzA3ZGVkYzljMjBiM2VkMDVkY2UzMTU4MWNhNDE5NWM5NTYzNjg0YWFjZDY3YTI1H8zpJw==:
00:20:24.102 12:48:22 -- host/auth.sh@47 -- # echo 'hmac(sha512)'
00:20:24.102 12:48:22 -- host/auth.sh@48 -- # echo ffdhe3072
00:20:24.102 12:48:22 -- host/auth.sh@49 -- # echo DHHC-1:00:YzA3ZGVkYzljMjBiM2VkMDVkY2UzMTU4MWNhNDE5NWM5NTYzNjg0YWFjZDY3YTI1H8zpJw==:
00:20:24.102 12:48:22 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 1
00:20:24.102 12:48:22 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:20:24.102 12:48:22 -- host/auth.sh@68 -- # digest=sha512
00:20:24.102 12:48:23 -- host/auth.sh@68 -- # dhgroup=ffdhe3072
00:20:24.102 12:48:23 -- host/auth.sh@68 -- # keyid=1
00:20:24.102 12:48:23 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:20:24.102 12:48:23 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:24.102 12:48:23 -- common/autotest_common.sh@10 -- # set +x
00:20:24.103 12:48:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:24.103 12:48:23 -- host/auth.sh@70 -- # get_main_ns_ip
00:20:24.103 12:48:23 -- nvmf/common.sh@717 -- # local ip
00:20:24.103 12:48:23 -- nvmf/common.sh@718 -- # ip_candidates=()
00:20:24.103 12:48:23 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:20:24.103 12:48:23 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:20:24.103 12:48:23 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:20:24.103 12:48:23 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:20:24.103 12:48:23 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:20:24.103 12:48:23 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:20:24.103 12:48:23 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:20:24.103 12:48:23 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:20:24.103 12:48:23 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1
00:20:24.103 12:48:23 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:24.103 12:48:23 -- common/autotest_common.sh@10 -- # set +x
00:20:24.360 nvme0n1
00:20:24.360 12:48:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:24.360 12:48:23 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:20:24.360 12:48:23 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:24.360 12:48:23 -- common/autotest_common.sh@10 -- # set +x
00:20:24.360 12:48:23 -- host/auth.sh@73 -- # jq -r '.[].name'
00:20:24.360 12:48:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:24.361 12:48:23 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:24.361 12:48:23 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:20:24.361 12:48:23 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:24.361 12:48:23 -- common/autotest_common.sh@10 -- # set +x
00:20:24.361 12:48:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:24.361 12:48:23 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:20:24.361 12:48:23 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 2
00:20:24.361 12:48:23 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:20:24.361 12:48:23 -- host/auth.sh@44 -- # digest=sha512
00:20:24.361 12:48:23 -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:20:24.361 12:48:23 -- host/auth.sh@44 -- # keyid=2
00:20:24.361 12:48:23 -- host/auth.sh@45 -- # key=DHHC-1:01:NDQyODc5MDE5MGQwNjMyZjAyNmE1M2FjYWMyODQ1YTVirL6m:
00:20:24.361 12:48:23 -- host/auth.sh@47 -- # echo 'hmac(sha512)'
00:20:24.361 12:48:23 -- host/auth.sh@48 -- # echo ffdhe3072
00:20:24.361 12:48:23 -- host/auth.sh@49 -- # echo DHHC-1:01:NDQyODc5MDE5MGQwNjMyZjAyNmE1M2FjYWMyODQ1YTVirL6m:
00:20:24.361 12:48:23 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 2
00:20:24.361 12:48:23 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:20:24.361 12:48:23 -- host/auth.sh@68 -- # digest=sha512
00:20:24.361 12:48:23 -- host/auth.sh@68 -- # dhgroup=ffdhe3072
00:20:24.361 12:48:23 -- host/auth.sh@68 -- # keyid=2
00:20:24.361 12:48:23 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:20:24.361 12:48:23 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:24.361 12:48:23 -- common/autotest_common.sh@10 -- # set +x
00:20:24.361 12:48:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:24.361 12:48:23 -- host/auth.sh@70 -- # get_main_ns_ip
00:20:24.361 12:48:23 -- nvmf/common.sh@717 -- # local ip
00:20:24.361 12:48:23 -- nvmf/common.sh@718 -- # ip_candidates=()
00:20:24.361 12:48:23 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:20:24.361 12:48:23 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:20:24.361 12:48:23 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:20:24.361 12:48:23 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:20:24.361 12:48:23 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:20:24.361 12:48:23 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:20:24.361 12:48:23 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:20:24.361 12:48:23 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:20:24.361 12:48:23 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:20:24.361 12:48:23 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:24.361 12:48:23 -- common/autotest_common.sh@10 -- # set +x
00:20:24.619 nvme0n1
00:20:24.619 12:48:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:24.619 12:48:23 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:20:24.619 12:48:23 -- host/auth.sh@73 -- # jq -r '.[].name'
00:20:24.619 12:48:23 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:24.619 12:48:23 -- common/autotest_common.sh@10 -- # set +x
00:20:24.619 12:48:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:24.619 12:48:23 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:24.619 12:48:23 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:20:24.619 12:48:23 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:24.619 12:48:23 -- common/autotest_common.sh@10 -- # set +x
00:20:24.619 12:48:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:24.619 12:48:23 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:20:24.619 12:48:23 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 3
00:20:24.619 12:48:23 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:20:24.619 12:48:23 -- host/auth.sh@44 -- # digest=sha512
00:20:24.619 12:48:23 -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:20:24.619 12:48:23 -- host/auth.sh@44 -- # keyid=3
00:20:24.619 12:48:23 -- host/auth.sh@45 -- # key=DHHC-1:02:Y2FiM2ZkZTFiNTA2ZWM2ODhjMGFkNTYwOGNlZTA3MzQzMDI1YmNmMjc5ZTkzYTlji3mYsw==:
00:20:24.619 12:48:23 -- host/auth.sh@47 -- # echo 'hmac(sha512)'
00:20:24.878 12:48:23 -- host/auth.sh@48 -- # echo ffdhe3072
00:20:24.878 12:48:23 -- host/auth.sh@49 -- # echo DHHC-1:02:Y2FiM2ZkZTFiNTA2ZWM2ODhjMGFkNTYwOGNlZTA3MzQzMDI1YmNmMjc5ZTkzYTlji3mYsw==:
00:20:24.878 12:48:23 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 3
00:20:24.878 12:48:23 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:20:24.878 12:48:23 -- host/auth.sh@68 -- # digest=sha512
00:20:24.878 12:48:23 -- host/auth.sh@68 -- # dhgroup=ffdhe3072
00:20:24.878 12:48:23 -- host/auth.sh@68 -- # keyid=3
00:20:24.878 12:48:23 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:20:24.878 12:48:23 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:24.878 12:48:23 -- common/autotest_common.sh@10 -- # set +x
00:20:24.878 12:48:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:24.878 12:48:23 -- host/auth.sh@70 -- # get_main_ns_ip
00:20:24.878 12:48:23 -- nvmf/common.sh@717 -- # local ip
00:20:24.878 12:48:23 -- nvmf/common.sh@718 -- # ip_candidates=()
00:20:24.878 12:48:23 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:20:24.878 12:48:23 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:20:24.878 12:48:23 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:20:24.878 12:48:23 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:20:24.878 12:48:23 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:20:24.878 12:48:23 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:20:24.878 12:48:23 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:20:24.878 12:48:23 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:20:24.878 12:48:23 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3
00:20:24.878 12:48:23 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:24.878 12:48:23 -- common/autotest_common.sh@10 -- # set +x
00:20:24.878 nvme0n1
00:20:24.878 12:48:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:24.878 12:48:23 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:20:24.878 12:48:23 -- host/auth.sh@73 -- # jq -r '.[].name'
00:20:24.878 12:48:23 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:24.878 12:48:23 -- common/autotest_common.sh@10 -- # set +x
00:20:24.878 12:48:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:24.878 12:48:23 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:24.878 12:48:23 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:20:24.878 12:48:23 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:24.878 12:48:23 -- common/autotest_common.sh@10 -- # set +x
00:20:24.878 12:48:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:24.878 12:48:23 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:20:24.878 12:48:23 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 4
00:20:24.878 12:48:23 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:20:24.878 12:48:23 -- host/auth.sh@44 -- # digest=sha512
00:20:24.878 12:48:23 -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:20:24.878 12:48:23 -- host/auth.sh@44 -- # keyid=4
00:20:24.878 12:48:23 -- host/auth.sh@45 -- # key=DHHC-1:03:Y2Y5MjAwNDVjYjhjYzEwMjhjOWVkNDA0MDQwYjJhZDQ2NTMwNDM3NmVlOWE1NDIyMzVjNThjNzIzY2UzNGMwNOLe90I=:
00:20:24.878 12:48:23 -- host/auth.sh@47 -- # echo 'hmac(sha512)'
00:20:24.878 12:48:23 -- host/auth.sh@48 -- # echo ffdhe3072
00:20:24.878 12:48:23 -- host/auth.sh@49 -- # echo DHHC-1:03:Y2Y5MjAwNDVjYjhjYzEwMjhjOWVkNDA0MDQwYjJhZDQ2NTMwNDM3NmVlOWE1NDIyMzVjNThjNzIzY2UzNGMwNOLe90I=:
00:20:24.878 12:48:23 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 4
00:20:24.878 12:48:23 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:20:24.878 12:48:23 -- host/auth.sh@68 -- # digest=sha512
00:20:24.878 12:48:23 -- host/auth.sh@68 -- # dhgroup=ffdhe3072
00:20:24.878 12:48:23 -- host/auth.sh@68 -- # keyid=4
00:20:24.878 12:48:23 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:20:24.878 12:48:23 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:24.878 12:48:23 -- common/autotest_common.sh@10 -- # set +x
00:20:24.878 12:48:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:24.878 12:48:23 -- host/auth.sh@70 -- # get_main_ns_ip
00:20:24.878 12:48:23 -- nvmf/common.sh@717 -- # local ip
00:20:24.878 12:48:23 -- nvmf/common.sh@718 -- # ip_candidates=()
00:20:24.878 12:48:23 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:20:24.878 12:48:23 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:20:24.878 12:48:23 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:20:24.878 12:48:23 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:20:24.878 12:48:23 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:20:24.878 12:48:23 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:20:24.878 12:48:23 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:20:24.878 12:48:23 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:20:24.878 12:48:23 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:20:24.878 12:48:23 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:24.878 12:48:23 -- common/autotest_common.sh@10 -- # set +x
00:20:25.137 nvme0n1
00:20:25.137 12:48:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:25.137 12:48:24 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:20:25.137 12:48:24 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:25.137 12:48:24 -- common/autotest_common.sh@10 -- # set +x
00:20:25.137 12:48:24 -- host/auth.sh@73 -- # jq -r '.[].name'
00:20:25.137 12:48:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:25.137 12:48:24 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:25.137 12:48:24 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:20:25.137 12:48:24 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:25.137 12:48:24 -- common/autotest_common.sh@10 -- # set +x
00:20:25.137 12:48:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:25.137 12:48:24 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}"
00:20:25.137 12:48:24 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:20:25.137 12:48:24 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 0
00:20:25.137 12:48:24 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:20:25.137 12:48:24 -- host/auth.sh@44 -- # digest=sha512
00:20:25.137 12:48:24 -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:20:25.137 12:48:24 -- host/auth.sh@44 -- # keyid=0
00:20:25.137 12:48:24 -- host/auth.sh@45 -- # key=DHHC-1:00:YTI3NGY4ZjQzYWY4ZjA0ZTM2YTgwMTNkYTVhMGY5YjK11WGv:
00:20:25.137 12:48:24 -- host/auth.sh@47 -- # echo 'hmac(sha512)'
00:20:25.137 12:48:24 -- host/auth.sh@48 -- # echo ffdhe4096
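The @107/@108/@109 markers in the trace are the loop nest that generates every iteration in this section; once sha384 finished, the digest loop advanced to sha512 and is now walking the FFDHE groups in ascending size. In outline (the arrays are populated earlier in host/auth.sh; the values noted in the comments are the ones visible in this part of the log):

    for digest in "${digests[@]}"; do              # sha384 and sha512 appear here
        for dhgroup in "${dhgroups[@]}"; do        # ffdhe2048 .. ffdhe8192
            for keyid in "${!keys[@]}"; do         # 0..4, one per DHHC-1 secret
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
                connect_authenticate "$digest" "$dhgroup" "$keyid"
            done
        done
    done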
00:20:25.137 12:48:24 -- host/auth.sh@49 -- # echo DHHC-1:00:YTI3NGY4ZjQzYWY4ZjA0ZTM2YTgwMTNkYTVhMGY5YjK11WGv:
00:20:25.137 12:48:24 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 0
00:20:25.137 12:48:24 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:20:25.137 12:48:24 -- host/auth.sh@68 -- # digest=sha512
00:20:25.137 12:48:24 -- host/auth.sh@68 -- # dhgroup=ffdhe4096
00:20:25.137 12:48:24 -- host/auth.sh@68 -- # keyid=0
00:20:25.137 12:48:24 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:20:25.137 12:48:24 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:25.137 12:48:24 -- common/autotest_common.sh@10 -- # set +x
00:20:25.137 12:48:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:25.137 12:48:24 -- host/auth.sh@70 -- # get_main_ns_ip
00:20:25.137 12:48:24 -- nvmf/common.sh@717 -- # local ip
00:20:25.137 12:48:24 -- nvmf/common.sh@718 -- # ip_candidates=()
00:20:25.137 12:48:24 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:20:25.137 12:48:24 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:20:25.137 12:48:24 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:20:25.137 12:48:24 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:20:25.137 12:48:24 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:20:25.137 12:48:24 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:20:25.137 12:48:24 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:20:25.137 12:48:24 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:20:25.137 12:48:24 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0
00:20:25.137 12:48:24 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:25.137 12:48:24 -- common/autotest_common.sh@10 -- # set +x
00:20:25.395 nvme0n1
00:20:25.395 12:48:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:25.395 12:48:24 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:20:25.395 12:48:24 -- host/auth.sh@73 -- # jq -r '.[].name'
00:20:25.395 12:48:24 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:25.395 12:48:24 -- common/autotest_common.sh@10 -- # set +x
00:20:25.395 12:48:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:25.395 12:48:24 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:25.395 12:48:24 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:20:25.395 12:48:24 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:25.395 12:48:24 -- common/autotest_common.sh@10 -- # set +x
00:20:25.395 12:48:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:25.395 12:48:24 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:20:25.395 12:48:24 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 1
00:20:25.395 12:48:24 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:20:25.395 12:48:24 -- host/auth.sh@44 -- # digest=sha512
00:20:25.395 12:48:24 -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:20:25.395 12:48:24 -- host/auth.sh@44 -- # keyid=1
00:20:25.395 12:48:24 -- host/auth.sh@45 -- # key=DHHC-1:00:YzA3ZGVkYzljMjBiM2VkMDVkY2UzMTU4MWNhNDE5NWM5NTYzNjg0YWFjZDY3YTI1H8zpJw==:
00:20:25.396 12:48:24 -- host/auth.sh@47 -- # echo 'hmac(sha512)'
00:20:25.396 12:48:24 -- host/auth.sh@48 -- # echo ffdhe4096
00:20:25.396 12:48:24 -- host/auth.sh@49 -- # echo DHHC-1:00:YzA3ZGVkYzljMjBiM2VkMDVkY2UzMTU4MWNhNDE5NWM5NTYzNjg0YWFjZDY3YTI1H8zpJw==:
00:20:25.396 12:48:24 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 1
00:20:25.396 12:48:24 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:20:25.396 12:48:24 -- host/auth.sh@68 -- # digest=sha512
00:20:25.396 12:48:24 -- host/auth.sh@68 -- # dhgroup=ffdhe4096
00:20:25.396 12:48:24 -- host/auth.sh@68 -- # keyid=1
00:20:25.396 12:48:24 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:20:25.396 12:48:24 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:25.396 12:48:24 -- common/autotest_common.sh@10 -- # set +x
00:20:25.396 12:48:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:25.396 12:48:24 -- host/auth.sh@70 -- # get_main_ns_ip
00:20:25.396 12:48:24 -- nvmf/common.sh@717 -- # local ip
00:20:25.396 12:48:24 -- nvmf/common.sh@718 -- # ip_candidates=()
00:20:25.396 12:48:24 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:20:25.396 12:48:24 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:20:25.396 12:48:24 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:20:25.396 12:48:24 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:20:25.396 12:48:24 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:20:25.396 12:48:24 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:20:25.396 12:48:24 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:20:25.396 12:48:24 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:20:25.396 12:48:24 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1
00:20:25.396 12:48:24 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:25.396 12:48:24 -- common/autotest_common.sh@10 -- # set +x
00:20:25.961 nvme0n1
00:20:25.961 12:48:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:25.961 12:48:24 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:20:25.961 12:48:24 -- host/auth.sh@73 -- # jq -r '.[].name'
00:20:25.961 12:48:24 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:25.961 12:48:24 -- common/autotest_common.sh@10 -- # set +x
00:20:25.961 12:48:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:25.961 12:48:24 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:25.961 12:48:24 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:20:25.961 12:48:24 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:25.961 12:48:24 -- common/autotest_common.sh@10 -- # set +x
00:20:25.961 12:48:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:25.961 12:48:24 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:20:25.961 12:48:24 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 2
00:20:25.961 12:48:24 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:20:25.961 12:48:24 -- host/auth.sh@44 -- # digest=sha512
00:20:25.961 12:48:24 -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:20:25.961 12:48:24 -- host/auth.sh@44 -- # keyid=2
00:20:25.961 12:48:24 -- host/auth.sh@45 -- # key=DHHC-1:01:NDQyODc5MDE5MGQwNjMyZjAyNmE1M2FjYWMyODQ1YTVirL6m:
00:20:25.961 12:48:24 -- host/auth.sh@47 -- # echo 'hmac(sha512)'
00:20:25.961 12:48:24 -- host/auth.sh@48 -- # echo ffdhe4096
00:20:25.961 12:48:24 -- host/auth.sh@49 -- # echo DHHC-1:01:NDQyODc5MDE5MGQwNjMyZjAyNmE1M2FjYWMyODQ1YTVirL6m:
00:20:25.961 12:48:24 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 2
00:20:25.961 12:48:24 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:20:25.961 12:48:24 -- host/auth.sh@68 -- # digest=sha512
00:20:25.962 12:48:24 -- host/auth.sh@68 -- # dhgroup=ffdhe4096
00:20:25.962 12:48:24 -- host/auth.sh@68 -- # keyid=2
00:20:25.962 12:48:24 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:20:25.962 12:48:24 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:25.962 12:48:24 -- common/autotest_common.sh@10 -- # set +x
00:20:25.962 12:48:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:25.962 12:48:24 -- host/auth.sh@70 -- # get_main_ns_ip
00:20:25.962 12:48:24 -- nvmf/common.sh@717 -- # local ip
00:20:25.962 12:48:24 -- nvmf/common.sh@718 -- # ip_candidates=()
00:20:25.962 12:48:24 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:20:25.962 12:48:24 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:20:25.962 12:48:24 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:20:25.962 12:48:24 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:20:25.962 12:48:24 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:20:25.962 12:48:24 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:20:25.962 12:48:24 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:20:25.962 12:48:24 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:20:25.962 12:48:24 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:20:25.962 12:48:24 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:25.962 12:48:24 -- common/autotest_common.sh@10 -- # set +x
00:20:26.219 nvme0n1
00:20:26.219 12:48:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:26.219 12:48:25 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:20:26.219 12:48:25 -- host/auth.sh@73 -- # jq -r '.[].name'
00:20:26.219 12:48:25 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:26.219 12:48:25 -- common/autotest_common.sh@10 -- # set +x
00:20:26.219 12:48:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:26.219 12:48:25 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:26.219 12:48:25 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:20:26.219 12:48:25 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:26.219 12:48:25 -- common/autotest_common.sh@10 -- # set +x
00:20:26.219 12:48:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:26.219 12:48:25 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:20:26.219 12:48:25 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 3
00:20:26.219 12:48:25 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:20:26.219 12:48:25 -- host/auth.sh@44 -- # digest=sha512
00:20:26.219 12:48:25 -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:20:26.219 12:48:25 -- host/auth.sh@44 -- # keyid=3
00:20:26.219 12:48:25 -- host/auth.sh@45 -- # key=DHHC-1:02:Y2FiM2ZkZTFiNTA2ZWM2ODhjMGFkNTYwOGNlZTA3MzQzMDI1YmNmMjc5ZTkzYTlji3mYsw==:
00:20:26.219 12:48:25 -- host/auth.sh@47 -- # echo 'hmac(sha512)'
00:20:26.219 12:48:25 -- host/auth.sh@48 -- # echo ffdhe4096
00:20:26.219 12:48:25 -- host/auth.sh@49 -- # echo DHHC-1:02:Y2FiM2ZkZTFiNTA2ZWM2ODhjMGFkNTYwOGNlZTA3MzQzMDI1YmNmMjc5ZTkzYTlji3mYsw==:
00:20:26.219 12:48:25 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 3
00:20:26.219 12:48:25 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:20:26.219 12:48:25 -- host/auth.sh@68 -- # digest=sha512
00:20:26.219 12:48:25 -- host/auth.sh@68 -- # dhgroup=ffdhe4096
00:20:26.219 12:48:25 -- host/auth.sh@68 -- # keyid=3
00:20:26.219 12:48:25 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:20:26.219 12:48:25 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:26.219 12:48:25 -- common/autotest_common.sh@10 -- # set +x
00:20:26.477 12:48:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:26.477 12:48:25 -- host/auth.sh@70 -- # get_main_ns_ip
00:20:26.477 12:48:25 -- nvmf/common.sh@717 -- # local ip
00:20:26.477 12:48:25 -- nvmf/common.sh@718 -- # ip_candidates=()
00:20:26.477 12:48:25 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:20:26.477 12:48:25 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:20:26.477 12:48:25 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:20:26.477 12:48:25 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:20:26.477 12:48:25 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:20:26.477 12:48:25 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:20:26.477 12:48:25 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:20:26.477 12:48:25 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:20:26.477 12:48:25 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3
00:20:26.477 12:48:25 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:26.477 12:48:25 -- common/autotest_common.sh@10 -- # set +x
00:20:26.735 nvme0n1
00:20:26.735 12:48:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:26.735 12:48:25 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:20:26.735 12:48:25 -- host/auth.sh@73 -- # jq -r '.[].name'
00:20:26.735 12:48:25 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:26.735 12:48:25 -- common/autotest_common.sh@10 -- # set +x
00:20:26.735 12:48:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:26.735 12:48:25 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:26.735 12:48:25 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:20:26.735 12:48:25 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:26.735 12:48:25 -- common/autotest_common.sh@10 -- # set +x
00:20:26.735 12:48:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:26.735 12:48:25 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:20:26.735 12:48:25 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 4
00:20:26.735 12:48:25 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:20:26.735 12:48:25 -- host/auth.sh@44 -- # digest=sha512
00:20:26.735 12:48:25 -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:20:26.735 12:48:25 -- host/auth.sh@44 -- # keyid=4
00:20:26.735 12:48:25 -- host/auth.sh@45 -- # key=DHHC-1:03:Y2Y5MjAwNDVjYjhjYzEwMjhjOWVkNDA0MDQwYjJhZDQ2NTMwNDM3NmVlOWE1NDIyMzVjNThjNzIzY2UzNGMwNOLe90I=:
00:20:26.735 12:48:25 -- host/auth.sh@47 -- # echo 'hmac(sha512)'
00:20:26.735 12:48:25 -- host/auth.sh@48 -- # echo ffdhe4096
00:20:26.735 12:48:25 -- host/auth.sh@49 -- # echo DHHC-1:03:Y2Y5MjAwNDVjYjhjYzEwMjhjOWVkNDA0MDQwYjJhZDQ2NTMwNDM3NmVlOWE1NDIyMzVjNThjNzIzY2UzNGMwNOLe90I=:
00:20:26.735 12:48:25 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 4
00:20:26.735 12:48:25 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:20:26.735 12:48:25 -- host/auth.sh@68 -- # digest=sha512
00:20:26.735 12:48:25 -- host/auth.sh@68 -- # dhgroup=ffdhe4096
00:20:26.735 12:48:25 -- host/auth.sh@68 -- # keyid=4
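Every RPC above is bracketed by the same autotest_common.sh lines: @549 (xtrace_disable) and @10 (set +x) silence tracing while SPDK's JSON-RPC client runs, and the @577 check compares the helper's exit status against zero, which is why "[[ 0 == 0 ]]" repeats after every successful call. A plausible reduction of that wrapper is below; the real autotest_common.sh also saves and restores the xtrace flag, and the $rootdir wiring here is an assumption, so treat this as a sketch rather than the literal source:

    rpc_cmd() {
        xtrace_disable                            # trace line @549
        local rc=0
        "$rootdir/scripts/rpc.py" "$@" || rc=$?   # forward to SPDK's JSON-RPC client
        xtrace_restore
        [[ $rc == 0 ]]                            # trace line @577 on success
    }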
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:26.735 12:48:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:26.735 12:48:25 -- common/autotest_common.sh@10 -- # set +x 00:20:26.735 12:48:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:26.735 12:48:25 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:26.735 12:48:25 -- nvmf/common.sh@717 -- # local ip 00:20:26.735 12:48:25 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:26.735 12:48:25 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:26.735 12:48:25 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:26.735 12:48:25 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:26.735 12:48:25 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:26.735 12:48:25 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:26.735 12:48:25 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:26.735 12:48:25 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:26.735 12:48:25 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:26.735 12:48:25 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:26.735 12:48:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:26.735 12:48:25 -- common/autotest_common.sh@10 -- # set +x 00:20:26.993 nvme0n1 00:20:26.993 12:48:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:26.993 12:48:26 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:26.993 12:48:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:26.993 12:48:26 -- common/autotest_common.sh@10 -- # set +x 00:20:26.993 12:48:26 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:26.993 12:48:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:27.251 12:48:26 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:27.251 12:48:26 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:27.251 12:48:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:27.251 12:48:26 -- common/autotest_common.sh@10 -- # set +x 00:20:27.251 12:48:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:27.251 12:48:26 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:20:27.251 12:48:26 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:27.251 12:48:26 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:20:27.251 12:48:26 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:27.251 12:48:26 -- host/auth.sh@44 -- # digest=sha512 00:20:27.251 12:48:26 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:27.251 12:48:26 -- host/auth.sh@44 -- # keyid=0 00:20:27.251 12:48:26 -- host/auth.sh@45 -- # key=DHHC-1:00:YTI3NGY4ZjQzYWY4ZjA0ZTM2YTgwMTNkYTVhMGY5YjK11WGv: 00:20:27.251 12:48:26 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:27.251 12:48:26 -- host/auth.sh@48 -- # echo ffdhe6144 00:20:27.251 12:48:26 -- host/auth.sh@49 -- # echo DHHC-1:00:YTI3NGY4ZjQzYWY4ZjA0ZTM2YTgwMTNkYTVhMGY5YjK11WGv: 00:20:27.251 12:48:26 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 0 00:20:27.251 12:48:26 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:27.251 12:48:26 -- host/auth.sh@68 -- # digest=sha512 00:20:27.251 12:48:26 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:20:27.252 12:48:26 -- host/auth.sh@68 -- # keyid=0 00:20:27.252 12:48:26 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:27.252 
12:48:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:27.252 12:48:26 -- common/autotest_common.sh@10 -- # set +x 00:20:27.252 12:48:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:27.252 12:48:26 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:27.252 12:48:26 -- nvmf/common.sh@717 -- # local ip 00:20:27.252 12:48:26 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:27.252 12:48:26 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:27.252 12:48:26 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:27.252 12:48:26 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:27.252 12:48:26 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:27.252 12:48:26 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:27.252 12:48:26 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:27.252 12:48:26 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:27.252 12:48:26 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:27.252 12:48:26 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:20:27.252 12:48:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:27.252 12:48:26 -- common/autotest_common.sh@10 -- # set +x 00:20:27.819 nvme0n1 00:20:27.819 12:48:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:27.819 12:48:26 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:27.819 12:48:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:27.819 12:48:26 -- common/autotest_common.sh@10 -- # set +x 00:20:27.819 12:48:26 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:27.819 12:48:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:27.819 12:48:26 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:27.819 12:48:26 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:27.819 12:48:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:27.819 12:48:26 -- common/autotest_common.sh@10 -- # set +x 00:20:27.819 12:48:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:27.819 12:48:26 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:27.819 12:48:26 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:20:27.819 12:48:26 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:27.819 12:48:26 -- host/auth.sh@44 -- # digest=sha512 00:20:27.819 12:48:26 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:27.819 12:48:26 -- host/auth.sh@44 -- # keyid=1 00:20:27.819 12:48:26 -- host/auth.sh@45 -- # key=DHHC-1:00:YzA3ZGVkYzljMjBiM2VkMDVkY2UzMTU4MWNhNDE5NWM5NTYzNjg0YWFjZDY3YTI1H8zpJw==: 00:20:27.819 12:48:26 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:27.819 12:48:26 -- host/auth.sh@48 -- # echo ffdhe6144 00:20:27.819 12:48:26 -- host/auth.sh@49 -- # echo DHHC-1:00:YzA3ZGVkYzljMjBiM2VkMDVkY2UzMTU4MWNhNDE5NWM5NTYzNjg0YWFjZDY3YTI1H8zpJw==: 00:20:27.819 12:48:26 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 1 00:20:27.819 12:48:26 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:27.819 12:48:26 -- host/auth.sh@68 -- # digest=sha512 00:20:27.819 12:48:26 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:20:27.819 12:48:26 -- host/auth.sh@68 -- # keyid=1 00:20:27.819 12:48:26 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:27.819 12:48:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:27.820 12:48:26 -- common/autotest_common.sh@10 -- # 
set +x 00:20:27.820 12:48:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:27.820 12:48:26 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:27.820 12:48:26 -- nvmf/common.sh@717 -- # local ip 00:20:27.820 12:48:26 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:27.820 12:48:26 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:27.820 12:48:26 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:27.820 12:48:26 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:27.820 12:48:26 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:27.820 12:48:26 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:27.820 12:48:26 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:27.820 12:48:26 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:27.820 12:48:26 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:27.820 12:48:26 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:20:27.820 12:48:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:27.820 12:48:26 -- common/autotest_common.sh@10 -- # set +x 00:20:28.385 nvme0n1 00:20:28.385 12:48:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:28.385 12:48:27 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:28.385 12:48:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:28.385 12:48:27 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:28.385 12:48:27 -- common/autotest_common.sh@10 -- # set +x 00:20:28.385 12:48:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:28.385 12:48:27 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:28.385 12:48:27 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:28.385 12:48:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:28.385 12:48:27 -- common/autotest_common.sh@10 -- # set +x 00:20:28.385 12:48:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:28.385 12:48:27 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:28.385 12:48:27 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:20:28.385 12:48:27 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:28.385 12:48:27 -- host/auth.sh@44 -- # digest=sha512 00:20:28.385 12:48:27 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:28.385 12:48:27 -- host/auth.sh@44 -- # keyid=2 00:20:28.385 12:48:27 -- host/auth.sh@45 -- # key=DHHC-1:01:NDQyODc5MDE5MGQwNjMyZjAyNmE1M2FjYWMyODQ1YTVirL6m: 00:20:28.385 12:48:27 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:28.385 12:48:27 -- host/auth.sh@48 -- # echo ffdhe6144 00:20:28.385 12:48:27 -- host/auth.sh@49 -- # echo DHHC-1:01:NDQyODc5MDE5MGQwNjMyZjAyNmE1M2FjYWMyODQ1YTVirL6m: 00:20:28.385 12:48:27 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 2 00:20:28.385 12:48:27 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:28.385 12:48:27 -- host/auth.sh@68 -- # digest=sha512 00:20:28.385 12:48:27 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:20:28.385 12:48:27 -- host/auth.sh@68 -- # keyid=2 00:20:28.385 12:48:27 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:28.385 12:48:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:28.385 12:48:27 -- common/autotest_common.sh@10 -- # set +x 00:20:28.385 12:48:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:28.385 12:48:27 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:28.385 12:48:27 -- 
nvmf/common.sh@717 -- # local ip 00:20:28.385 12:48:27 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:28.385 12:48:27 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:28.385 12:48:27 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:28.385 12:48:27 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:28.385 12:48:27 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:28.385 12:48:27 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:28.385 12:48:27 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:28.385 12:48:27 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:28.385 12:48:27 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:28.385 12:48:27 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:28.385 12:48:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:28.385 12:48:27 -- common/autotest_common.sh@10 -- # set +x 00:20:28.951 nvme0n1 00:20:28.951 12:48:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:28.951 12:48:27 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:28.951 12:48:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:28.951 12:48:27 -- common/autotest_common.sh@10 -- # set +x 00:20:28.951 12:48:27 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:28.951 12:48:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:28.951 12:48:27 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:28.951 12:48:27 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:28.951 12:48:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:28.951 12:48:27 -- common/autotest_common.sh@10 -- # set +x 00:20:28.951 12:48:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:28.951 12:48:28 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:28.951 12:48:28 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:20:28.951 12:48:28 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:28.951 12:48:28 -- host/auth.sh@44 -- # digest=sha512 00:20:28.951 12:48:28 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:28.951 12:48:28 -- host/auth.sh@44 -- # keyid=3 00:20:28.951 12:48:28 -- host/auth.sh@45 -- # key=DHHC-1:02:Y2FiM2ZkZTFiNTA2ZWM2ODhjMGFkNTYwOGNlZTA3MzQzMDI1YmNmMjc5ZTkzYTlji3mYsw==: 00:20:28.951 12:48:28 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:28.951 12:48:28 -- host/auth.sh@48 -- # echo ffdhe6144 00:20:28.951 12:48:28 -- host/auth.sh@49 -- # echo DHHC-1:02:Y2FiM2ZkZTFiNTA2ZWM2ODhjMGFkNTYwOGNlZTA3MzQzMDI1YmNmMjc5ZTkzYTlji3mYsw==: 00:20:28.951 12:48:28 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 3 00:20:28.951 12:48:28 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:28.951 12:48:28 -- host/auth.sh@68 -- # digest=sha512 00:20:28.951 12:48:28 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:20:28.951 12:48:28 -- host/auth.sh@68 -- # keyid=3 00:20:28.951 12:48:28 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:28.951 12:48:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:28.951 12:48:28 -- common/autotest_common.sh@10 -- # set +x 00:20:29.209 12:48:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:29.209 12:48:28 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:29.209 12:48:28 -- nvmf/common.sh@717 -- # local ip 00:20:29.209 12:48:28 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:29.209 12:48:28 
-- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:29.209 12:48:28 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:29.209 12:48:28 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:29.209 12:48:28 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:29.209 12:48:28 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:29.209 12:48:28 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:29.209 12:48:28 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:29.209 12:48:28 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:29.209 12:48:28 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:20:29.209 12:48:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:29.209 12:48:28 -- common/autotest_common.sh@10 -- # set +x 00:20:29.776 nvme0n1 00:20:29.776 12:48:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:29.776 12:48:28 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:29.776 12:48:28 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:29.776 12:48:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:29.776 12:48:28 -- common/autotest_common.sh@10 -- # set +x 00:20:29.776 12:48:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:29.776 12:48:28 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:29.776 12:48:28 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:29.776 12:48:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:29.776 12:48:28 -- common/autotest_common.sh@10 -- # set +x 00:20:29.776 12:48:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:29.776 12:48:28 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:29.776 12:48:28 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:20:29.776 12:48:28 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:29.776 12:48:28 -- host/auth.sh@44 -- # digest=sha512 00:20:29.776 12:48:28 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:29.776 12:48:28 -- host/auth.sh@44 -- # keyid=4 00:20:29.776 12:48:28 -- host/auth.sh@45 -- # key=DHHC-1:03:Y2Y5MjAwNDVjYjhjYzEwMjhjOWVkNDA0MDQwYjJhZDQ2NTMwNDM3NmVlOWE1NDIyMzVjNThjNzIzY2UzNGMwNOLe90I=: 00:20:29.776 12:48:28 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:29.776 12:48:28 -- host/auth.sh@48 -- # echo ffdhe6144 00:20:29.776 12:48:28 -- host/auth.sh@49 -- # echo DHHC-1:03:Y2Y5MjAwNDVjYjhjYzEwMjhjOWVkNDA0MDQwYjJhZDQ2NTMwNDM3NmVlOWE1NDIyMzVjNThjNzIzY2UzNGMwNOLe90I=: 00:20:29.776 12:48:28 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 4 00:20:29.776 12:48:28 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:29.776 12:48:28 -- host/auth.sh@68 -- # digest=sha512 00:20:29.776 12:48:28 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:20:29.776 12:48:28 -- host/auth.sh@68 -- # keyid=4 00:20:29.776 12:48:28 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:29.776 12:48:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:29.776 12:48:28 -- common/autotest_common.sh@10 -- # set +x 00:20:29.776 12:48:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:29.776 12:48:28 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:29.776 12:48:28 -- nvmf/common.sh@717 -- # local ip 00:20:29.776 12:48:28 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:29.776 12:48:28 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:29.776 12:48:28 -- 
nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:29.776 12:48:28 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:29.776 12:48:28 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:29.776 12:48:28 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:29.776 12:48:28 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:29.776 12:48:28 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:29.776 12:48:28 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:29.776 12:48:28 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:29.776 12:48:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:29.776 12:48:28 -- common/autotest_common.sh@10 -- # set +x 00:20:30.342 nvme0n1 00:20:30.342 12:48:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:30.342 12:48:29 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:30.342 12:48:29 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:30.342 12:48:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:30.342 12:48:29 -- common/autotest_common.sh@10 -- # set +x 00:20:30.342 12:48:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:30.342 12:48:29 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:30.342 12:48:29 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:30.342 12:48:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:30.342 12:48:29 -- common/autotest_common.sh@10 -- # set +x 00:20:30.342 12:48:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:30.342 12:48:29 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:20:30.342 12:48:29 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:30.342 12:48:29 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:20:30.342 12:48:29 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:30.342 12:48:29 -- host/auth.sh@44 -- # digest=sha512 00:20:30.342 12:48:29 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:30.342 12:48:29 -- host/auth.sh@44 -- # keyid=0 00:20:30.342 12:48:29 -- host/auth.sh@45 -- # key=DHHC-1:00:YTI3NGY4ZjQzYWY4ZjA0ZTM2YTgwMTNkYTVhMGY5YjK11WGv: 00:20:30.342 12:48:29 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:30.342 12:48:29 -- host/auth.sh@48 -- # echo ffdhe8192 00:20:30.342 12:48:29 -- host/auth.sh@49 -- # echo DHHC-1:00:YTI3NGY4ZjQzYWY4ZjA0ZTM2YTgwMTNkYTVhMGY5YjK11WGv: 00:20:30.342 12:48:29 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 0 00:20:30.342 12:48:29 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:30.342 12:48:29 -- host/auth.sh@68 -- # digest=sha512 00:20:30.342 12:48:29 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:20:30.342 12:48:29 -- host/auth.sh@68 -- # keyid=0 00:20:30.342 12:48:29 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:30.342 12:48:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:30.342 12:48:29 -- common/autotest_common.sh@10 -- # set +x 00:20:30.342 12:48:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:30.342 12:48:29 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:30.342 12:48:29 -- nvmf/common.sh@717 -- # local ip 00:20:30.342 12:48:29 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:30.342 12:48:29 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:30.342 12:48:29 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:30.342 12:48:29 
-- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:30.342 12:48:29 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:30.342 12:48:29 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:30.342 12:48:29 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:30.342 12:48:29 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:30.342 12:48:29 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:30.342 12:48:29 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:20:30.342 12:48:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:30.342 12:48:29 -- common/autotest_common.sh@10 -- # set +x 00:20:31.715 nvme0n1 00:20:31.715 12:48:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:31.715 12:48:30 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:31.715 12:48:30 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:31.715 12:48:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:31.715 12:48:30 -- common/autotest_common.sh@10 -- # set +x 00:20:31.715 12:48:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:31.715 12:48:30 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:31.715 12:48:30 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:31.715 12:48:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:31.715 12:48:30 -- common/autotest_common.sh@10 -- # set +x 00:20:31.716 12:48:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:31.716 12:48:30 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:31.716 12:48:30 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:20:31.716 12:48:30 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:31.716 12:48:30 -- host/auth.sh@44 -- # digest=sha512 00:20:31.716 12:48:30 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:31.716 12:48:30 -- host/auth.sh@44 -- # keyid=1 00:20:31.716 12:48:30 -- host/auth.sh@45 -- # key=DHHC-1:00:YzA3ZGVkYzljMjBiM2VkMDVkY2UzMTU4MWNhNDE5NWM5NTYzNjg0YWFjZDY3YTI1H8zpJw==: 00:20:31.716 12:48:30 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:31.716 12:48:30 -- host/auth.sh@48 -- # echo ffdhe8192 00:20:31.716 12:48:30 -- host/auth.sh@49 -- # echo DHHC-1:00:YzA3ZGVkYzljMjBiM2VkMDVkY2UzMTU4MWNhNDE5NWM5NTYzNjg0YWFjZDY3YTI1H8zpJw==: 00:20:31.716 12:48:30 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 1 00:20:31.716 12:48:30 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:31.716 12:48:30 -- host/auth.sh@68 -- # digest=sha512 00:20:31.716 12:48:30 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:20:31.716 12:48:30 -- host/auth.sh@68 -- # keyid=1 00:20:31.716 12:48:30 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:31.716 12:48:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:31.716 12:48:30 -- common/autotest_common.sh@10 -- # set +x 00:20:31.716 12:48:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:31.716 12:48:30 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:31.716 12:48:30 -- nvmf/common.sh@717 -- # local ip 00:20:31.716 12:48:30 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:31.716 12:48:30 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:31.716 12:48:30 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:31.716 12:48:30 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:31.716 12:48:30 -- nvmf/common.sh@723 -- # [[ -z 
tcp ]] 00:20:31.716 12:48:30 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:31.716 12:48:30 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:31.716 12:48:30 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:31.716 12:48:30 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:31.716 12:48:30 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:20:31.716 12:48:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:31.716 12:48:30 -- common/autotest_common.sh@10 -- # set +x 00:20:32.650 nvme0n1 00:20:32.650 12:48:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:32.650 12:48:31 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:32.650 12:48:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:32.650 12:48:31 -- common/autotest_common.sh@10 -- # set +x 00:20:32.650 12:48:31 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:32.650 12:48:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:32.650 12:48:31 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:32.650 12:48:31 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:32.650 12:48:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:32.650 12:48:31 -- common/autotest_common.sh@10 -- # set +x 00:20:32.650 12:48:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:32.650 12:48:31 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:32.650 12:48:31 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:20:32.650 12:48:31 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:32.650 12:48:31 -- host/auth.sh@44 -- # digest=sha512 00:20:32.650 12:48:31 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:32.650 12:48:31 -- host/auth.sh@44 -- # keyid=2 00:20:32.650 12:48:31 -- host/auth.sh@45 -- # key=DHHC-1:01:NDQyODc5MDE5MGQwNjMyZjAyNmE1M2FjYWMyODQ1YTVirL6m: 00:20:32.650 12:48:31 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:32.650 12:48:31 -- host/auth.sh@48 -- # echo ffdhe8192 00:20:32.650 12:48:31 -- host/auth.sh@49 -- # echo DHHC-1:01:NDQyODc5MDE5MGQwNjMyZjAyNmE1M2FjYWMyODQ1YTVirL6m: 00:20:32.650 12:48:31 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 2 00:20:32.650 12:48:31 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:32.650 12:48:31 -- host/auth.sh@68 -- # digest=sha512 00:20:32.650 12:48:31 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:20:32.650 12:48:31 -- host/auth.sh@68 -- # keyid=2 00:20:32.650 12:48:31 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:32.650 12:48:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:32.650 12:48:31 -- common/autotest_common.sh@10 -- # set +x 00:20:32.650 12:48:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:32.650 12:48:31 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:32.650 12:48:31 -- nvmf/common.sh@717 -- # local ip 00:20:32.650 12:48:31 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:32.650 12:48:31 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:32.650 12:48:31 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:32.650 12:48:31 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:32.650 12:48:31 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:32.650 12:48:31 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:32.650 12:48:31 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:32.650 
12:48:31 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:32.650 12:48:31 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:32.650 12:48:31 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:32.650 12:48:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:32.650 12:48:31 -- common/autotest_common.sh@10 -- # set +x 00:20:33.584 nvme0n1 00:20:33.584 12:48:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:33.584 12:48:32 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:33.584 12:48:32 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:33.584 12:48:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:33.584 12:48:32 -- common/autotest_common.sh@10 -- # set +x 00:20:33.584 12:48:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:33.584 12:48:32 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:33.584 12:48:32 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:33.584 12:48:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:33.584 12:48:32 -- common/autotest_common.sh@10 -- # set +x 00:20:33.584 12:48:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:33.584 12:48:32 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:33.584 12:48:32 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:20:33.584 12:48:32 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:33.584 12:48:32 -- host/auth.sh@44 -- # digest=sha512 00:20:33.584 12:48:32 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:33.584 12:48:32 -- host/auth.sh@44 -- # keyid=3 00:20:33.584 12:48:32 -- host/auth.sh@45 -- # key=DHHC-1:02:Y2FiM2ZkZTFiNTA2ZWM2ODhjMGFkNTYwOGNlZTA3MzQzMDI1YmNmMjc5ZTkzYTlji3mYsw==: 00:20:33.584 12:48:32 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:33.584 12:48:32 -- host/auth.sh@48 -- # echo ffdhe8192 00:20:33.584 12:48:32 -- host/auth.sh@49 -- # echo DHHC-1:02:Y2FiM2ZkZTFiNTA2ZWM2ODhjMGFkNTYwOGNlZTA3MzQzMDI1YmNmMjc5ZTkzYTlji3mYsw==: 00:20:33.584 12:48:32 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 3 00:20:33.584 12:48:32 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:33.584 12:48:32 -- host/auth.sh@68 -- # digest=sha512 00:20:33.584 12:48:32 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:20:33.584 12:48:32 -- host/auth.sh@68 -- # keyid=3 00:20:33.584 12:48:32 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:33.584 12:48:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:33.584 12:48:32 -- common/autotest_common.sh@10 -- # set +x 00:20:33.584 12:48:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:33.584 12:48:32 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:33.584 12:48:32 -- nvmf/common.sh@717 -- # local ip 00:20:33.584 12:48:32 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:33.584 12:48:32 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:33.584 12:48:32 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:33.585 12:48:32 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:33.585 12:48:32 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:33.585 12:48:32 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:33.585 12:48:32 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:33.585 12:48:32 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:33.585 12:48:32 -- nvmf/common.sh@731 -- # echo 10.0.0.1 
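Each iteration above follows the same shape: write the expected secret into the kernel target's DH-HMAC-CHAP attributes (the echoed 'hmac(sha512)', dhgroup name, and DHHC-1 secret), constrain the SPDK initiator to one digest/dhgroup pair, attach, confirm the controller shows up as nvme0, and detach. Pulled out of the harness, one round trip looks roughly like the sketch below (here sha512 + ffdhe8192 with key3, matching the iteration in progress). The rpc.py invocations mirror the rpc_cmd lines in this log, assuming rpc_cmd wraps scripts/rpc.py; the configfs attribute names (dhchap_hash, dhchap_dhgroup, dhchap_key) are the standard Linux nvmet layout and are an assumption here, since the log only shows the echoed values.

# one connect_authenticate round trip (sketch, not verbatim from auth.sh)
HOSTNQN=nqn.2024-02.io.spdk:host0
SUBNQN=nqn.2024-02.io.spdk:cnode0
HOSTCFG=/sys/kernel/config/nvmet/hosts/$HOSTNQN      # assumed attribute paths

echo 'hmac(sha512)' > "$HOSTCFG/dhchap_hash"         # digest the target will accept
echo 'ffdhe8192' > "$HOSTCFG/dhchap_dhgroup"         # FFDHE group under test
echo 'DHHC-1:02:...' > "$HOSTCFG/dhchap_key"         # secret in DHHC-1 transport format (elided)

# host side: restrict the initiator to the same digest/dhgroup, then attach with the key
scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
  -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key3       # key3 = key loaded earlier by the harness
scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
scripts/rpc.py bdev_nvme_detach_controller nvme0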
00:20:33.585 12:48:32 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:20:33.585 12:48:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:33.585 12:48:32 -- common/autotest_common.sh@10 -- # set +x 00:20:34.519 nvme0n1 00:20:34.520 12:48:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:34.783 12:48:33 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:34.783 12:48:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:34.783 12:48:33 -- common/autotest_common.sh@10 -- # set +x 00:20:34.783 12:48:33 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:34.783 12:48:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:34.783 12:48:33 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:34.783 12:48:33 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:34.783 12:48:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:34.783 12:48:33 -- common/autotest_common.sh@10 -- # set +x 00:20:34.783 12:48:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:34.783 12:48:33 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:34.783 12:48:33 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:20:34.783 12:48:33 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:34.783 12:48:33 -- host/auth.sh@44 -- # digest=sha512 00:20:34.783 12:48:33 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:34.783 12:48:33 -- host/auth.sh@44 -- # keyid=4 00:20:34.783 12:48:33 -- host/auth.sh@45 -- # key=DHHC-1:03:Y2Y5MjAwNDVjYjhjYzEwMjhjOWVkNDA0MDQwYjJhZDQ2NTMwNDM3NmVlOWE1NDIyMzVjNThjNzIzY2UzNGMwNOLe90I=: 00:20:34.783 12:48:33 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:34.783 12:48:33 -- host/auth.sh@48 -- # echo ffdhe8192 00:20:34.783 12:48:33 -- host/auth.sh@49 -- # echo DHHC-1:03:Y2Y5MjAwNDVjYjhjYzEwMjhjOWVkNDA0MDQwYjJhZDQ2NTMwNDM3NmVlOWE1NDIyMzVjNThjNzIzY2UzNGMwNOLe90I=: 00:20:34.783 12:48:33 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 4 00:20:34.783 12:48:33 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:34.783 12:48:33 -- host/auth.sh@68 -- # digest=sha512 00:20:34.783 12:48:33 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:20:34.783 12:48:33 -- host/auth.sh@68 -- # keyid=4 00:20:34.783 12:48:33 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:34.783 12:48:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:34.783 12:48:33 -- common/autotest_common.sh@10 -- # set +x 00:20:34.783 12:48:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:34.783 12:48:33 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:34.783 12:48:33 -- nvmf/common.sh@717 -- # local ip 00:20:34.783 12:48:33 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:34.783 12:48:33 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:34.783 12:48:33 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:34.783 12:48:33 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:34.783 12:48:33 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:34.783 12:48:33 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:34.783 12:48:33 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:34.783 12:48:33 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:34.783 12:48:33 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:34.783 12:48:33 -- host/auth.sh@70 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:34.783 12:48:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:34.783 12:48:33 -- common/autotest_common.sh@10 -- # set +x 00:20:35.716 nvme0n1 00:20:35.716 12:48:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:35.716 12:48:34 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:35.716 12:48:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:35.716 12:48:34 -- common/autotest_common.sh@10 -- # set +x 00:20:35.716 12:48:34 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:35.716 12:48:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:35.716 12:48:34 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:35.716 12:48:34 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:35.716 12:48:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:35.716 12:48:34 -- common/autotest_common.sh@10 -- # set +x 00:20:35.716 12:48:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:35.716 12:48:34 -- host/auth.sh@117 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:20:35.716 12:48:34 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:35.716 12:48:34 -- host/auth.sh@44 -- # digest=sha256 00:20:35.716 12:48:34 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:35.716 12:48:34 -- host/auth.sh@44 -- # keyid=1 00:20:35.716 12:48:34 -- host/auth.sh@45 -- # key=DHHC-1:00:YzA3ZGVkYzljMjBiM2VkMDVkY2UzMTU4MWNhNDE5NWM5NTYzNjg0YWFjZDY3YTI1H8zpJw==: 00:20:35.716 12:48:34 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:20:35.716 12:48:34 -- host/auth.sh@48 -- # echo ffdhe2048 00:20:35.716 12:48:34 -- host/auth.sh@49 -- # echo DHHC-1:00:YzA3ZGVkYzljMjBiM2VkMDVkY2UzMTU4MWNhNDE5NWM5NTYzNjg0YWFjZDY3YTI1H8zpJw==: 00:20:35.716 12:48:34 -- host/auth.sh@118 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:35.716 12:48:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:35.716 12:48:34 -- common/autotest_common.sh@10 -- # set +x 00:20:35.716 12:48:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:35.716 12:48:34 -- host/auth.sh@119 -- # get_main_ns_ip 00:20:35.716 12:48:34 -- nvmf/common.sh@717 -- # local ip 00:20:35.716 12:48:34 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:35.716 12:48:34 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:35.716 12:48:34 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:35.716 12:48:34 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:35.716 12:48:34 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:35.716 12:48:34 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:35.716 12:48:34 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:35.716 12:48:34 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:35.716 12:48:34 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:35.716 12:48:34 -- host/auth.sh@119 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:20:35.716 12:48:34 -- common/autotest_common.sh@638 -- # local es=0 00:20:35.716 12:48:34 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:20:35.716 12:48:34 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:20:35.716 12:48:34 -- 
common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in
00:20:35.716 12:48:34 -- common/autotest_common.sh@630 -- # type -t rpc_cmd
00:20:35.716 12:48:34 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in
00:20:35.716 12:48:34 -- common/autotest_common.sh@641 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
00:20:35.716 12:48:34 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:35.716 12:48:34 -- common/autotest_common.sh@10 -- # set +x
00:20:35.974 request:
00:20:35.974 {
00:20:35.974 "name": "nvme0",
00:20:35.974 "trtype": "tcp",
00:20:35.974 "traddr": "10.0.0.1",
00:20:35.974 "hostnqn": "nqn.2024-02.io.spdk:host0",
00:20:35.974 "adrfam": "ipv4",
00:20:35.974 "trsvcid": "4420",
00:20:35.974 "subnqn": "nqn.2024-02.io.spdk:cnode0",
00:20:35.974 "method": "bdev_nvme_attach_controller",
00:20:35.974 "req_id": 1
00:20:35.974 }
00:20:35.974 Got JSON-RPC error response
00:20:35.974 response:
00:20:35.974 {
00:20:35.974 "code": -32602,
00:20:35.974 "message": "Invalid parameters"
00:20:35.974 }
00:20:35.974 12:48:34 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]]
00:20:35.974 12:48:34 -- common/autotest_common.sh@641 -- # es=1
00:20:35.974 12:48:34 -- common/autotest_common.sh@649 -- # (( es > 128 ))
00:20:35.974 12:48:34 -- common/autotest_common.sh@660 -- # [[ -n '' ]]
00:20:35.974 12:48:34 -- common/autotest_common.sh@665 -- # (( !es == 0 ))
00:20:35.974 12:48:34 -- host/auth.sh@121 -- # rpc_cmd bdev_nvme_get_controllers
00:20:35.974 12:48:34 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:35.974 12:48:34 -- common/autotest_common.sh@10 -- # set +x
00:20:35.974 12:48:34 -- host/auth.sh@121 -- # jq length
00:20:35.974 12:48:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:35.974 12:48:34 -- host/auth.sh@121 -- # (( 0 == 0 ))
00:20:35.974 12:48:34 -- host/auth.sh@124 -- # get_main_ns_ip
00:20:35.974 12:48:34 -- nvmf/common.sh@717 -- # local ip
00:20:35.974 12:48:34 -- nvmf/common.sh@718 -- # ip_candidates=()
00:20:35.974 12:48:34 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:20:35.974 12:48:34 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:20:35.974 12:48:34 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:20:35.974 12:48:34 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:20:35.974 12:48:34 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:20:35.974 12:48:34 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:20:35.974 12:48:34 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:20:35.974 12:48:34 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:20:35.974 12:48:34 -- host/auth.sh@124 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:20:35.974 12:48:34 -- common/autotest_common.sh@638 -- # local es=0
00:20:35.975 12:48:34 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:20:35.975 12:48:34 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd
00:20:35.975 12:48:34 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in
00:20:35.975 12:48:34 -- common/autotest_common.sh@630 -- # type -t rpc_cmd
00:20:35.975 12:48:34 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in
00:20:35.975 12:48:34 -- common/autotest_common.sh@641 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:20:35.975 12:48:34 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:35.975 12:48:34 -- common/autotest_common.sh@10 -- # set +x
00:20:35.975 request:
00:20:35.975 {
00:20:35.975 "name": "nvme0",
00:20:35.975 "trtype": "tcp",
00:20:35.975 "traddr": "10.0.0.1",
00:20:35.975 "hostnqn": "nqn.2024-02.io.spdk:host0",
00:20:35.975 "adrfam": "ipv4",
00:20:35.975 "trsvcid": "4420",
00:20:35.975 "subnqn": "nqn.2024-02.io.spdk:cnode0",
00:20:35.975 "dhchap_key": "key2",
00:20:35.975 "method": "bdev_nvme_attach_controller",
00:20:35.975 "req_id": 1
00:20:35.975 }
00:20:35.975 Got JSON-RPC error response
00:20:35.975 response:
00:20:35.975 {
00:20:35.975 "code": -32602,
00:20:35.975 "message": "Invalid parameters"
00:20:35.975 }
00:20:35.975 12:48:34 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]]
00:20:35.975 12:48:34 -- common/autotest_common.sh@641 -- # es=1
00:20:35.975 12:48:34 -- common/autotest_common.sh@649 -- # (( es > 128 ))
00:20:35.975 12:48:34 -- common/autotest_common.sh@660 -- # [[ -n '' ]]
00:20:35.975 12:48:34 -- common/autotest_common.sh@665 -- # (( !es == 0 ))
00:20:35.975 12:48:34 -- host/auth.sh@127 -- # rpc_cmd bdev_nvme_get_controllers
00:20:35.975 12:48:34 -- host/auth.sh@127 -- # jq length
00:20:35.975 12:48:34 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:35.975 12:48:34 -- common/autotest_common.sh@10 -- # set +x
00:20:35.975 12:48:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:35.975 12:48:34 -- host/auth.sh@127 -- # (( 0 == 0 ))
00:20:35.975 12:48:34 -- host/auth.sh@129 -- # trap - SIGINT SIGTERM EXIT
00:20:35.975 12:48:34 -- host/auth.sh@130 -- # cleanup
00:20:35.975 12:48:34 -- host/auth.sh@24 -- # nvmftestfini
00:20:35.975 12:48:34 -- nvmf/common.sh@477 -- # nvmfcleanup
00:20:35.975 12:48:34 -- nvmf/common.sh@117 -- # sync
00:20:35.975 12:48:35 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:20:35.975 12:48:35 -- nvmf/common.sh@120 -- # set +e
00:20:35.975 12:48:35 -- nvmf/common.sh@121 -- # for i in {1..20}
00:20:35.975 12:48:35 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:20:35.975 rmmod nvme_tcp
00:20:35.975 rmmod nvme_fabrics
00:20:35.975 12:48:35 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:20:35.975 12:48:35 -- nvmf/common.sh@124 -- # set -e
00:20:35.975 12:48:35 -- nvmf/common.sh@125 -- # return 0
00:20:35.975 12:48:35 -- nvmf/common.sh@478 -- # '[' -n 1247449 ']'
00:20:35.975 12:48:35 -- nvmf/common.sh@479 -- # killprocess 1247449
00:20:35.975 12:48:35 -- common/autotest_common.sh@936 -- # '[' -z 1247449 ']'
00:20:35.975 12:48:35 -- common/autotest_common.sh@940 -- # kill -0 1247449
00:20:35.975 12:48:35 -- common/autotest_common.sh@941 -- # uname
00:20:35.975 12:48:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:20:36.233 12:48:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1247449
00:20:36.233 12:48:35 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:20:36.233 12:48:35 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:20:36.233 12:48:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1247449'
00:20:36.233 killing process with pid 1247449
00:20:36.233 12:48:35 -- common/autotest_common.sh@955 -- # kill 1247449
00:20:36.419 12:48:35 -- common/autotest_common.sh@960 -- # wait 1247449
00:20:36.492 12:48:35 -- nvmf/common.sh@481 -- # '[' '' == iso ']'
00:20:36.492 12:48:35 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]]
00:20:36.492 12:48:35 -- nvmf/common.sh@485 -- # nvmf_tcp_fini
00:20:36.492 12:48:35 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:20:36.492 12:48:35 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:20:36.492 12:48:35 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:20:36.492 12:48:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:20:36.492 12:48:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:20:38.392 12:48:37 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:20:38.392 12:48:37 -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0
00:20:38.392 12:48:37 -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
00:20:38.392 12:48:37 -- host/auth.sh@27 -- # clean_kernel_target
00:20:38.392 12:48:37 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]]
00:20:38.392 12:48:37 -- nvmf/common.sh@675 -- # echo 0
00:20:38.392 12:48:37 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0
00:20:38.392 12:48:37 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
00:20:38.392 12:48:37 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1
00:20:38.392 12:48:37 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
00:20:38.392 12:48:37 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*)
00:20:38.392 12:48:37 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet
00:20:38.392 12:48:37 -- nvmf/common.sh@687 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:20:39.766 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci
00:20:39.766 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci
00:20:40.025 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci
00:20:40.025 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci
00:20:40.025 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci
00:20:40.025 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci
00:20:40.025 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci
00:20:40.025 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci
00:20:40.025 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci
00:20:40.025 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci
00:20:40.025 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci
00:20:40.025 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci
00:20:40.025 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci
00:20:40.025 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci
00:20:40.025 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci
00:20:40.025 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci
00:20:41.928 0000:81:00.0 (8086 0a54): nvme -> vfio-pci
00:20:41.928 12:48:40 -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.0BX /tmp/spdk.key-null.5Kh /tmp/spdk.key-sha256.H5m /tmp/spdk.key-sha384.Ljw /tmp/spdk.key-sha512.PQw /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log
00:20:41.928 12:48:40 -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:20:43.304 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:20:43.304 0000:81:00.0 (8086 0a54): Already using the vfio-pci driver
00:20:43.304 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:20:43.304 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:20:43.304 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:20:43.304 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:20:43.304 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:20:43.304 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:20:43.304 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:20:43.304 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:20:43.304 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:20:43.304 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:20:43.304 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:20:43.304 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:20:43.304 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:20:43.304 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:20:43.304 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:20:43.304
00:20:43.304 real 0m54.193s
00:20:43.304 user 0m50.276s
00:20:43.304 sys 0m6.723s
00:20:43.304 12:48:42 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:20:43.304 12:48:42 -- common/autotest_common.sh@10 -- # set +x
00:20:43.304 ************************************
00:20:43.304 END TEST nvmf_auth
00:20:43.304 ************************************
00:20:43.563 12:48:42 -- nvmf/nvmf.sh@104 -- # [[ tcp == \t\c\p ]]
00:20:43.563 12:48:42 -- nvmf/nvmf.sh@105 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp
00:20:43.563 12:48:42 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:20:43.563 12:48:42 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:20:43.563 12:48:42 -- common/autotest_common.sh@10 -- # set +x
00:20:43.563 ************************************
00:20:43.563 START TEST nvmf_digest
00:20:43.563 ************************************
00:20:43.563 12:48:42 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp
00:20:43.563 * Looking for test storage...
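The cleanup that closed nvmf_auth above has to unwind the kernel nvmet target in child-before-parent order, since configfs refuses to remove a directory that still has symlinks or children under it. Collected from the rm/rmdir lines in the log, the sequence amounts to the sketch below; the bare 'echo 0' in the log shows no redirection target, so pointing it at the namespace's enable attribute is an assumption.

# kernel nvmet teardown in the order the log performs it (sketch)
CFG=/sys/kernel/config/nvmet
SUBNQN=nqn.2024-02.io.spdk:cnode0
HOSTNQN=nqn.2024-02.io.spdk:host0

rm "$CFG/subsystems/$SUBNQN/allowed_hosts/$HOSTNQN"      # drop the host ACL symlink first
rmdir "$CFG/hosts/$HOSTNQN"                              # then the host entry itself
echo 0 > "$CFG/subsystems/$SUBNQN/namespaces/1/enable"   # assumed target of the logged 'echo 0'
rm -f "$CFG/ports/1/subsystems/$SUBNQN"                  # unlink the subsystem from the port
rmdir "$CFG/subsystems/$SUBNQN/namespaces/1"
rmdir "$CFG/ports/1"
rmdir "$CFG/subsystems/$SUBNQN"
modprobe -r nvmet_tcp nvmet                              # modules only unload once configfs is empty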
00:20:43.563 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:43.563 12:48:42 -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:43.563 12:48:42 -- nvmf/common.sh@7 -- # uname -s 00:20:43.563 12:48:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:43.563 12:48:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:43.563 12:48:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:43.563 12:48:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:43.563 12:48:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:43.563 12:48:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:43.563 12:48:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:43.563 12:48:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:43.563 12:48:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:43.563 12:48:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:43.563 12:48:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:20:43.563 12:48:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:20:43.563 12:48:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:43.563 12:48:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:43.563 12:48:42 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:43.563 12:48:42 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:43.563 12:48:42 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:43.563 12:48:42 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:43.563 12:48:42 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:43.563 12:48:42 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:43.563 12:48:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:43.563 12:48:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:43.563 12:48:42 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:43.563 12:48:42 -- paths/export.sh@5 -- # export PATH 00:20:43.563 12:48:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:43.563 12:48:42 -- nvmf/common.sh@47 -- # : 0 00:20:43.563 12:48:42 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:43.563 12:48:42 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:43.563 12:48:42 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:43.563 12:48:42 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:43.563 12:48:42 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:43.563 12:48:42 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:43.563 12:48:42 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:43.563 12:48:42 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:43.563 12:48:42 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:20:43.563 12:48:42 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:20:43.563 12:48:42 -- host/digest.sh@16 -- # runtime=2 00:20:43.563 12:48:42 -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:20:43.563 12:48:42 -- host/digest.sh@138 -- # nvmftestinit 00:20:43.563 12:48:42 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:20:43.563 12:48:42 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:43.563 12:48:42 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:43.563 12:48:42 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:43.563 12:48:42 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:43.563 12:48:42 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:43.563 12:48:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:43.563 12:48:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:43.563 12:48:42 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:20:43.564 12:48:42 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:20:43.564 12:48:42 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:43.564 12:48:42 -- common/autotest_common.sh@10 -- # set +x 00:20:46.141 12:48:45 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:46.141 12:48:45 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:46.141 12:48:45 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:46.141 12:48:45 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:46.141 12:48:45 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:46.141 12:48:45 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:46.141 12:48:45 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:46.141 12:48:45 -- 
nvmf/common.sh@295 -- # net_devs=() 00:20:46.141 12:48:45 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:46.141 12:48:45 -- nvmf/common.sh@296 -- # e810=() 00:20:46.141 12:48:45 -- nvmf/common.sh@296 -- # local -ga e810 00:20:46.141 12:48:45 -- nvmf/common.sh@297 -- # x722=() 00:20:46.141 12:48:45 -- nvmf/common.sh@297 -- # local -ga x722 00:20:46.141 12:48:45 -- nvmf/common.sh@298 -- # mlx=() 00:20:46.141 12:48:45 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:46.141 12:48:45 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:46.141 12:48:45 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:46.141 12:48:45 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:46.141 12:48:45 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:46.141 12:48:45 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:46.141 12:48:45 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:46.141 12:48:45 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:46.141 12:48:45 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:46.141 12:48:45 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:46.141 12:48:45 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:46.141 12:48:45 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:46.141 12:48:45 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:46.141 12:48:45 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:46.141 12:48:45 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:46.141 12:48:45 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:46.141 12:48:45 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:46.141 12:48:45 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:46.141 12:48:45 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:46.141 12:48:45 -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:20:46.141 Found 0000:82:00.0 (0x8086 - 0x159b) 00:20:46.141 12:48:45 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:46.141 12:48:45 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:46.141 12:48:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:46.141 12:48:45 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:46.141 12:48:45 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:46.141 12:48:45 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:46.141 12:48:45 -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:20:46.141 Found 0000:82:00.1 (0x8086 - 0x159b) 00:20:46.141 12:48:45 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:46.141 12:48:45 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:46.141 12:48:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:46.141 12:48:45 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:46.141 12:48:45 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:46.141 12:48:45 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:46.141 12:48:45 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:46.141 12:48:45 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:46.141 12:48:45 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:46.141 12:48:45 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:46.141 12:48:45 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:46.141 12:48:45 -- nvmf/common.sh@388 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:46.141 12:48:45 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:20:46.141 Found net devices under 0000:82:00.0: cvl_0_0 00:20:46.141 12:48:45 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:46.141 12:48:45 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:46.141 12:48:45 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:46.141 12:48:45 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:46.141 12:48:45 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:46.141 12:48:45 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:20:46.141 Found net devices under 0000:82:00.1: cvl_0_1 00:20:46.141 12:48:45 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:46.141 12:48:45 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:20:46.141 12:48:45 -- nvmf/common.sh@403 -- # is_hw=yes 00:20:46.141 12:48:45 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:20:46.141 12:48:45 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:20:46.141 12:48:45 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:20:46.141 12:48:45 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:46.141 12:48:45 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:46.141 12:48:45 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:46.141 12:48:45 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:46.141 12:48:45 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:46.141 12:48:45 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:46.141 12:48:45 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:46.141 12:48:45 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:46.141 12:48:45 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:46.141 12:48:45 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:46.141 12:48:45 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:46.141 12:48:45 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:46.141 12:48:45 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:46.141 12:48:45 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:46.141 12:48:45 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:46.141 12:48:45 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:46.141 12:48:45 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:46.141 12:48:45 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:46.141 12:48:45 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:46.141 12:48:45 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:46.141 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:46.141 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.248 ms 00:20:46.141 00:20:46.141 --- 10.0.0.2 ping statistics --- 00:20:46.141 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:46.141 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:20:46.141 12:48:45 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:46.141 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:46.141 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms 00:20:46.141 00:20:46.141 --- 10.0.0.1 ping statistics --- 00:20:46.141 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:46.141 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:20:46.141 12:48:45 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:46.141 12:48:45 -- nvmf/common.sh@411 -- # return 0 00:20:46.141 12:48:45 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:46.141 12:48:45 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:46.141 12:48:45 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:20:46.141 12:48:45 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:20:46.141 12:48:45 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:46.141 12:48:45 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:20:46.141 12:48:45 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:20:46.141 12:48:45 -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:20:46.141 12:48:45 -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:20:46.141 12:48:45 -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:20:46.141 12:48:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:20:46.141 12:48:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:46.141 12:48:45 -- common/autotest_common.sh@10 -- # set +x 00:20:46.400 ************************************ 00:20:46.400 START TEST nvmf_digest_clean 00:20:46.400 ************************************ 00:20:46.400 12:48:45 -- common/autotest_common.sh@1111 -- # run_digest 00:20:46.400 12:48:45 -- host/digest.sh@120 -- # local dsa_initiator 00:20:46.400 12:48:45 -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:20:46.400 12:48:45 -- host/digest.sh@121 -- # dsa_initiator=false 00:20:46.400 12:48:45 -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:20:46.400 12:48:45 -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:20:46.400 12:48:45 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:46.400 12:48:45 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:46.400 12:48:45 -- common/autotest_common.sh@10 -- # set +x 00:20:46.400 12:48:45 -- nvmf/common.sh@470 -- # nvmfpid=1258581 00:20:46.400 12:48:45 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:20:46.400 12:48:45 -- nvmf/common.sh@471 -- # waitforlisten 1258581 00:20:46.400 12:48:45 -- common/autotest_common.sh@817 -- # '[' -z 1258581 ']' 00:20:46.400 12:48:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:46.400 12:48:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:46.400 12:48:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:46.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:46.400 12:48:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:46.400 12:48:45 -- common/autotest_common.sh@10 -- # set +x 00:20:46.400 [2024-04-16 12:48:45.325423] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 
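For orientation, the namespace plumbing that nvmftestinit assembled above reduces to the sketch below. Every command appears verbatim in the trace (the intermediate addr flushes are omitted here); cvl_0_0 is the target-side port and cvl_0_1 the initiator-side port of the same E810 NIC.

# Target port moves into its own network namespace; initiator port stays in the root ns.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

# Address the two ends: 10.0.0.1 (initiator) <-> 10.0.0.2 (target).
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

# Bring the links up and open TCP/4420 (the NVMe/TCP listen port) on the initiator side.
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# Connectivity check in both directions, matching the ping output above.
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1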
00:20:46.400 [2024-04-16 12:48:45.325494] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:46.400 EAL: No free 2048 kB hugepages reported on node 1 00:20:46.400 [2024-04-16 12:48:45.399810] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:46.658 [2024-04-16 12:48:45.505584] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:46.658 [2024-04-16 12:48:45.505651] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:46.658 [2024-04-16 12:48:45.505666] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:46.658 [2024-04-16 12:48:45.505678] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:46.658 [2024-04-16 12:48:45.505689] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:46.658 [2024-04-16 12:48:45.505718] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:46.658 12:48:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:46.658 12:48:45 -- common/autotest_common.sh@850 -- # return 0 00:20:46.658 12:48:45 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:46.658 12:48:45 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:46.658 12:48:45 -- common/autotest_common.sh@10 -- # set +x 00:20:46.658 12:48:45 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:46.658 12:48:45 -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:20:46.658 12:48:45 -- host/digest.sh@126 -- # common_target_config 00:20:46.658 12:48:45 -- host/digest.sh@43 -- # rpc_cmd 00:20:46.658 12:48:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:46.658 12:48:45 -- common/autotest_common.sh@10 -- # set +x 00:20:46.658 null0 00:20:46.658 [2024-04-16 12:48:45.718105] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:46.916 [2024-04-16 12:48:45.742353] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:46.916 12:48:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:46.916 12:48:45 -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:20:46.916 12:48:45 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:20:46.916 12:48:45 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:20:46.916 12:48:45 -- host/digest.sh@80 -- # rw=randread 00:20:46.916 12:48:45 -- host/digest.sh@80 -- # bs=4096 00:20:46.916 12:48:45 -- host/digest.sh@80 -- # qd=128 00:20:46.916 12:48:45 -- host/digest.sh@80 -- # scan_dsa=false 00:20:46.916 12:48:45 -- host/digest.sh@83 -- # bperfpid=1258639 00:20:46.916 12:48:45 -- host/digest.sh@84 -- # waitforlisten 1258639 /var/tmp/bperf.sock 00:20:46.916 12:48:45 -- common/autotest_common.sh@817 -- # '[' -z 1258639 ']' 00:20:46.916 12:48:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:46.916 12:48:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:46.916 12:48:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:20:46.916 12:48:45 -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:20:46.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:46.916 12:48:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:46.916 12:48:45 -- common/autotest_common.sh@10 -- # set +x 00:20:46.916 [2024-04-16 12:48:45.791206] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 00:20:46.916 [2024-04-16 12:48:45.791276] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1258639 ] 00:20:46.916 EAL: No free 2048 kB hugepages reported on node 1 00:20:46.916 [2024-04-16 12:48:45.861932] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:46.916 [2024-04-16 12:48:45.976149] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:47.848 12:48:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:47.848 12:48:46 -- common/autotest_common.sh@850 -- # return 0 00:20:47.848 12:48:46 -- host/digest.sh@86 -- # false 00:20:47.848 12:48:46 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:20:47.848 12:48:46 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:20:48.107 12:48:47 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:48.107 12:48:47 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:48.365 nvme0n1 00:20:48.365 12:48:47 -- host/digest.sh@92 -- # bperf_py perform_tests 00:20:48.365 12:48:47 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:48.623 Running I/O for 2 seconds... 
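Condensed for reference, the bperf pass that just started amounts to the following sketch. Every command is taken verbatim from the trace above; only SPDK_ROOT is introduced here as shorthand for the workspace path.

# Sketch of the digest-clean bperf pass. Assumes the NVMe-oF/TCP target from
# nvmftestinit is already listening on 10.0.0.2:4420 (see the trace above).
SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
sock=/var/tmp/bperf.sock

# 1. Start bdevperf paused (--wait-for-rpc) so a bdev can be attached over RPC first.
#    (The harness polls the socket -- waitforlisten -- before issuing any RPCs.)
"$SPDK_ROOT/build/examples/bdevperf" -m 2 -r "$sock" \
    -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &

# 2. Finish framework init, then attach the target with data digest enabled (--ddgst).
"$SPDK_ROOT/scripts/rpc.py" -s "$sock" framework_start_init
"$SPDK_ROOT/scripts/rpc.py" -s "$sock" bdev_nvme_attach_controller --ddgst \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# 3. Kick off the timed run; this is what prints "Running I/O for 2 seconds...".
"$SPDK_ROOT/examples/bdev/bdevperf/bdevperf.py" -s "$sock" perform_tests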
00:20:50.523 00:20:50.523 Latency(us) 00:20:50.523 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:50.523 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:20:50.523 nvme0n1 : 2.00 18549.42 72.46 0.00 0.00 6891.43 3543.80 11942.12 00:20:50.523 =================================================================================================================== 00:20:50.523 Total : 18549.42 72.46 0.00 0.00 6891.43 3543.80 11942.12 00:20:50.523 0 00:20:50.523 12:48:49 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:20:50.523 12:48:49 -- host/digest.sh@93 -- # get_accel_stats 00:20:50.523 12:48:49 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:20:50.523 12:48:49 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:20:50.523 | select(.opcode=="crc32c") 00:20:50.523 | "\(.module_name) \(.executed)"' 00:20:50.523 12:48:49 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:20:50.781 12:48:49 -- host/digest.sh@94 -- # false 00:20:50.781 12:48:49 -- host/digest.sh@94 -- # exp_module=software 00:20:50.781 12:48:49 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:20:50.781 12:48:49 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:20:50.781 12:48:49 -- host/digest.sh@98 -- # killprocess 1258639 00:20:50.781 12:48:49 -- common/autotest_common.sh@936 -- # '[' -z 1258639 ']' 00:20:50.781 12:48:49 -- common/autotest_common.sh@940 -- # kill -0 1258639 00:20:50.782 12:48:49 -- common/autotest_common.sh@941 -- # uname 00:20:50.782 12:48:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:50.782 12:48:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1258639 00:20:50.782 12:48:49 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:50.782 12:48:49 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:20:50.782 12:48:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1258639' 00:20:50.782 killing process with pid 1258639 00:20:50.782 12:48:49 -- common/autotest_common.sh@955 -- # kill 1258639 00:20:50.782 Received shutdown signal, test time was about 2.000000 seconds 00:20:50.782 00:20:50.782 Latency(us) 00:20:50.782 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:50.782 =================================================================================================================== 00:20:50.782 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:50.782 12:48:49 -- common/autotest_common.sh@960 -- # wait 1258639 00:20:51.039 12:48:50 -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:20:51.039 12:48:50 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:20:51.040 12:48:50 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:20:51.040 12:48:50 -- host/digest.sh@80 -- # rw=randread 00:20:51.040 12:48:50 -- host/digest.sh@80 -- # bs=131072 00:20:51.040 12:48:50 -- host/digest.sh@80 -- # qd=16 00:20:51.040 12:48:50 -- host/digest.sh@80 -- # scan_dsa=false 00:20:51.040 12:48:50 -- host/digest.sh@83 -- # bperfpid=1259175 00:20:51.040 12:48:50 -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:20:51.040 12:48:50 -- host/digest.sh@84 -- # waitforlisten 1259175 /var/tmp/bperf.sock 00:20:51.040 12:48:50 -- common/autotest_common.sh@817 -- # '[' -z 1259175 ']' 00:20:51.040 12:48:50 -- 
common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:51.040 12:48:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:51.040 12:48:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:51.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:51.040 12:48:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:51.040 12:48:50 -- common/autotest_common.sh@10 -- # set +x 00:20:51.297 [2024-04-16 12:48:50.110202] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 00:20:51.297 [2024-04-16 12:48:50.110290] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1259175 ] 00:20:51.297 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:51.297 Zero copy mechanism will not be used. 00:20:51.297 EAL: No free 2048 kB hugepages reported on node 1 00:20:51.297 [2024-04-16 12:48:50.182413] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:51.297 [2024-04-16 12:48:50.295956] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:52.230 12:48:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:52.230 12:48:51 -- common/autotest_common.sh@850 -- # return 0 00:20:52.230 12:48:51 -- host/digest.sh@86 -- # false 00:20:52.230 12:48:51 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:20:52.230 12:48:51 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:20:52.489 12:48:51 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:52.489 12:48:51 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:53.054 nvme0n1 00:20:53.055 12:48:51 -- host/digest.sh@92 -- # bperf_py perform_tests 00:20:53.055 12:48:51 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:53.055 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:53.055 Zero copy mechanism will not be used. 00:20:53.055 Running I/O for 2 seconds... 
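The pass/fail decision that follows each run hinges on the accel framework's crc32c counters, read over the same bperf socket. A minimal sketch of that check, using the exact RPC and jq filter from the trace; with scan_dsa=false the expected module is software.

# Pull accel stats from bdevperf and keep only the crc32c operation.
SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
read -r acc_module acc_executed < <(
    "$SPDK_ROOT/scripts/rpc.py" -s /var/tmp/bperf.sock accel_get_stats \
    | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')

# Digests must have been computed at least once, and (with DSA off) in software.
(( acc_executed > 0 )) && [[ $acc_module == software ]]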
00:20:55.617 00:20:55.617 Latency(us) 00:20:55.617 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:55.617 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:20:55.617 nvme0n1 : 2.00 2445.07 305.63 0.00 0.00 6539.16 4563.25 13883.92 00:20:55.617 =================================================================================================================== 00:20:55.617 Total : 2445.07 305.63 0.00 0.00 6539.16 4563.25 13883.92 00:20:55.617 0 00:20:55.617 12:48:54 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:20:55.617 12:48:54 -- host/digest.sh@93 -- # get_accel_stats 00:20:55.617 12:48:54 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:20:55.617 12:48:54 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:20:55.617 12:48:54 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:20:55.617 | select(.opcode=="crc32c") 00:20:55.617 | "\(.module_name) \(.executed)"' 00:20:55.617 12:48:54 -- host/digest.sh@94 -- # false 00:20:55.617 12:48:54 -- host/digest.sh@94 -- # exp_module=software 00:20:55.617 12:48:54 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:20:55.617 12:48:54 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:20:55.617 12:48:54 -- host/digest.sh@98 -- # killprocess 1259175 00:20:55.617 12:48:54 -- common/autotest_common.sh@936 -- # '[' -z 1259175 ']' 00:20:55.617 12:48:54 -- common/autotest_common.sh@940 -- # kill -0 1259175 00:20:55.617 12:48:54 -- common/autotest_common.sh@941 -- # uname 00:20:55.617 12:48:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:55.617 12:48:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1259175 00:20:55.617 12:48:54 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:55.617 12:48:54 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:20:55.617 12:48:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1259175' 00:20:55.617 killing process with pid 1259175 00:20:55.617 12:48:54 -- common/autotest_common.sh@955 -- # kill 1259175 00:20:55.617 Received shutdown signal, test time was about 2.000000 seconds 00:20:55.617 00:20:55.617 Latency(us) 00:20:55.617 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:55.617 =================================================================================================================== 00:20:55.617 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:55.617 12:48:54 -- common/autotest_common.sh@960 -- # wait 1259175 00:20:55.617 12:48:54 -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:20:55.617 12:48:54 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:20:55.617 12:48:54 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:20:55.617 12:48:54 -- host/digest.sh@80 -- # rw=randwrite 00:20:55.617 12:48:54 -- host/digest.sh@80 -- # bs=4096 00:20:55.617 12:48:54 -- host/digest.sh@80 -- # qd=128 00:20:55.617 12:48:54 -- host/digest.sh@80 -- # scan_dsa=false 00:20:55.876 12:48:54 -- host/digest.sh@83 -- # bperfpid=1259718 00:20:55.876 12:48:54 -- host/digest.sh@84 -- # waitforlisten 1259718 /var/tmp/bperf.sock 00:20:55.876 12:48:54 -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:20:55.876 12:48:54 -- common/autotest_common.sh@817 -- # '[' -z 1259718 ']' 00:20:55.876 12:48:54 
-- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:55.876 12:48:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:55.876 12:48:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:55.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:55.876 12:48:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:55.876 12:48:54 -- common/autotest_common.sh@10 -- # set +x 00:20:55.876 [2024-04-16 12:48:54.727357] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 00:20:55.876 [2024-04-16 12:48:54.727445] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1259718 ] 00:20:55.876 EAL: No free 2048 kB hugepages reported on node 1 00:20:55.876 [2024-04-16 12:48:54.799604] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:55.876 [2024-04-16 12:48:54.918179] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:56.809 12:48:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:56.809 12:48:55 -- common/autotest_common.sh@850 -- # return 0 00:20:56.809 12:48:55 -- host/digest.sh@86 -- # false 00:20:56.809 12:48:55 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:20:56.809 12:48:55 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:20:57.067 12:48:56 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:57.067 12:48:56 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:57.633 nvme0n1 00:20:57.633 12:48:56 -- host/digest.sh@92 -- # bperf_py perform_tests 00:20:57.633 12:48:56 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:57.633 Running I/O for 2 seconds... 
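Schematically, nvmf_digest_clean is the same run_bperf helper invoked four times over a small workload matrix; the per-call signature (rw, bs, qd, scan_dsa) is taken from the `local rw bs qd scan_dsa` line in the trace, and the loop below is a condensed reading rather than the literal body of digest.sh.

# Workload matrix covered by the clean digest test: both directions, two I/O geometries.
for args in "randread 4096 128" "randread 131072 16" \
            "randwrite 4096 128" "randwrite 131072 16"; do
    run_bperf $args false   # final arg: scan_dsa=false, so crc32c stays in software
done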
00:21:00.162 00:21:00.162 Latency(us) 00:21:00.162 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:00.162 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:00.162 nvme0n1 : 2.00 19506.63 76.20 0.00 0.00 6551.19 2269.49 15534.46 00:21:00.162 =================================================================================================================== 00:21:00.162 Total : 19506.63 76.20 0.00 0.00 6551.19 2269.49 15534.46 00:21:00.162 0 00:21:00.162 12:48:58 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:21:00.162 12:48:58 -- host/digest.sh@93 -- # get_accel_stats 00:21:00.162 12:48:58 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:21:00.162 12:48:58 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:21:00.162 12:48:58 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:21:00.162 | select(.opcode=="crc32c") 00:21:00.162 | "\(.module_name) \(.executed)"' 00:21:00.162 12:48:58 -- host/digest.sh@94 -- # false 00:21:00.162 12:48:58 -- host/digest.sh@94 -- # exp_module=software 00:21:00.162 12:48:58 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:21:00.162 12:48:58 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:00.162 12:48:58 -- host/digest.sh@98 -- # killprocess 1259718 00:21:00.162 12:48:58 -- common/autotest_common.sh@936 -- # '[' -z 1259718 ']' 00:21:00.162 12:48:58 -- common/autotest_common.sh@940 -- # kill -0 1259718 00:21:00.162 12:48:58 -- common/autotest_common.sh@941 -- # uname 00:21:00.162 12:48:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:00.162 12:48:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1259718 00:21:00.162 12:48:58 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:21:00.162 12:48:58 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:21:00.162 12:48:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1259718' 00:21:00.162 killing process with pid 1259718 00:21:00.162 12:48:58 -- common/autotest_common.sh@955 -- # kill 1259718 00:21:00.162 Received shutdown signal, test time was about 2.000000 seconds 00:21:00.162 00:21:00.162 Latency(us) 00:21:00.162 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:00.162 =================================================================================================================== 00:21:00.162 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:00.162 12:48:58 -- common/autotest_common.sh@960 -- # wait 1259718 00:21:00.421 12:48:59 -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:21:00.421 12:48:59 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:21:00.421 12:48:59 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:21:00.421 12:48:59 -- host/digest.sh@80 -- # rw=randwrite 00:21:00.421 12:48:59 -- host/digest.sh@80 -- # bs=131072 00:21:00.421 12:48:59 -- host/digest.sh@80 -- # qd=16 00:21:00.421 12:48:59 -- host/digest.sh@80 -- # scan_dsa=false 00:21:00.421 12:48:59 -- host/digest.sh@83 -- # bperfpid=1260253 00:21:00.421 12:48:59 -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:21:00.421 12:48:59 -- host/digest.sh@84 -- # waitforlisten 1260253 /var/tmp/bperf.sock 00:21:00.421 12:48:59 -- common/autotest_common.sh@817 -- # '[' -z 1260253 ']' 00:21:00.421 
12:48:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:00.421 12:48:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:00.421 12:48:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:00.421 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:00.421 12:48:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:00.421 12:48:59 -- common/autotest_common.sh@10 -- # set +x 00:21:00.421 [2024-04-16 12:48:59.320748] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 00:21:00.421 [2024-04-16 12:48:59.320824] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1260253 ] 00:21:00.421 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:00.421 Zero copy mechanism will not be used. 00:21:00.421 EAL: No free 2048 kB hugepages reported on node 1 00:21:00.421 [2024-04-16 12:48:59.392498] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:00.679 [2024-04-16 12:48:59.507382] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:01.245 12:49:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:01.245 12:49:00 -- common/autotest_common.sh@850 -- # return 0 00:21:01.245 12:49:00 -- host/digest.sh@86 -- # false 00:21:01.245 12:49:00 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:21:01.245 12:49:00 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:21:01.812 12:49:00 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:01.812 12:49:00 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:02.070 nvme0n1 00:21:02.070 12:49:00 -- host/digest.sh@92 -- # bperf_py perform_tests 00:21:02.070 12:49:00 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:02.070 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:02.070 Zero copy mechanism will not be used. 00:21:02.070 Running I/O for 2 seconds... 
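Each bperf run ends with the same teardown, visible above as the killprocess trace: confirm the pid is still alive, check what it actually is before signalling it, kill it, then wait to reap the exit status. A condensed sketch; the real helper in autotest_common.sh carries extra branches (notably a process_name=sudo special case) that are omitted here.

# Teardown used after every bperf run (condensed sketch).
killprocess() {
    local pid=$1
    kill -0 "$pid" || return 1                       # is the process still running?
    local process_name
    process_name=$(ps --no-headers -o comm= "$pid")  # e.g. reactor_1 for bdevperf above
    # (the real helper special-cases process_name=sudo; that branch is omitted here)
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                                      # reap it and propagate the status
}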
00:21:04.597 00:21:04.597 Latency(us) 00:21:04.598 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:04.598 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:21:04.598 nvme0n1 : 2.00 3511.25 438.91 0.00 0.00 4546.90 3470.98 12913.02 00:21:04.598 =================================================================================================================== 00:21:04.598 Total : 3511.25 438.91 0.00 0.00 4546.90 3470.98 12913.02 00:21:04.598 0 00:21:04.598 12:49:03 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:21:04.598 12:49:03 -- host/digest.sh@93 -- # get_accel_stats 00:21:04.598 12:49:03 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:21:04.598 12:49:03 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:21:04.598 | select(.opcode=="crc32c") 00:21:04.598 | "\(.module_name) \(.executed)"' 00:21:04.598 12:49:03 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:21:04.598 12:49:03 -- host/digest.sh@94 -- # false 00:21:04.598 12:49:03 -- host/digest.sh@94 -- # exp_module=software 00:21:04.598 12:49:03 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:21:04.598 12:49:03 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:04.598 12:49:03 -- host/digest.sh@98 -- # killprocess 1260253 00:21:04.598 12:49:03 -- common/autotest_common.sh@936 -- # '[' -z 1260253 ']' 00:21:04.598 12:49:03 -- common/autotest_common.sh@940 -- # kill -0 1260253 00:21:04.598 12:49:03 -- common/autotest_common.sh@941 -- # uname 00:21:04.598 12:49:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:04.598 12:49:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1260253 00:21:04.598 12:49:03 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:21:04.598 12:49:03 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:21:04.598 12:49:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1260253' 00:21:04.598 killing process with pid 1260253 00:21:04.598 12:49:03 -- common/autotest_common.sh@955 -- # kill 1260253 00:21:04.598 Received shutdown signal, test time was about 2.000000 seconds 00:21:04.598 00:21:04.598 Latency(us) 00:21:04.598 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:04.598 =================================================================================================================== 00:21:04.598 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:04.598 12:49:03 -- common/autotest_common.sh@960 -- # wait 1260253 00:21:04.598 12:49:03 -- host/digest.sh@132 -- # killprocess 1258581 00:21:04.598 12:49:03 -- common/autotest_common.sh@936 -- # '[' -z 1258581 ']' 00:21:04.598 12:49:03 -- common/autotest_common.sh@940 -- # kill -0 1258581 00:21:04.598 12:49:03 -- common/autotest_common.sh@941 -- # uname 00:21:04.598 12:49:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:04.598 12:49:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1258581 00:21:04.598 12:49:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:04.598 12:49:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:04.598 12:49:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1258581' 00:21:04.598 killing process with pid 1258581 00:21:04.598 12:49:03 -- common/autotest_common.sh@955 -- # kill 1258581 00:21:04.598 12:49:03 -- common/autotest_common.sh@960 -- # wait 1258581 00:21:05.165 
00:21:05.165 real 0m18.657s 00:21:05.165 user 0m37.500s 00:21:05.165 sys 0m4.774s 00:21:05.165 12:49:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:05.165 12:49:03 -- common/autotest_common.sh@10 -- # set +x 00:21:05.166 ************************************ 00:21:05.166 END TEST nvmf_digest_clean 00:21:05.166 ************************************ 00:21:05.166 12:49:03 -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:21:05.166 12:49:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:21:05.166 12:49:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:05.166 12:49:03 -- common/autotest_common.sh@10 -- # set +x 00:21:05.166 ************************************ 00:21:05.166 START TEST nvmf_digest_error 00:21:05.166 ************************************ 00:21:05.166 12:49:04 -- common/autotest_common.sh@1111 -- # run_digest_error 00:21:05.166 12:49:04 -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:21:05.166 12:49:04 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:05.166 12:49:04 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:05.166 12:49:04 -- common/autotest_common.sh@10 -- # set +x 00:21:05.166 12:49:04 -- nvmf/common.sh@470 -- # nvmfpid=1260828 00:21:05.166 12:49:04 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:21:05.166 12:49:04 -- nvmf/common.sh@471 -- # waitforlisten 1260828 00:21:05.166 12:49:04 -- common/autotest_common.sh@817 -- # '[' -z 1260828 ']' 00:21:05.166 12:49:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:05.166 12:49:04 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:05.166 12:49:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:05.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:05.166 12:49:04 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:05.166 12:49:04 -- common/autotest_common.sh@10 -- # set +x 00:21:05.166 [2024-04-16 12:49:04.113422] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 00:21:05.166 [2024-04-16 12:49:04.113510] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:05.166 EAL: No free 2048 kB hugepages reported on node 1 00:21:05.166 [2024-04-16 12:49:04.186514] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:05.424 [2024-04-16 12:49:04.291744] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:05.424 [2024-04-16 12:49:04.291802] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:05.424 [2024-04-16 12:49:04.291825] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:05.424 [2024-04-16 12:49:04.291838] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:05.424 [2024-04-16 12:49:04.291848] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
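The error-path target is started the same way the clean-path one was: inside the cvl_0_0_ns_spdk namespace set up earlier, paused until RPC configuration completes. A sketch of that startup; the polling loop only approximates what waitforlisten does internally.

# Bring up nvmf_tgt for the error tests inside the test namespace.
SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
ip netns exec cvl_0_0_ns_spdk \
    "$SPDK_ROOT/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --wait-for-rpc &
nvmfpid=$!

# Wait until the RPC socket answers before configuring the target.
until "$SPDK_ROOT/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
done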
00:21:05.424 [2024-04-16 12:49:04.291890] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:06.357 12:49:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:06.357 12:49:05 -- common/autotest_common.sh@850 -- # return 0 00:21:06.357 12:49:05 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:06.357 12:49:05 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:06.357 12:49:05 -- common/autotest_common.sh@10 -- # set +x 00:21:06.357 12:49:05 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:06.357 12:49:05 -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:21:06.357 12:49:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:06.357 12:49:05 -- common/autotest_common.sh@10 -- # set +x 00:21:06.357 [2024-04-16 12:49:05.118463] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:21:06.357 12:49:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:06.357 12:49:05 -- host/digest.sh@105 -- # common_target_config 00:21:06.357 12:49:05 -- host/digest.sh@43 -- # rpc_cmd 00:21:06.357 12:49:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:06.357 12:49:05 -- common/autotest_common.sh@10 -- # set +x 00:21:06.357 null0 00:21:06.357 [2024-04-16 12:49:05.241739] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:06.357 [2024-04-16 12:49:05.266000] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:06.357 12:49:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:06.357 12:49:05 -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:21:06.357 12:49:05 -- host/digest.sh@54 -- # local rw bs qd 00:21:06.357 12:49:05 -- host/digest.sh@56 -- # rw=randread 00:21:06.357 12:49:05 -- host/digest.sh@56 -- # bs=4096 00:21:06.357 12:49:05 -- host/digest.sh@56 -- # qd=128 00:21:06.357 12:49:05 -- host/digest.sh@58 -- # bperfpid=1260978 00:21:06.357 12:49:05 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:21:06.357 12:49:05 -- host/digest.sh@60 -- # waitforlisten 1260978 /var/tmp/bperf.sock 00:21:06.357 12:49:05 -- common/autotest_common.sh@817 -- # '[' -z 1260978 ']' 00:21:06.357 12:49:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:06.357 12:49:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:06.357 12:49:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:06.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:06.357 12:49:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:06.357 12:49:05 -- common/autotest_common.sh@10 -- # set +x 00:21:06.357 [2024-04-16 12:49:05.316029] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 
00:21:06.357 [2024-04-16 12:49:05.316105] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1260978 ] 00:21:06.357 EAL: No free 2048 kB hugepages reported on node 1 00:21:06.357 [2024-04-16 12:49:05.393772] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:06.614 [2024-04-16 12:49:05.511738] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:06.614 12:49:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:06.614 12:49:05 -- common/autotest_common.sh@850 -- # return 0 00:21:06.614 12:49:05 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:06.614 12:49:05 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:06.871 12:49:05 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:21:06.871 12:49:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:06.871 12:49:05 -- common/autotest_common.sh@10 -- # set +x 00:21:06.871 12:49:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:06.871 12:49:05 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:06.871 12:49:05 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:07.436 nvme0n1 00:21:07.436 12:49:06 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:21:07.436 12:49:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:07.436 12:49:06 -- common/autotest_common.sh@10 -- # set +x 00:21:07.436 12:49:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:07.436 12:49:06 -- host/digest.sh@69 -- # bperf_py perform_tests 00:21:07.436 12:49:06 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:07.436 Running I/O for 2 seconds... 
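What distinguishes nvmf_digest_error from the clean variant is wired up right here: crc32c on the target is routed to the accel "error" module (the rpc_accel_assign_opc notice above), the initiator is told to retry forever, and a budget of 256 corrupted digests is armed before the timed run. Condensed from the RPCs in the trace above:

# Initiator side (bperf): keep NVMe error statistics, never give up on retries.
SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK_ROOT/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_nvme_set_options \
    --nvme-error-stat --bdev-retry-count -1        # -1 => retry indefinitely

# Target side: clear any stale injection, then corrupt the next 256 crc32c results.
"$SPDK_ROOT/scripts/rpc.py" -s /var/tmp/spdk.sock accel_error_inject_error -o crc32c -t disable
"$SPDK_ROOT/scripts/rpc.py" -s /var/tmp/spdk.sock accel_error_inject_error -o crc32c -t corrupt -i 256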
00:21:07.436 [2024-04-16 12:49:06.363578] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a1490) 00:21:07.436 [2024-04-16 12:49:06.363628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:14908 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.436 [2024-04-16 12:49:06.363648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.436 [2024-04-16 12:49:06.376209] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a1490) 00:21:07.436 [2024-04-16 12:49:06.376250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:12612 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.436 [2024-04-16 12:49:06.376267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.436 [2024-04-16 12:49:06.390052] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a1490) 00:21:07.436 [2024-04-16 12:49:06.390081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:11745 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.436 [2024-04-16 12:49:06.390097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.436 [2024-04-16 12:49:06.402737] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a1490) 00:21:07.436 [2024-04-16 12:49:06.402769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11768 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.436 [2024-04-16 12:49:06.402787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.436 [2024-04-16 12:49:06.413540] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a1490) 00:21:07.436 [2024-04-16 12:49:06.413595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:1622 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.436 [2024-04-16 12:49:06.413627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.436 [2024-04-16 12:49:06.426452] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a1490) 00:21:07.436 [2024-04-16 12:49:06.426481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:1449 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.436 [2024-04-16 12:49:06.426497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.436 [2024-04-16 12:49:06.439226] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a1490) 00:21:07.436 [2024-04-16 12:49:06.439255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:24735 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.436 [2024-04-16 12:49:06.439271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.437 [2024-04-16 12:49:06.451684] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a1490) 00:21:07.437 [2024-04-16 12:49:06.451729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:4000 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.437 [2024-04-16 12:49:06.451747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.437 [2024-04-16 12:49:06.463522] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a1490) 00:21:07.437 [2024-04-16 12:49:06.463572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:22886 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.437 [2024-04-16 12:49:06.463592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.437 [2024-04-16 12:49:06.476495] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a1490) 00:21:07.437 [2024-04-16 12:49:06.476524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:10474 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.437 [2024-04-16 12:49:06.476555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.437 [2024-04-16 12:49:06.488756] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a1490) 00:21:07.437 [2024-04-16 12:49:06.488785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:22517 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.437 [2024-04-16 12:49:06.488802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.437 [2024-04-16 12:49:06.499291] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a1490) 00:21:07.437 [2024-04-16 12:49:06.499320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:12997 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.437 [2024-04-16 12:49:06.499336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.696 [2024-04-16 12:49:06.513428] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a1490) 00:21:07.696 [2024-04-16 12:49:06.513458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:3099 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.696 [2024-04-16 12:49:06.513483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.696 [2024-04-16 12:49:06.524710] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a1490) 00:21:07.696 [2024-04-16 12:49:06.524740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:8211 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.696 [2024-04-16 12:49:06.524757] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.696 [2024-04-16 12:49:06.536735] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a1490) 00:21:07.696 [2024-04-16 12:49:06.536766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:5929 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.696 [2024-04-16 12:49:06.536783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.696 [2024-04-16 12:49:06.549606] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a1490) 00:21:07.696 [2024-04-16 12:49:06.549636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24010 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.696 [2024-04-16 12:49:06.549654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.696 [2024-04-16 12:49:06.561334] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a1490) 00:21:07.696 [2024-04-16 12:49:06.561363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:3029 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.696 [2024-04-16 12:49:06.561379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.696 [2024-04-16 12:49:06.574353] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a1490) 00:21:07.696 [2024-04-16 12:49:06.574382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:10849 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.696 [2024-04-16 12:49:06.574399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.696 [2024-04-16 12:49:06.586429] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a1490) 00:21:07.696 [2024-04-16 12:49:06.586459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:1332 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.696 [2024-04-16 12:49:06.586475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.696 [2024-04-16 12:49:06.598969] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a1490) 00:21:07.696 [2024-04-16 12:49:06.598999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:17475 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.696 [2024-04-16 12:49:06.599030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.696 [2024-04-16 12:49:06.610304] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a1490) 00:21:07.696 [2024-04-16 12:49:06.610339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22690 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
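Each corrupted digest surfaces in the trace as one of these pairs: the initiator detects the mismatch ("data digest error on tqpair"), the command completes with status (00/22), i.e. status code type 0x0 (generic) and status code 0x22 (Transient Transport Error), and, because of --bdev-retry-count -1, it is resubmitted. A quick way to tally the injections from a saved console log (the file name below is illustrative):

# Count injected digest failures: one (00/22) completion per corrupted crc32c.
grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' nvmf_digest_error.log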
00:21:07.696 [2024-04-16 12:49:06.610358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.696 [2024-04-16 12:49:06.625793] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a1490) 00:21:07.696 [2024-04-16 12:49:06.625833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:14141 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.696 [2024-04-16 12:49:06.625868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.696 [2024-04-16 12:49:06.640277] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a1490) 00:21:07.696 [2024-04-16 12:49:06.640312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:25276 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.696 [2024-04-16 12:49:06.640331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.696 [2024-04-16 12:49:06.652520] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a1490) 00:21:07.696 [2024-04-16 12:49:06.652554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:243 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.696 [2024-04-16 12:49:06.652584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.696 [2024-04-16 12:49:06.667493] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a1490) 00:21:07.696 [2024-04-16 12:49:06.667528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:15172 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.696 [2024-04-16 12:49:06.667547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.696 [2024-04-16 12:49:06.682433] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a1490) 00:21:07.696 [2024-04-16 12:49:06.682467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:16454 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.696 [2024-04-16 12:49:06.682487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.696 [2024-04-16 12:49:06.694762] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a1490) 00:21:07.696 [2024-04-16 12:49:06.694793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:24601 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.696 [2024-04-16 12:49:06.694810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.696 [2024-04-16 12:49:06.710529] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a1490) 00:21:07.696 [2024-04-16 12:49:06.710573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 
nsid:1 lba:6094 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.696 [2024-04-16 12:49:06.710596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.696 [2024-04-16 12:49:06.725345] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a1490) 00:21:07.696 [2024-04-16 12:49:06.725380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:21288 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.696 [2024-04-16 12:49:06.725401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.696 [2024-04-16 12:49:06.738719] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a1490) 00:21:07.696 [2024-04-16 12:49:06.738748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:16196 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.696 [2024-04-16 12:49:06.738764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.696 [2024-04-16 12:49:06.750976] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a1490) 00:21:07.696 [2024-04-16 12:49:06.751006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:11092 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.696 [2024-04-16 12:49:06.751023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.696 [2024-04-16 12:49:06.762827] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a1490) 00:21:07.696 [2024-04-16 12:49:06.762876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:25590 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.696 [2024-04-16 12:49:06.762897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.955 [2024-04-16 12:49:06.778788] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a1490) 00:21:07.955 [2024-04-16 12:49:06.778821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:5231 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.955 [2024-04-16 12:49:06.778843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.955 [2024-04-16 12:49:06.794526] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a1490) 00:21:07.955 [2024-04-16 12:49:06.794561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:52 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.955 [2024-04-16 12:49:06.794590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.955 [2024-04-16 12:49:06.806721] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a1490) 00:21:07.955 [2024-04-16 12:49:06.806754] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1571 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.955 [2024-04-16 12:49:06.806772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.955 [2024-04-16 12:49:06.824416] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a1490) 00:21:07.955 [2024-04-16 12:49:06.824449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14785 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.955 [2024-04-16 12:49:06.824469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.955 [2024-04-16 12:49:06.836637] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a1490) 00:21:07.955 [2024-04-16 12:49:06.836666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:2536 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.955 [2024-04-16 12:49:06.836698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.955 [2024-04-16 12:49:06.852371] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a1490) 00:21:07.955 [2024-04-16 12:49:06.852406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:17218 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.955 [2024-04-16 12:49:06.852425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.955 [2024-04-16 12:49:06.867435] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a1490) 00:21:07.955 [2024-04-16 12:49:06.867476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:15342 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.955 [2024-04-16 12:49:06.867496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.955 [2024-04-16 12:49:06.879850] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a1490) 00:21:07.955 [2024-04-16 12:49:06.879892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:2224 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.955 [2024-04-16 12:49:06.879908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.955 [2024-04-16 12:49:06.894247] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a1490) 00:21:07.955 [2024-04-16 12:49:06.894282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:23314 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.955 [2024-04-16 12:49:06.894301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.955 [2024-04-16 12:49:06.907793] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a1490) 
00:21:07.955 [2024-04-16 12:49:06.907827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:16072 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.955 [2024-04-16 12:49:06.907843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.955 [2024-04-16 12:49:06.920018] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a1490) 00:21:07.955 [2024-04-16 12:49:06.920051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:19404 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.955 [2024-04-16 12:49:06.920070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.955 [2024-04-16 12:49:06.936228] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a1490) 00:21:07.955 [2024-04-16 12:49:06.936261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:9593 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.955 [2024-04-16 12:49:06.936280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.955 [2024-04-16 12:49:06.947945] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a1490) 00:21:07.955 [2024-04-16 12:49:06.947978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:21021 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.955 [2024-04-16 12:49:06.947997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.955 [2024-04-16 12:49:06.962073] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a1490) 00:21:07.955 [2024-04-16 12:49:06.962106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:10580 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.955 [2024-04-16 12:49:06.962125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.955 [2024-04-16 12:49:06.976839] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a1490) 00:21:07.955 [2024-04-16 12:49:06.976866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:23172 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.955 [2024-04-16 12:49:06.976882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.955 [2024-04-16 12:49:06.990755] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a1490) 00:21:07.955 [2024-04-16 12:49:06.990783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:11261 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.955 [2024-04-16 12:49:06.990813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.955 [2024-04-16 12:49:07.003088] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a1490) 00:21:07.955 [2024-04-16 12:49:07.003122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:18922 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.955 [2024-04-16 12:49:07.003141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.955 [2024-04-16 12:49:07.018509] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a1490) 00:21:07.955 [2024-04-16 12:49:07.018542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:11166 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.955 [2024-04-16 12:49:07.018570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:08.214 [2024-04-16 12:49:07.030014] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a1490) 00:21:08.214 [2024-04-16 12:49:07.030048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:5263 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.214 [2024-04-16 12:49:07.030067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:08.214 [2024-04-16 12:49:07.045010] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a1490) 00:21:08.214 [2024-04-16 12:49:07.045044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:21841 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.214 [2024-04-16 12:49:07.045063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:08.214 [2024-04-16 12:49:07.059389] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a1490) 00:21:08.214 [2024-04-16 12:49:07.059422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:17413 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.214 [2024-04-16 12:49:07.059442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:08.214 [2024-04-16 12:49:07.071425] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a1490) 00:21:08.214 [2024-04-16 12:49:07.071457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:19092 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.214 [2024-04-16 12:49:07.071475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:08.214 [2024-04-16 12:49:07.086400] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a1490) 00:21:08.214 [2024-04-16 12:49:07.086433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:12308 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.214 [2024-04-16 12:49:07.086452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:21:08.214 [2024-04-16 12:49:07.098912] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a1490) 00:21:08.214 [2024-04-16 12:49:07.098945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:3512 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.214 [2024-04-16 12:49:07.098969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:08.214 [2024-04-16 12:49:07.112068] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a1490) 00:21:08.214 [2024-04-16 12:49:07.112101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:17072 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.214 [2024-04-16 12:49:07.112121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:08.214 [2024-04-16 12:49:07.126317] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a1490) 00:21:08.214 [2024-04-16 12:49:07.126350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:10577 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.214 [2024-04-16 12:49:07.126369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:08.214 [2024-04-16 12:49:07.139938] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a1490) 00:21:08.214 [2024-04-16 12:49:07.139971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:15082 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.214 [2024-04-16 12:49:07.139990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:08.214 [2024-04-16 12:49:07.154462] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a1490) 00:21:08.214 [2024-04-16 12:49:07.154494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:25254 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.214 [2024-04-16 12:49:07.154514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:08.214 [2024-04-16 12:49:07.168904] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a1490) 00:21:08.214 [2024-04-16 12:49:07.168947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:21211 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.214 [2024-04-16 12:49:07.168966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:08.214 [2024-04-16 12:49:07.182743] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a1490) 00:21:08.214 [2024-04-16 12:49:07.182770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:16161 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.214 [2024-04-16 12:49:07.182801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:08.214 [2024-04-16 12:49:07.196202] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a1490) 00:21:08.214 [2024-04-16 12:49:07.196235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:16538 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.214 [2024-04-16 12:49:07.196253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:08.214 [2024-04-16 12:49:07.208857] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a1490) 00:21:08.214 [2024-04-16 12:49:07.208905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:191 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.214 [2024-04-16 12:49:07.208923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:08.214 [2024-04-16 12:49:07.222371] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a1490) 00:21:08.214 [2024-04-16 12:49:07.222410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:11813 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.214 [2024-04-16 12:49:07.222430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:08.214 [2024-04-16 12:49:07.238818] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a1490) 00:21:08.214 [2024-04-16 12:49:07.238846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8575 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.214 [2024-04-16 12:49:07.238878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:08.214 [2024-04-16 12:49:07.251666] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a1490) 00:21:08.214 [2024-04-16 12:49:07.251694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:15448 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.214 [2024-04-16 12:49:07.251726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:08.214 [2024-04-16 12:49:07.266143] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a1490) 00:21:08.214 [2024-04-16 12:49:07.266176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:10709 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.214 [2024-04-16 12:49:07.266195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:08.214 [2024-04-16 12:49:07.281132] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a1490) 00:21:08.214 [2024-04-16 12:49:07.281165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:22748 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.214 [2024-04-16 12:49:07.281184] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:08.473 [2024-04-16 12:49:07.294018] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a1490) 00:21:08.473 [2024-04-16 12:49:07.294052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:7702 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.473 [2024-04-16 12:49:07.294071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:08.473 [2024-04-16 12:49:07.307466] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a1490) 00:21:08.473 [2024-04-16 12:49:07.307499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:11023 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.473 [2024-04-16 12:49:07.307518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:08.473 [2024-04-16 12:49:07.321265] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a1490) 00:21:08.473 [2024-04-16 12:49:07.321298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:24300 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.473 [2024-04-16 12:49:07.321317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:08.473 [2024-04-16 12:49:07.335197] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a1490) 00:21:08.473 [2024-04-16 12:49:07.335229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11264 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.473 [2024-04-16 12:49:07.335248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:08.473 [2024-04-16 12:49:07.349188] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a1490) 00:21:08.473 [2024-04-16 12:49:07.349221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:18626 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.473 [2024-04-16 12:49:07.349240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:08.473 [2024-04-16 12:49:07.361517] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a1490) 00:21:08.473 [2024-04-16 12:49:07.361550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:9558 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.473 [2024-04-16 12:49:07.361577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:08.473 [2024-04-16 12:49:07.376150] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a1490) 00:21:08.473 [2024-04-16 12:49:07.376183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:25143 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:08.473 [2024-04-16 12:49:07.376203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:08.473 [2024-04-16 12:49:07.388729] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a1490) 00:21:08.473 [2024-04-16 12:49:07.388756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:4930 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.473 [2024-04-16 12:49:07.388788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:08.473 [2024-04-16 12:49:07.403663] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a1490) 00:21:08.473 [2024-04-16 12:49:07.403691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:11662 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.473 [2024-04-16 12:49:07.403722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:08.473 [2024-04-16 12:49:07.419803] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a1490) 00:21:08.473 [2024-04-16 12:49:07.419841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5490 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.473 [2024-04-16 12:49:07.419860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:08.473 [2024-04-16 12:49:07.433297] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a1490) 00:21:08.473 [2024-04-16 12:49:07.433333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:2331 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.473 [2024-04-16 12:49:07.433352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:08.473 [2024-04-16 12:49:07.448519] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a1490) 00:21:08.473 [2024-04-16 12:49:07.448553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:13057 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.473 [2024-04-16 12:49:07.448585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:08.473 [2024-04-16 12:49:07.462589] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a1490) 00:21:08.473 [2024-04-16 12:49:07.462643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:2742 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.473 [2024-04-16 12:49:07.462660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:08.473 [2024-04-16 12:49:07.476490] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a1490) 00:21:08.473 [2024-04-16 12:49:07.476524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:12638 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.473 [2024-04-16 12:49:07.476543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:08.473 [2024-04-16 12:49:07.489885] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a1490) 00:21:08.473 [2024-04-16 12:49:07.489918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:20010 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.473 [2024-04-16 12:49:07.489938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:08.473 [2024-04-16 12:49:07.502782] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a1490) 00:21:08.473 [2024-04-16 12:49:07.502817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:1611 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.473 [2024-04-16 12:49:07.502848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:08.473 [2024-04-16 12:49:07.517431] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a1490) 00:21:08.473 [2024-04-16 12:49:07.517464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:15014 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.473 [2024-04-16 12:49:07.517483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:08.473 [2024-04-16 12:49:07.529681] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a1490) 00:21:08.473 [2024-04-16 12:49:07.529708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:6336 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.473 [2024-04-16 12:49:07.529739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:08.732 [2024-04-16 12:49:07.544583] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a1490) 00:21:08.732 [2024-04-16 12:49:07.544629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4905 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.732 [2024-04-16 12:49:07.544646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:08.732 [2024-04-16 12:49:07.558291] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a1490) 00:21:08.732 [2024-04-16 12:49:07.558325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:3297 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.732 [2024-04-16 12:49:07.558344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:08.732 [2024-04-16 12:49:07.569733] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a1490) 00:21:08.732 [2024-04-16 12:49:07.569760] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:22009 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.732 [2024-04-16 12:49:07.569791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:08.732 [2024-04-16 12:49:07.585074] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a1490) 00:21:08.732 [2024-04-16 12:49:07.585107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14062 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.732 [2024-04-16 12:49:07.585126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:08.732 [2024-04-16 12:49:07.599394] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a1490) 00:21:08.732 [2024-04-16 12:49:07.599429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:5899 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.732 [2024-04-16 12:49:07.599448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:08.732 [2024-04-16 12:49:07.612702] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a1490) 00:21:08.732 [2024-04-16 12:49:07.612731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:7473 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.732 [2024-04-16 12:49:07.612762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:08.732 [2024-04-16 12:49:07.624262] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a1490) 00:21:08.732 [2024-04-16 12:49:07.624290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:13689 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.732 [2024-04-16 12:49:07.624322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:08.732 [2024-04-16 12:49:07.639623] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a1490) 00:21:08.732 [2024-04-16 12:49:07.639653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:22769 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.732 [2024-04-16 12:49:07.639670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:08.732 [2024-04-16 12:49:07.649652] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a1490) 00:21:08.732 [2024-04-16 12:49:07.649681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:10226 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.732 [2024-04-16 12:49:07.649713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:08.732 [2024-04-16 12:49:07.661947] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a1490) 
00:21:08.732 [2024-04-16 12:49:07.661975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:3677 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.732 [2024-04-16 12:49:07.662006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:08.732 [2024-04-16 12:49:07.674172] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a1490) 00:21:08.732 [2024-04-16 12:49:07.674200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:2084 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.732 [2024-04-16 12:49:07.674236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:08.732 [2024-04-16 12:49:07.688651] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a1490) 00:21:08.732 [2024-04-16 12:49:07.688680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:13185 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.732 [2024-04-16 12:49:07.688719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:08.732 [2024-04-16 12:49:07.700340] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a1490) 00:21:08.732 [2024-04-16 12:49:07.700369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:15667 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.732 [2024-04-16 12:49:07.700401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:08.732 [2024-04-16 12:49:07.712107] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a1490) 00:21:08.732 [2024-04-16 12:49:07.712135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:10803 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.732 [2024-04-16 12:49:07.712166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:08.732 [2024-04-16 12:49:07.724021] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a1490) 00:21:08.732 [2024-04-16 12:49:07.724049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:22821 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.732 [2024-04-16 12:49:07.724081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:08.732 [2024-04-16 12:49:07.736026] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a1490) 00:21:08.732 [2024-04-16 12:49:07.736054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:1285 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.732 [2024-04-16 12:49:07.736085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:08.732 [2024-04-16 12:49:07.747491] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a1490) 00:21:08.732 [2024-04-16 12:49:07.747518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:17746 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.732 [2024-04-16 12:49:07.747550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:08.732 [2024-04-16 12:49:07.759715] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a1490) 00:21:08.732 [2024-04-16 12:49:07.759743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:25394 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.733 [2024-04-16 12:49:07.759776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:08.733 [2024-04-16 12:49:07.773771] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a1490) 00:21:08.733 [2024-04-16 12:49:07.773799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:5590 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.733 [2024-04-16 12:49:07.773830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:08.733 [2024-04-16 12:49:07.784222] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a1490) 00:21:08.733 [2024-04-16 12:49:07.784250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:21395 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.733 [2024-04-16 12:49:07.784282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:08.733 [2024-04-16 12:49:07.797925] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a1490) 00:21:08.733 [2024-04-16 12:49:07.797958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4097 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.733 [2024-04-16 12:49:07.797990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:08.992 [2024-04-16 12:49:07.810973] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a1490) 00:21:08.992 [2024-04-16 12:49:07.811001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:5770 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.992 [2024-04-16 12:49:07.811032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:08.992 [2024-04-16 12:49:07.822080] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a1490) 00:21:08.992 [2024-04-16 12:49:07.822108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:12227 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.992 [2024-04-16 12:49:07.822140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:21:08.992 [2024-04-16 12:49:07.834303] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a1490) 00:21:08.992 [2024-04-16 12:49:07.834332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15092 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.992 [2024-04-16 12:49:07.834362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:08.992 [2024-04-16 12:49:07.845929] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a1490) 00:21:08.992 [2024-04-16 12:49:07.845957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:22564 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.992 [2024-04-16 12:49:07.845989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:08.992 [2024-04-16 12:49:07.859509] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a1490) 00:21:08.992 [2024-04-16 12:49:07.859539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:17947 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.992 [2024-04-16 12:49:07.859578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:08.992 [2024-04-16 12:49:07.870734] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a1490) 00:21:08.992 [2024-04-16 12:49:07.870763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:3753 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.992 [2024-04-16 12:49:07.870796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:08.992 [2024-04-16 12:49:07.884028] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a1490) 00:21:08.992 [2024-04-16 12:49:07.884056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:10912 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.992 [2024-04-16 12:49:07.884086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:08.992 [2024-04-16 12:49:07.894416] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a1490) 00:21:08.992 [2024-04-16 12:49:07.894444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:25038 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.992 [2024-04-16 12:49:07.894475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:08.992 [2024-04-16 12:49:07.907487] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a1490) 00:21:08.992 [2024-04-16 12:49:07.907514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:15987 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.992 [2024-04-16 12:49:07.907545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:08.992 [2024-04-16 12:49:07.919296] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a1490) 00:21:08.992 [2024-04-16 12:49:07.919324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:20082 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.992 [2024-04-16 12:49:07.919355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:08.992 [2024-04-16 12:49:07.932402] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a1490) 00:21:08.992 [2024-04-16 12:49:07.932430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:3005 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.992 [2024-04-16 12:49:07.932462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:08.992 [2024-04-16 12:49:07.944304] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a1490) 00:21:08.992 [2024-04-16 12:49:07.944332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:10309 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.992 [2024-04-16 12:49:07.944364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:08.992 [2024-04-16 12:49:07.956542] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a1490) 00:21:08.992 [2024-04-16 12:49:07.956592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:16596 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.992 [2024-04-16 12:49:07.956611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:08.992 [2024-04-16 12:49:07.969318] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a1490) 00:21:08.992 [2024-04-16 12:49:07.969346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:14651 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.992 [2024-04-16 12:49:07.969377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:08.992 [2024-04-16 12:49:07.980988] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a1490) 00:21:08.992 [2024-04-16 12:49:07.981016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:804 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.992 [2024-04-16 12:49:07.981047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:08.992 [2024-04-16 12:49:07.991291] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a1490) 00:21:08.992 [2024-04-16 12:49:07.991318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9210 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.992 [2024-04-16 12:49:07.991350] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:21:08.992 [2024-04-16 12:49:08.004004] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a1490)
00:21:08.992 [2024-04-16 12:49:08.004032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:12586 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:08.992 [2024-04-16 12:49:08.004069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the same three-record pattern -- data digest error on tqpair=(0x16a1490), failed READ (len:1), COMMAND TRANSIENT TRANSPORT ERROR (00/22) -- repeats roughly every 10-13 ms through 12:49:08.344, with varying cid and lba values ...]
00:21:09.510
00:21:09.510 Latency(us)
00:21:09.510 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:09.510 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:21:09.510 nvme0n1 : 2.00 19569.93 76.45 0.00 0.00 6533.23 2888.44 19612.25
00:21:09.510 ===================================================================================================================
00:21:09.510 Total : 19569.93 76.45 0.00 0.00 6533.23 2888.44 19612.25
00:21:09.510 0
00:21:09.510 12:49:08 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:21:09.510 12:49:08 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:21:09.510 12:49:08 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:21:09.510 | .driver_specific
00:21:09.510 | .nvme_error
00:21:09.510 | .status_code
00:21:09.510 | .command_transient_transport_error'
00:21:09.510 12:49:08 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
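With --nvme-error-stat enabled, the bdev layer keeps a per-status-code error tally in the iostat JSON, so get_transient_errcount reduces to one RPC plus one jq path. A standalone sketch of the same query, using the socket and bdev name from this run:

  # Count of COMMAND TRANSIENT TRANSPORT ERROR (00/22) completions recorded for nvme0n1.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
      bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'

Against this run it prints 153, which is what the (( 153 > 0 )) check below asserts: at least one injected digest error surfaced as a transient transport error.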
00:21:09.767 12:49:08 -- host/digest.sh@71 -- # (( 153 > 0 ))
00:21:09.767 12:49:08 -- host/digest.sh@73 -- # killprocess 1260978
00:21:09.767 12:49:08 -- common/autotest_common.sh@936 -- # '[' -z 1260978 ']'
00:21:09.767 12:49:08 -- common/autotest_common.sh@940 -- # kill -0 1260978
00:21:09.767 12:49:08 -- common/autotest_common.sh@941 -- # uname
00:21:09.767 12:49:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:21:09.767 12:49:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1260978
00:21:09.767 12:49:08 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:21:09.767 12:49:08 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:21:09.767 12:49:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1260978'
00:21:09.767 killing process with pid 1260978
00:21:09.767 12:49:08 -- common/autotest_common.sh@955 -- # kill 1260978
00:21:09.768 Received shutdown signal, test time was about 2.000000 seconds
00:21:09.768
00:21:09.768 Latency(us)
00:21:09.768 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:09.768 ===================================================================================================================
00:21:09.768 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:21:09.768 12:49:08 -- common/autotest_common.sh@960 -- # wait 1260978
00:21:09.768 12:49:08 -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:21:10.026 12:49:08 -- host/digest.sh@54 -- # local rw bs qd
00:21:10.026 12:49:08 -- host/digest.sh@56 -- # rw=randread
00:21:10.026 12:49:08 -- host/digest.sh@56 -- # bs=131072
00:21:10.026 12:49:08 -- host/digest.sh@56 -- # qd=16
00:21:10.026 12:49:08 -- host/digest.sh@58 -- # bperfpid=1261388
00:21:10.026 12:49:08 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:21:10.026 12:49:08 -- host/digest.sh@60 -- # waitforlisten 1261388 /var/tmp/bperf.sock
00:21:10.026 12:49:08 -- common/autotest_common.sh@817 -- # '[' -z 1261388 ']'
00:21:10.026 12:49:08 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock
00:21:10.026 12:49:08 -- common/autotest_common.sh@822 -- # local max_retries=100
00:21:10.026 12:49:08 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:21:10.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:21:10.026 12:49:08 -- common/autotest_common.sh@826 -- # xtrace_disable
00:21:10.026 12:49:08 -- common/autotest_common.sh@10 -- # set +x
00:21:10.026 [2024-04-16 12:49:08.962995] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization...
00:21:10.026 [2024-04-16 12:49:08.963069] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1261388 ]
00:21:10.026 I/O size of 131072 is greater than zero copy threshold (65536).
00:21:10.026 Zero copy mechanism will not be used.
00:21:10.026 EAL: No free 2048 kB hugepages reported on node 1
00:21:10.026 [2024-04-16 12:49:09.038620] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:10.284 [2024-04-16 12:49:09.157303] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:21:10.284 12:49:09 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:21:10.284 12:49:09 -- common/autotest_common.sh@850 -- # return 0
00:21:10.284 12:49:09 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:21:10.284 12:49:09 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:21:10.543 12:49:09 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:21:10.543 12:49:09 -- common/autotest_common.sh@549 -- # xtrace_disable
00:21:10.543 12:49:09 -- common/autotest_common.sh@10 -- # set +x
00:21:10.543 12:49:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:21:10.543 12:49:09 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:21:10.543 12:49:09 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:21:11.108 nvme0n1
00:21:11.108 12:49:09 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:21:11.108 12:49:09 -- common/autotest_common.sh@549 -- # xtrace_disable
00:21:11.108 12:49:09 -- common/autotest_common.sh@10 -- # set +x
00:21:11.108 12:49:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:21:11.108 12:49:10 -- host/digest.sh@69 -- # bperf_py perform_tests
00:21:11.108 12:49:10 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
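Between the two passes the whole fixture is rebuilt: the old bdevperf (pid 1260978) is killed, a new one (pid 1261388) is started for the 128 KiB / qd 16 case, and the same RPC sequence arms it. A hedged recap of those calls in plain shell -- note the two sockets: bdevperf is configured through /var/tmp/bperf.sock, while the injection RPC goes to the nvmf target's default socket:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # Initiator (bdevperf): keep per-status-code NVMe error counters; a bdev
  # retry count of -1 retries failed I/O instead of failing it up to the job.
  $RPC -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # Target (default socket): make sure no stale CRC32C injection is active.
  $RPC accel_error_inject_error -o crc32c -t disable
  # Attach over TCP with data digest enabled (--ddgst), so the host computes
  # and verifies a CRC32C over every data PDU it receives.
  $RPC -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 \
      -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # Target: arm CRC32C corruption (-t corrupt -i 32); in the output below this
  # surfaces as one data digest error per 32 commands.
  $RPC accel_error_inject_error -o crc32c -t corrupt -i 32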
00:21:11.108 I/O size of 131072 is greater than zero copy threshold (65536).
00:21:11.108 Zero copy mechanism will not be used.
00:21:11.108 Running I/O for 2 seconds...
00:21:11.108 [2024-04-16 12:49:10.130167] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15462e0)
00:21:11.108 [2024-04-16 12:49:10.130237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:11.108 [2024-04-16 12:49:10.130257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
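Each failure prints as the same three records: the TCP layer detects the bad digest, the qpair prints the failed READ, then the completion arrives with status COMMAND TRANSIENT TRANSPORT ERROR (00/22). One quick consistency check on a saved copy of this output (the log file name here is hypothetical; strtonum requires gawk) is that sqhd advances by 32 between consecutive failures, matching the -i 32 injection interval:

  # Distribution of sqhd deltas between consecutive error completions: expect
  # +32 almost everywhere, plus a negative jump whenever the queue head wraps.
  grep -o 'sqhd:[0-9a-f]*' run2.log | cut -d: -f2 \
      | gawk '{ v = strtonum("0x" $0); if (NR > 1) print v - prev; prev = v }' \
      | sort -n | uniq -c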
[... the same data digest error / READ (len:32) / COMMAND TRANSIENT TRANSPORT ERROR triple repeats on tqpair=(0x15462e0) roughly every 10-14 ms from 12:49:10.140 through 12:49:11.365, with varying lba, cid cycling through a small set (15, 0, 1, 2), and sqhd stepping 0021 -> 0041 -> 0061 -> 0001 ...]
00:21:12.401 [2024-04-16 12:49:11.375804] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15462e0)
00:21:12.401 [2024-04-16 12:49:11.375836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:12.401 [2024-04-16 12:49:11.375868] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:12.401 [2024-04-16 12:49:11.385852] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15462e0) 00:21:12.401 [2024-04-16 12:49:11.385886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.401 [2024-04-16 12:49:11.385904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:12.401 [2024-04-16 12:49:11.396295] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15462e0) 00:21:12.401 [2024-04-16 12:49:11.396329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.401 [2024-04-16 12:49:11.396348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:12.402 [2024-04-16 12:49:11.406641] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15462e0) 00:21:12.402 [2024-04-16 12:49:11.406679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.402 [2024-04-16 12:49:11.406697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:12.402 [2024-04-16 12:49:11.416674] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15462e0) 00:21:12.402 [2024-04-16 12:49:11.416704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.402 [2024-04-16 12:49:11.416721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:12.402 [2024-04-16 12:49:11.426414] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15462e0) 00:21:12.402 [2024-04-16 12:49:11.426447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.402 [2024-04-16 12:49:11.426476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:12.402 [2024-04-16 12:49:11.435982] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15462e0) 00:21:12.402 [2024-04-16 12:49:11.436014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.402 [2024-04-16 12:49:11.436033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:12.402 [2024-04-16 12:49:11.445415] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15462e0) 00:21:12.402 [2024-04-16 12:49:11.445447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:12.402 [2024-04-16 12:49:11.445467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:12.402 [2024-04-16 12:49:11.454930] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15462e0) 00:21:12.402 [2024-04-16 12:49:11.454963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.402 [2024-04-16 12:49:11.454982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:12.402 [2024-04-16 12:49:11.464462] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15462e0) 00:21:12.402 [2024-04-16 12:49:11.464495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.402 [2024-04-16 12:49:11.464522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:12.662 [2024-04-16 12:49:11.474166] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15462e0) 00:21:12.662 [2024-04-16 12:49:11.474200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.662 [2024-04-16 12:49:11.474219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:12.662 [2024-04-16 12:49:11.484906] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15462e0) 00:21:12.662 [2024-04-16 12:49:11.484955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.662 [2024-04-16 12:49:11.484976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:12.662 [2024-04-16 12:49:11.495278] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15462e0) 00:21:12.662 [2024-04-16 12:49:11.495311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.662 [2024-04-16 12:49:11.495332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:12.662 [2024-04-16 12:49:11.504639] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15462e0) 00:21:12.662 [2024-04-16 12:49:11.504668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.662 [2024-04-16 12:49:11.504685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:12.662 [2024-04-16 12:49:11.514607] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15462e0) 00:21:12.662 [2024-04-16 12:49:11.514639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18688 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.662 [2024-04-16 12:49:11.514655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:12.662 [2024-04-16 12:49:11.523851] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15462e0) 00:21:12.662 [2024-04-16 12:49:11.523879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.662 [2024-04-16 12:49:11.523894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:12.662 [2024-04-16 12:49:11.533228] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15462e0) 00:21:12.662 [2024-04-16 12:49:11.533261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.662 [2024-04-16 12:49:11.533280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:12.662 [2024-04-16 12:49:11.542392] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15462e0) 00:21:12.662 [2024-04-16 12:49:11.542432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.662 [2024-04-16 12:49:11.542451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:12.662 [2024-04-16 12:49:11.551681] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15462e0) 00:21:12.662 [2024-04-16 12:49:11.551708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.662 [2024-04-16 12:49:11.551740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:12.662 [2024-04-16 12:49:11.561138] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15462e0) 00:21:12.662 [2024-04-16 12:49:11.561170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.662 [2024-04-16 12:49:11.561189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:12.662 [2024-04-16 12:49:11.570525] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15462e0) 00:21:12.662 [2024-04-16 12:49:11.570557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.662 [2024-04-16 12:49:11.570593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:12.662 [2024-04-16 12:49:11.579807] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15462e0) 00:21:12.662 [2024-04-16 12:49:11.579845] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.662 [2024-04-16 12:49:11.579860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:12.662 [2024-04-16 12:49:11.589048] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15462e0) 00:21:12.662 [2024-04-16 12:49:11.589089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.662 [2024-04-16 12:49:11.589108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:12.662 [2024-04-16 12:49:11.598360] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15462e0) 00:21:12.662 [2024-04-16 12:49:11.598392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.662 [2024-04-16 12:49:11.598411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:12.662 [2024-04-16 12:49:11.607438] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15462e0) 00:21:12.662 [2024-04-16 12:49:11.607470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.662 [2024-04-16 12:49:11.607499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:12.662 [2024-04-16 12:49:11.616620] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15462e0) 00:21:12.662 [2024-04-16 12:49:11.616646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.662 [2024-04-16 12:49:11.616677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:12.662 [2024-04-16 12:49:11.626093] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15462e0) 00:21:12.663 [2024-04-16 12:49:11.626125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.663 [2024-04-16 12:49:11.626143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:12.663 [2024-04-16 12:49:11.635312] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15462e0) 00:21:12.663 [2024-04-16 12:49:11.635344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.663 [2024-04-16 12:49:11.635363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:12.663 [2024-04-16 12:49:11.644763] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15462e0) 00:21:12.663 [2024-04-16 12:49:11.644791] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.663 [2024-04-16 12:49:11.644823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:12.663 [2024-04-16 12:49:11.654692] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15462e0) 00:21:12.663 [2024-04-16 12:49:11.654721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.663 [2024-04-16 12:49:11.654738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:12.663 [2024-04-16 12:49:11.663931] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15462e0) 00:21:12.663 [2024-04-16 12:49:11.663964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.663 [2024-04-16 12:49:11.663983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:12.663 [2024-04-16 12:49:11.673398] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15462e0) 00:21:12.663 [2024-04-16 12:49:11.673431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.663 [2024-04-16 12:49:11.673450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:12.663 [2024-04-16 12:49:11.683430] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15462e0) 00:21:12.663 [2024-04-16 12:49:11.683472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.663 [2024-04-16 12:49:11.683491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:12.663 [2024-04-16 12:49:11.694818] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15462e0) 00:21:12.663 [2024-04-16 12:49:11.694862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.663 [2024-04-16 12:49:11.694878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:12.663 [2024-04-16 12:49:11.704465] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15462e0) 00:21:12.663 [2024-04-16 12:49:11.704499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.663 [2024-04-16 12:49:11.704517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:12.663 [2024-04-16 12:49:11.713818] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15462e0) 
00:21:12.663 [2024-04-16 12:49:11.713845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.663 [2024-04-16 12:49:11.713877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:12.663 [2024-04-16 12:49:11.723213] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15462e0) 00:21:12.663 [2024-04-16 12:49:11.723257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.663 [2024-04-16 12:49:11.723275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:12.921 [2024-04-16 12:49:11.732510] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15462e0) 00:21:12.921 [2024-04-16 12:49:11.732543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.921 [2024-04-16 12:49:11.732587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:12.922 [2024-04-16 12:49:11.741987] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15462e0) 00:21:12.922 [2024-04-16 12:49:11.742020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.922 [2024-04-16 12:49:11.742039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:12.922 [2024-04-16 12:49:11.751085] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15462e0) 00:21:12.922 [2024-04-16 12:49:11.751118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.922 [2024-04-16 12:49:11.751136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:12.922 [2024-04-16 12:49:11.760301] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15462e0) 00:21:12.922 [2024-04-16 12:49:11.760334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.922 [2024-04-16 12:49:11.760362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:12.922 [2024-04-16 12:49:11.769853] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15462e0) 00:21:12.922 [2024-04-16 12:49:11.769881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.922 [2024-04-16 12:49:11.769898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:12.922 [2024-04-16 12:49:11.779006] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x15462e0) 00:21:12.922 [2024-04-16 12:49:11.779040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.922 [2024-04-16 12:49:11.779064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:12.922 [2024-04-16 12:49:11.787669] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15462e0) 00:21:12.922 [2024-04-16 12:49:11.787706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.922 [2024-04-16 12:49:11.787737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:12.922 [2024-04-16 12:49:11.796962] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15462e0) 00:21:12.922 [2024-04-16 12:49:11.796996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.922 [2024-04-16 12:49:11.797014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:12.922 [2024-04-16 12:49:11.806096] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15462e0) 00:21:12.922 [2024-04-16 12:49:11.806129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.922 [2024-04-16 12:49:11.806148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:12.922 [2024-04-16 12:49:11.815267] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15462e0) 00:21:12.922 [2024-04-16 12:49:11.815305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.922 [2024-04-16 12:49:11.815324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:12.922 [2024-04-16 12:49:11.824654] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15462e0) 00:21:12.922 [2024-04-16 12:49:11.824680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.922 [2024-04-16 12:49:11.824712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:12.922 [2024-04-16 12:49:11.833870] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15462e0) 00:21:12.922 [2024-04-16 12:49:11.833896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.922 [2024-04-16 12:49:11.833911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:12.922 [2024-04-16 12:49:11.843217] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15462e0) 00:21:12.922 [2024-04-16 12:49:11.843259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.922 [2024-04-16 12:49:11.843278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:12.922 [2024-04-16 12:49:11.852546] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15462e0) 00:21:12.922 [2024-04-16 12:49:11.852588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.922 [2024-04-16 12:49:11.852608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:12.922 [2024-04-16 12:49:11.861969] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15462e0) 00:21:12.922 [2024-04-16 12:49:11.862002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.922 [2024-04-16 12:49:11.862020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:12.922 [2024-04-16 12:49:11.871152] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15462e0) 00:21:12.922 [2024-04-16 12:49:11.871184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.922 [2024-04-16 12:49:11.871203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:12.922 [2024-04-16 12:49:11.880427] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15462e0) 00:21:12.922 [2024-04-16 12:49:11.880459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.922 [2024-04-16 12:49:11.880478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:12.922 [2024-04-16 12:49:11.889690] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15462e0) 00:21:12.922 [2024-04-16 12:49:11.889716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.922 [2024-04-16 12:49:11.889746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:12.922 [2024-04-16 12:49:11.898971] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15462e0) 00:21:12.922 [2024-04-16 12:49:11.899004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.922 [2024-04-16 12:49:11.899033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:21:12.922 [2024-04-16 12:49:11.908283] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15462e0) 00:21:12.922 [2024-04-16 12:49:11.908315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.922 [2024-04-16 12:49:11.908334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:12.922 [2024-04-16 12:49:11.917541] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15462e0) 00:21:12.922 [2024-04-16 12:49:11.917582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.922 [2024-04-16 12:49:11.917602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:12.922 [2024-04-16 12:49:11.926649] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15462e0) 00:21:12.922 [2024-04-16 12:49:11.926679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.922 [2024-04-16 12:49:11.926695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:12.922 [2024-04-16 12:49:11.935886] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15462e0) 00:21:12.922 [2024-04-16 12:49:11.935919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.922 [2024-04-16 12:49:11.935938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:12.922 [2024-04-16 12:49:11.945327] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15462e0) 00:21:12.922 [2024-04-16 12:49:11.945359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.922 [2024-04-16 12:49:11.945378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:12.922 [2024-04-16 12:49:11.954762] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15462e0) 00:21:12.922 [2024-04-16 12:49:11.954790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.922 [2024-04-16 12:49:11.954822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:12.922 [2024-04-16 12:49:11.964206] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15462e0) 00:21:12.922 [2024-04-16 12:49:11.964239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.922 [2024-04-16 12:49:11.964257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:12.922 [2024-04-16 12:49:11.973626] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15462e0) 00:21:12.922 [2024-04-16 12:49:11.973665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.922 [2024-04-16 12:49:11.973704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:12.923 [2024-04-16 12:49:11.982929] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15462e0) 00:21:12.923 [2024-04-16 12:49:11.982962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.923 [2024-04-16 12:49:11.982981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:13.181 [2024-04-16 12:49:11.992207] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15462e0) 00:21:13.181 [2024-04-16 12:49:11.992241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.181 [2024-04-16 12:49:11.992260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:13.181 [2024-04-16 12:49:12.001472] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15462e0) 00:21:13.181 [2024-04-16 12:49:12.001505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.181 [2024-04-16 12:49:12.001524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:13.181 [2024-04-16 12:49:12.010825] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15462e0) 00:21:13.181 [2024-04-16 12:49:12.010852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.181 [2024-04-16 12:49:12.010885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:13.181 [2024-04-16 12:49:12.019944] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15462e0) 00:21:13.181 [2024-04-16 12:49:12.019977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.181 [2024-04-16 12:49:12.019996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:13.181 [2024-04-16 12:49:12.029266] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15462e0) 00:21:13.181 [2024-04-16 12:49:12.029299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.181 [2024-04-16 12:49:12.029318] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:13.181 [2024-04-16 12:49:12.038398] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15462e0) 00:21:13.181 [2024-04-16 12:49:12.038431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.181 [2024-04-16 12:49:12.038450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:13.181 [2024-04-16 12:49:12.047699] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15462e0) 00:21:13.181 [2024-04-16 12:49:12.047726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.181 [2024-04-16 12:49:12.047763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:13.181 [2024-04-16 12:49:12.056831] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15462e0) 00:21:13.181 [2024-04-16 12:49:12.056883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.181 [2024-04-16 12:49:12.056902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:13.181 [2024-04-16 12:49:12.066113] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15462e0) 00:21:13.181 [2024-04-16 12:49:12.066146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.181 [2024-04-16 12:49:12.066164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:13.181 [2024-04-16 12:49:12.075240] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15462e0) 00:21:13.181 [2024-04-16 12:49:12.075272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.181 [2024-04-16 12:49:12.075290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:13.181 [2024-04-16 12:49:12.084593] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15462e0) 00:21:13.181 [2024-04-16 12:49:12.084634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.181 [2024-04-16 12:49:12.084649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:13.181 [2024-04-16 12:49:12.093745] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15462e0) 00:21:13.181 [2024-04-16 12:49:12.093771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.181 [2024-04-16 12:49:12.093802] 
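[Editor's note] The completions above are easier to read once the status field is decoded: spdk_nvme_print_completion appears to print the NVMe status as (status code type/status code) in hex, so "(00/22)" is the generic command status set (0x0) with status code 0x22, which SPDK names COMMAND TRANSIENT TRANSPORT ERROR — surfaced here after each READ's received data failed its CRC32C data digest check. A minimal shell sketch of that decode (the sct/sc reading of the field is my assumption, not something stated in the log):

#!/usr/bin/env bash
# Decode the "(00/22)" field from spdk_nvme_print_completion output.
# Assumption: the field is "<sct>/<sc>", two hex bytes.
status="00/22"
sct=$((16#${status%/*}))   # 0x00: generic command status
sc=$((16#${status#*/}))    # 0x22: Command Transient Transport Error
printf 'sct=0x%02x sc=0x%02x\n' "$sct" "$sc"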
00:21:13.182
00:21:13.182 Latency(us)
00:21:13.182 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:13.182 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:21:13.182 nvme0n1 : 2.00 2826.40 353.30 0.00 0.00 5655.08 1401.74 14369.37
00:21:13.182 ===================================================================================================================
00:21:13.182 Total : 2826.40 353.30 0.00 0.00 5655.08 1401.74 14369.37
00:21:13.182 0
00:21:13.182 12:49:12 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:21:13.182 12:49:12 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:21:13.182 12:49:12 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:21:13.182 12:49:12 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:21:13.182 | .driver_specific
00:21:13.182 | .nvme_error
00:21:13.182 | .status_code
00:21:13.182 | .command_transient_transport_error'
00:21:13.441 12:49:12 -- host/digest.sh@71 -- # (( 182 > 0 ))
00:21:13.441 12:49:12 -- host/digest.sh@73 -- # killprocess 1261388
00:21:13.441 12:49:12 -- common/autotest_common.sh@936 -- # '[' -z 1261388 ']'
00:21:13.441 12:49:12 -- common/autotest_common.sh@940 -- # kill -0 1261388
00:21:13.441 12:49:12 -- common/autotest_common.sh@941 -- # uname
00:21:13.441 12:49:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:21:13.441 12:49:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1261388
00:21:13.441 12:49:12 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:21:13.441 12:49:12 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:21:13.441 12:49:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1261388'
00:21:13.441 killing process with pid 1261388
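[Editor's note] The trace above is the test's pass/fail gate: it asks the bdevperf instance, over its RPC socket, for the per-bdev NVMe error counters and requires that at least one I/O completed with a transient transport error (182 did in this run). A standalone sketch of that check, using only the paths, socket and jq filter that appear in the trace; running it as its own script is my framing:

#!/usr/bin/env bash
# Sketch of host/digest.sh's get_transient_errcount, assuming bdevperf was
# started with -r /var/tmp/bperf.sock and bdev_nvme_set_options --nvme-error-stat.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

errcount=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
  jq -r '.bdevs[0]
    | .driver_specific
    | .nvme_error
    | .status_code
    | .command_transient_transport_error')

# Fail unless the injected digest errors showed up in the counter.
(( errcount > 0 ))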
00:21:13.441 12:49:12 -- common/autotest_common.sh@955 -- # kill 1261388
00:21:13.441 Received shutdown signal, test time was about 2.000000 seconds
00:21:13.441
00:21:13.441 Latency(us)
00:21:13.441 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:13.441 ===================================================================================================================
00:21:13.441 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:21:13.441 12:49:12 -- common/autotest_common.sh@960 -- # wait 1261388
00:21:13.699 12:49:12 -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:21:13.699 12:49:12 -- host/digest.sh@54 -- # local rw bs qd
00:21:13.699 12:49:12 -- host/digest.sh@56 -- # rw=randwrite
00:21:13.699 12:49:12 -- host/digest.sh@56 -- # bs=4096
00:21:13.699 12:49:12 -- host/digest.sh@56 -- # qd=128
00:21:13.699 12:49:12 -- host/digest.sh@58 -- # bperfpid=1261919
00:21:13.699 12:49:12 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:21:13.699 12:49:12 -- host/digest.sh@60 -- # waitforlisten 1261919 /var/tmp/bperf.sock
00:21:13.699 12:49:12 -- common/autotest_common.sh@817 -- # '[' -z 1261919 ']'
00:21:13.699 12:49:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock
00:21:13.699 12:49:12 -- common/autotest_common.sh@822 -- # local max_retries=100
00:21:13.699 12:49:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:21:13.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:21:13.699 12:49:12 -- common/autotest_common.sh@826 -- # xtrace_disable
00:21:13.699 12:49:12 -- common/autotest_common.sh@10 -- # set +x
00:21:13.699 [2024-04-16 12:49:12.713387] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization...
00:21:13.699 [2024-04-16 12:49:12.713461] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1261919 ]
00:21:13.957 EAL: No free 2048 kB hugepages reported on node 1
00:21:13.957 [2024-04-16 12:49:12.784356] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:13.957 [2024-04-16 12:49:12.896455] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:21:14.890 12:49:13 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:21:14.890 12:49:13 -- common/autotest_common.sh@850 -- # return 0
00:21:14.890 12:49:13 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:21:14.890 12:49:13 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:21:14.890 12:49:13 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:21:14.890 12:49:13 -- common/autotest_common.sh@549 -- # xtrace_disable
00:21:14.890 12:49:13 -- common/autotest_common.sh@10 -- # set +x
00:21:14.890 12:49:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:21:14.890 12:49:13 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:21:14.890 12:49:13 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:21:15.456 nvme0n1
00:21:15.456 12:49:14 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:21:15.456 12:49:14 -- common/autotest_common.sh@549 -- # xtrace_disable
00:21:15.456 12:49:14 -- common/autotest_common.sh@10 -- # set +x
00:21:15.456 12:49:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:21:15.456 12:49:14 -- host/digest.sh@69 -- # bperf_py perform_tests
00:21:15.456 12:49:14 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:21:15.714 Running I/O for 2 seconds...
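[Editor's note] Taken together, the trace above is the setup for the write-side digest test: enable per-status-code NVMe error counters with unlimited bdev retries, clear any stale crc32c injection, attach the controller with --ddgst so the NVMe/TCP data digest (CRC32C) is negotiated, then have the accel layer corrupt crc32c results (with -i 256 as traced) so data PDUs fail digest verification at the receiving end. A sketch of the same RPC sequence; which socket rpc_cmd targets for the accel injection is my assumption, while bperf_rpc clearly targets /var/tmp/bperf.sock:

#!/usr/bin/env bash
# Sketch of the randwrite digest-error setup traced above. Assumes bdevperf is
# already listening on /var/tmp/bperf.sock; rpc_cmd's default socket is assumed
# to belong to the app performing the crc32c work.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/bperf.sock

# Per-status-code NVMe error counters on, never give up on retried I/O.
"$rpc" -s "$sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Start from a clean slate, then attach with data digest (DDGST) enabled.
"$rpc" accel_error_inject_error -o crc32c -t disable
"$rpc" -s "$sock" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
  -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Inject crc32c corruption (-i 256 as in the trace; the flag's exact
# semantics are left to the RPC's documentation), then run the workload.
"$rpc" accel_error_inject_error -o crc32c -t corrupt -i 256
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
  -s "$sock" perform_tests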
00:21:15.714 [2024-04-16 12:49:14.591852] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d30e0) with pdu=0x2000190edd58
00:21:15.714 [2024-04-16 12:49:14.593072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:12670 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:15.714 [2024-04-16 12:49:14.593116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:21:15.714 [... the same three-line pattern — tcp.c:2047 data digest error with a varying pdu, WRITE command notice, COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion — repeats throughout the 2-second randwrite run from 12:49:14.604 through 12:49:14.843, varying only in timestamp, pdu, cid, sqhd and lba ...]
00:21:15.973 [2024-04-16 12:49:14.854319] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d30e0) with pdu=0x2000190fe2e8
00:21:15.973 [2024-04-16 12:49:14.855582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:11158 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:15.973 [2024-04-16 12:49:14.855625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:15.973 [2024-04-16 12:49:14.867964] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d30e0) with pdu=0x2000190e6738 00:21:15.973 [2024-04-16 12:49:14.869365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:1965 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:15.973 [2024-04-16 12:49:14.869396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:15.973 [2024-04-16 12:49:14.881443] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d30e0) with pdu=0x2000190f2d80 00:21:15.973 [2024-04-16 12:49:14.882981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:10057 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:15.973 [2024-04-16 12:49:14.883012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:21:15.973 [2024-04-16 12:49:14.894991] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d30e0) with pdu=0x2000190e88f8 00:21:15.973 [2024-04-16 12:49:14.896844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:10127 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:15.973 [2024-04-16 12:49:14.896869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:15.973 [2024-04-16 12:49:14.907207] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d30e0) with pdu=0x2000190e1b48 00:21:15.973 [2024-04-16 12:49:14.908502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:1896 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:15.973 [2024-04-16 12:49:14.908533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:15.973 [2024-04-16 12:49:14.920255] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d30e0) with pdu=0x2000190e2c28 00:21:15.973 [2024-04-16 12:49:14.921584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:18081 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:15.973 [2024-04-16 12:49:14.921626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:15.973 [2024-04-16 12:49:14.933285] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d30e0) with pdu=0x2000190f9f68 00:21:15.973 [2024-04-16 12:49:14.934641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:12809 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:15.973 [2024-04-16 12:49:14.934666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:15.973 [2024-04-16 12:49:14.946276] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d30e0) with pdu=0x2000190eb760 00:21:15.973 [2024-04-16 12:49:14.947559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:24434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:15.973 [2024-04-16 12:49:14.947619] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:15.973 [2024-04-16 12:49:14.959347] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d30e0) with pdu=0x2000190f0788 00:21:15.973 [2024-04-16 12:49:14.960644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:15202 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:15.973 [2024-04-16 12:49:14.960669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:15.973 [2024-04-16 12:49:14.972424] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d30e0) with pdu=0x2000190fb048 00:21:15.973 [2024-04-16 12:49:14.973696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:10601 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:15.973 [2024-04-16 12:49:14.973722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:15.973 [2024-04-16 12:49:14.985436] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d30e0) with pdu=0x2000190edd58 00:21:15.973 [2024-04-16 12:49:14.986706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:5877 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:15.973 [2024-04-16 12:49:14.986730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:15.973 [2024-04-16 12:49:14.998413] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d30e0) with pdu=0x2000190e8088 00:21:15.974 [2024-04-16 12:49:14.999685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:18292 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:15.974 [2024-04-16 12:49:14.999710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:15.974 [2024-04-16 12:49:15.011433] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d30e0) with pdu=0x2000190e4140 00:21:15.974 [2024-04-16 12:49:15.012711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:19397 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:15.974 [2024-04-16 12:49:15.012760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:15.974 [2024-04-16 12:49:15.024470] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d30e0) with pdu=0x2000190ebb98 00:21:15.974 [2024-04-16 12:49:15.025761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:22965 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:15.974 [2024-04-16 12:49:15.025786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:15.974 [2024-04-16 12:49:15.037441] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d30e0) with pdu=0x2000190fda78 00:21:15.974 [2024-04-16 12:49:15.038765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:900 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:15.974 [2024-04-16 
12:49:15.038791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:16.232 [2024-04-16 12:49:15.050582] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d30e0) with pdu=0x2000190f1430 00:21:16.232 [2024-04-16 12:49:15.051865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:10810 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.232 [2024-04-16 12:49:15.051896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:16.232 [2024-04-16 12:49:15.065486] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d30e0) with pdu=0x2000190f2510 00:21:16.232 [2024-04-16 12:49:15.067360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:22723 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.232 [2024-04-16 12:49:15.067386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:16.232 [2024-04-16 12:49:15.076019] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d30e0) with pdu=0x2000190fa7d8 00:21:16.233 [2024-04-16 12:49:15.076963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:7331 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.233 [2024-04-16 12:49:15.076988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:16.233 [2024-04-16 12:49:15.086828] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d30e0) with pdu=0x2000190e0a68 00:21:16.233 [2024-04-16 12:49:15.088014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:14370 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.233 [2024-04-16 12:49:15.088039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:16.233 [2024-04-16 12:49:15.099720] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d30e0) with pdu=0x2000190f4298 00:21:16.233 [2024-04-16 12:49:15.101108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:14553 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.233 [2024-04-16 12:49:15.101136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:16.233 [2024-04-16 12:49:15.111804] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d30e0) with pdu=0x2000190f5be8 00:21:16.233 [2024-04-16 12:49:15.113187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:8317 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.233 [2024-04-16 12:49:15.113218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:16.233 [2024-04-16 12:49:15.123556] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d30e0) with pdu=0x2000190f4298 00:21:16.233 [2024-04-16 12:49:15.124919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:11453 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:21:16.233 [2024-04-16 12:49:15.124944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:16.233 [2024-04-16 12:49:15.135190] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d30e0) with pdu=0x2000190f5be8 00:21:16.233 [2024-04-16 12:49:15.136583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:5669 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.233 [2024-04-16 12:49:15.136608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:16.233 [2024-04-16 12:49:15.146781] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d30e0) with pdu=0x2000190f4298 00:21:16.233 [2024-04-16 12:49:15.148127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:6359 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.233 [2024-04-16 12:49:15.148153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:16.233 [2024-04-16 12:49:15.158681] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d30e0) with pdu=0x2000190f5be8 00:21:16.233 [2024-04-16 12:49:15.160082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:12550 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.233 [2024-04-16 12:49:15.160107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:16.233 [2024-04-16 12:49:15.170299] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d30e0) with pdu=0x2000190f4298 00:21:16.233 [2024-04-16 12:49:15.171626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:20485 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.233 [2024-04-16 12:49:15.171653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:16.233 [2024-04-16 12:49:15.181850] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d30e0) with pdu=0x2000190f5be8 00:21:16.233 [2024-04-16 12:49:15.183197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:4570 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.233 [2024-04-16 12:49:15.183222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:16.233 [2024-04-16 12:49:15.193357] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d30e0) with pdu=0x2000190f4298 00:21:16.233 [2024-04-16 12:49:15.194688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:25130 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.233 [2024-04-16 12:49:15.194714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:16.233 [2024-04-16 12:49:15.204996] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d30e0) with pdu=0x2000190f5be8 00:21:16.233 [2024-04-16 12:49:15.206276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18013 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:21:16.233 [2024-04-16 12:49:15.206300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:16.233 [2024-04-16 12:49:15.216747] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d30e0) with pdu=0x2000190fd208 00:21:16.233 [2024-04-16 12:49:15.218107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:2532 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.233 [2024-04-16 12:49:15.218132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:16.233 [2024-04-16 12:49:15.228212] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d30e0) with pdu=0x2000190e73e0 00:21:16.233 [2024-04-16 12:49:15.229574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:24697 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.233 [2024-04-16 12:49:15.229599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:16.233 [2024-04-16 12:49:15.239725] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d30e0) with pdu=0x2000190eb328 00:21:16.233 [2024-04-16 12:49:15.241110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:14647 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.233 [2024-04-16 12:49:15.241135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:16.233 [2024-04-16 12:49:15.251240] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d30e0) with pdu=0x2000190feb58 00:21:16.233 [2024-04-16 12:49:15.252586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:16501 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.233 [2024-04-16 12:49:15.252611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:16.233 [2024-04-16 12:49:15.262807] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d30e0) with pdu=0x2000190f1868 00:21:16.233 [2024-04-16 12:49:15.264169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:316 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.233 [2024-04-16 12:49:15.264194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:16.233 [2024-04-16 12:49:15.274283] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d30e0) with pdu=0x2000190e3d08 00:21:16.233 [2024-04-16 12:49:15.275627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:12844 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.233 [2024-04-16 12:49:15.275653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:16.233 [2024-04-16 12:49:15.285765] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d30e0) with pdu=0x2000190e5658 00:21:16.233 [2024-04-16 12:49:15.287138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:3283 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.233 [2024-04-16 12:49:15.287163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:16.233 [2024-04-16 12:49:15.297325] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d30e0) with pdu=0x2000190df988 00:21:16.233 [2024-04-16 12:49:15.298747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:23857 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.233 [2024-04-16 12:49:15.298774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:16.492 [2024-04-16 12:49:15.309478] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d30e0) with pdu=0x2000190fa3a0 00:21:16.492 [2024-04-16 12:49:15.310844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:12887 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.492 [2024-04-16 12:49:15.310883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:16.492 [2024-04-16 12:49:15.321019] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d30e0) with pdu=0x2000190fda78 00:21:16.492 [2024-04-16 12:49:15.322373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:11275 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.492 [2024-04-16 12:49:15.322397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:16.492 [2024-04-16 12:49:15.332576] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d30e0) with pdu=0x2000190ebb98 00:21:16.492 [2024-04-16 12:49:15.333926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:6212 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.493 [2024-04-16 12:49:15.333951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:16.493 [2024-04-16 12:49:15.344122] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d30e0) with pdu=0x2000190e4140 00:21:16.493 [2024-04-16 12:49:15.345477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:21088 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.493 [2024-04-16 12:49:15.345501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:16.493 [2024-04-16 12:49:15.355677] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d30e0) with pdu=0x2000190f4298 00:21:16.493 [2024-04-16 12:49:15.357046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:19699 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.493 [2024-04-16 12:49:15.357072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:16.493 [2024-04-16 12:49:15.367679] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d30e0) with pdu=0x2000190ddc00 00:21:16.493 [2024-04-16 12:49:15.369018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:52 nsid:1 lba:10785 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.493 [2024-04-16 12:49:15.369043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:16.493 [2024-04-16 12:49:15.379371] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d30e0) with pdu=0x2000190f2d80 00:21:16.493 [2024-04-16 12:49:15.380718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:7488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.493 [2024-04-16 12:49:15.380744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:16.493 [2024-04-16 12:49:15.391060] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d30e0) with pdu=0x2000190f57b0 00:21:16.493 [2024-04-16 12:49:15.392401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:15299 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.493 [2024-04-16 12:49:15.392426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:16.493 [2024-04-16 12:49:15.402533] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d30e0) with pdu=0x2000190f3a28 00:21:16.493 [2024-04-16 12:49:15.403905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:10815 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.493 [2024-04-16 12:49:15.403930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:16.493 [2024-04-16 12:49:15.414099] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d30e0) with pdu=0x2000190fef90 00:21:16.493 [2024-04-16 12:49:15.415436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:21864 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.493 [2024-04-16 12:49:15.415467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:16.493 [2024-04-16 12:49:15.425595] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d30e0) with pdu=0x2000190eb760 00:21:16.493 [2024-04-16 12:49:15.426944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:9708 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.493 [2024-04-16 12:49:15.426969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:16.493 [2024-04-16 12:49:15.437078] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d30e0) with pdu=0x2000190f0788 00:21:16.493 [2024-04-16 12:49:15.438420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:19726 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.493 [2024-04-16 12:49:15.438445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:16.493 [2024-04-16 12:49:15.448572] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d30e0) with pdu=0x2000190f1430 00:21:16.493 [2024-04-16 12:49:15.449917] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:18136 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.493 [2024-04-16 12:49:15.449941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:16.493 [2024-04-16 12:49:15.460166] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d30e0) with pdu=0x2000190f2510 00:21:16.493 [2024-04-16 12:49:15.461515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:6600 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.493 [2024-04-16 12:49:15.461539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:16.493 [2024-04-16 12:49:15.471684] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d30e0) with pdu=0x2000190e6300 00:21:16.493 [2024-04-16 12:49:15.473043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:5853 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.493 [2024-04-16 12:49:15.473068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:16.493 [2024-04-16 12:49:15.483207] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d30e0) with pdu=0x2000190e5220 00:21:16.493 [2024-04-16 12:49:15.484569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:16354 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.493 [2024-04-16 12:49:15.484595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:16.493 [2024-04-16 12:49:15.494707] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d30e0) with pdu=0x2000190f4f40 00:21:16.493 [2024-04-16 12:49:15.496064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:11466 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.493 [2024-04-16 12:49:15.496089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:16.493 [2024-04-16 12:49:15.506171] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d30e0) with pdu=0x2000190e0630 00:21:16.493 [2024-04-16 12:49:15.507513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:4122 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.493 [2024-04-16 12:49:15.507538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:16.493 [2024-04-16 12:49:15.517687] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d30e0) with pdu=0x2000190fe720 00:21:16.493 [2024-04-16 12:49:15.519051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:10521 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.493 [2024-04-16 12:49:15.519076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:16.493 [2024-04-16 12:49:15.529209] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d30e0) with pdu=0x2000190e4de8 00:21:16.493 [2024-04-16 
12:49:15.530573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:17871 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.493 [2024-04-16 12:49:15.530598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:16.493 [2024-04-16 12:49:15.540665] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d30e0) with pdu=0x2000190ed920 00:21:16.493 [2024-04-16 12:49:15.542016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:10646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.493 [2024-04-16 12:49:15.542041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:16.493 [2024-04-16 12:49:15.552173] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d30e0) with pdu=0x2000190de8a8 00:21:16.493 [2024-04-16 12:49:15.553515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:15402 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.493 [2024-04-16 12:49:15.553539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:16.752 [2024-04-16 12:49:15.564453] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d30e0) with pdu=0x2000190f31b8 00:21:16.752 [2024-04-16 12:49:15.565862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:19477 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.752 [2024-04-16 12:49:15.565887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:16.752 [2024-04-16 12:49:15.575986] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d30e0) with pdu=0x2000190f7538 00:21:16.753 [2024-04-16 12:49:15.577327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:1623 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.753 [2024-04-16 12:49:15.577352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:16.753 [2024-04-16 12:49:15.587466] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d30e0) with pdu=0x2000190f5be8 00:21:16.753 [2024-04-16 12:49:15.588829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:21541 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.753 [2024-04-16 12:49:15.588868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:16.753 [2024-04-16 12:49:15.598967] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d30e0) with pdu=0x2000190fd208 00:21:16.753 [2024-04-16 12:49:15.600320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:6832 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.753 [2024-04-16 12:49:15.600344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:16.753 [2024-04-16 12:49:15.610684] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d30e0) with pdu=0x2000190e73e0 
00:21:16.753 [2024-04-16 12:49:15.612131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:5048 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.753 [2024-04-16 12:49:15.612158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:16.753 [2024-04-16 12:49:15.622738] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d30e0) with pdu=0x2000190eb328 00:21:16.753 [2024-04-16 12:49:15.624113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:19692 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.753 [2024-04-16 12:49:15.624139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:16.753 [2024-04-16 12:49:15.634435] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d30e0) with pdu=0x2000190feb58 00:21:16.753 [2024-04-16 12:49:15.635817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:7896 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.753 [2024-04-16 12:49:15.635857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:16.753 [2024-04-16 12:49:15.646045] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d30e0) with pdu=0x2000190f1868 00:21:16.753 [2024-04-16 12:49:15.647397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20308 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.753 [2024-04-16 12:49:15.647422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:16.753 [2024-04-16 12:49:15.657681] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d30e0) with pdu=0x2000190e3d08 00:21:16.753 [2024-04-16 12:49:15.659043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:14934 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.753 [2024-04-16 12:49:15.659068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:16.753 [2024-04-16 12:49:15.669202] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d30e0) with pdu=0x2000190e5658 00:21:16.753 [2024-04-16 12:49:15.670559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.753 [2024-04-16 12:49:15.670590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:16.753 [2024-04-16 12:49:15.680698] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d30e0) with pdu=0x2000190df988 00:21:16.753 [2024-04-16 12:49:15.682058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:3451 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.753 [2024-04-16 12:49:15.682083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:16.753 [2024-04-16 12:49:15.693754] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d30e0) 
with pdu=0x2000190fa3a0 00:21:16.753 [2024-04-16 12:49:15.695713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:3763 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.753 [2024-04-16 12:49:15.695740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:16.753 [2024-04-16 12:49:15.702195] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d30e0) with pdu=0x2000190fd640 00:21:16.753 [2024-04-16 12:49:15.703073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:3766 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.753 [2024-04-16 12:49:15.703108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:21:16.753 [2024-04-16 12:49:15.715141] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d30e0) with pdu=0x2000190ee190 00:21:16.753 [2024-04-16 12:49:15.716457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:16819 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.753 [2024-04-16 12:49:15.716488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:21:16.753 [2024-04-16 12:49:15.727536] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d30e0) with pdu=0x2000190ed920 00:21:16.753 [2024-04-16 12:49:15.729065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:6988 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.753 [2024-04-16 12:49:15.729090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:16.753 [2024-04-16 12:49:15.740403] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d30e0) with pdu=0x2000190f4b08 00:21:16.753 [2024-04-16 12:49:15.742086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:5205 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.753 [2024-04-16 12:49:15.742111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.753 [2024-04-16 12:49:15.750528] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d30e0) with pdu=0x2000190e3498 00:21:16.753 [2024-04-16 12:49:15.752396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:21003 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.753 [2024-04-16 12:49:15.752421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.753 [2024-04-16 12:49:15.761427] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d30e0) with pdu=0x2000190f96f8 00:21:16.753 [2024-04-16 12:49:15.762443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:21624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.753 [2024-04-16 12:49:15.762467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:21:16.753 [2024-04-16 12:49:15.773024] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x15d30e0) with pdu=0x2000190e3498 00:21:16.753 [2024-04-16 12:49:15.774015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:4343 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.753 [2024-04-16 12:49:15.774040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:21:16.753 [2024-04-16 12:49:15.784528] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d30e0) with pdu=0x2000190f96f8 00:21:16.753 [2024-04-16 12:49:15.785481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:12466 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.753 [2024-04-16 12:49:15.785506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:21:16.753 [2024-04-16 12:49:15.796162] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d30e0) with pdu=0x2000190f0ff8 00:21:16.753 [2024-04-16 12:49:15.797129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:7677 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.753 [2024-04-16 12:49:15.797155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:16.753 [2024-04-16 12:49:15.808146] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d30e0) with pdu=0x2000190e4578 00:21:16.753 [2024-04-16 12:49:15.809241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:1648 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.753 [2024-04-16 12:49:15.809267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:16.753 [2024-04-16 12:49:15.820349] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d30e0) with pdu=0x2000190f46d0 00:21:17.012 [2024-04-16 12:49:15.821518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:18076 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.012 [2024-04-16 12:49:15.821552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:17.012 [2024-04-16 12:49:15.832118] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d30e0) with pdu=0x2000190de038 00:21:17.012 [2024-04-16 12:49:15.833178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:21300 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.012 [2024-04-16 12:49:15.833204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:17.012 [2024-04-16 12:49:15.843524] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d30e0) with pdu=0x2000190e6fa8 00:21:17.012 [2024-04-16 12:49:15.844662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:25247 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.012 [2024-04-16 12:49:15.844688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:17.012 [2024-04-16 12:49:15.854992] tcp.c:2047:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x15d30e0) with pdu=0x2000190fac10 00:21:17.012 [2024-04-16 12:49:15.856179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:20668 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.012 [2024-04-16 12:49:15.856204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:17.012 [2024-04-16 12:49:15.866798] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d30e0) with pdu=0x2000190f6020 00:21:17.012 [2024-04-16 12:49:15.867895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:21959 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.012 [2024-04-16 12:49:15.867924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:17.012 [2024-04-16 12:49:15.878734] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d30e0) with pdu=0x2000190f7970 00:21:17.012 [2024-04-16 12:49:15.879799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:19510 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.012 [2024-04-16 12:49:15.879827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:17.013 [2024-04-16 12:49:15.890385] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d30e0) with pdu=0x2000190e2c28 00:21:17.013 [2024-04-16 12:49:15.891557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:13434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.013 [2024-04-16 12:49:15.891598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:17.013 [2024-04-16 12:49:15.904250] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d30e0) with pdu=0x2000190fb480 00:21:17.013 [2024-04-16 12:49:15.905653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:11374 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.013 [2024-04-16 12:49:15.905679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:21:17.013 [2024-04-16 12:49:15.917990] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d30e0) with pdu=0x2000190f4b08 00:21:17.013 [2024-04-16 12:49:15.919557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:10101 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.013 [2024-04-16 12:49:15.919610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:17.013 [2024-04-16 12:49:15.931679] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d30e0) with pdu=0x2000190f35f0 00:21:17.013 [2024-04-16 12:49:15.933398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:21248 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.013 [2024-04-16 12:49:15.933430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:17.013 [2024-04-16 12:49:15.945271] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d30e0) with pdu=0x2000190e12d8
00:21:17.013 [2024-04-16 12:49:15.947195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:23168 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:17.013 [2024-04-16 12:49:15.947226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
[... the same three-line sequence repeats for every outstanding WRITE from 12:49:15.958801 through 12:49:16.573736: a data_crc32_calc_done data digest error on tqpair=(0x15d30e0) with a varying pdu, the affected WRITE command, and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion ...]
00:21:17.531 [2024-04-16 12:49:16.585114] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d30e0) with pdu=0x2000190e49b0
[2024-04-16 12:49:16.586693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:14344 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-04-16 12:49:16.586718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:21:17.531
00:21:17.531 Latency(us)
00:21:17.531 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:17.531 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:21:17.531 nvme0n1 : 2.01 20499.21 80.08 0.00 0.00 6233.55 2572.89 16505.36
00:21:17.531 ===================================================================================================================
00:21:17.531 Total : 20499.21 80.08 0.00 0.00 6233.55 2572.89 16505.36
00:21:17.531 0
00:21:17.788 12:49:16 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:21:17.788 12:49:16 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:21:17.788 12:49:16 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:21:17.788 | .driver_specific
00:21:17.788 | .nvme_error
00:21:17.788 | .status_code
00:21:17.788 | .command_transient_transport_error'
00:21:17.788 12:49:16 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:21:17.788 12:49:16 -- host/digest.sh@71 -- # (( 161 > 0 ))
00:21:17.788 12:49:16 -- host/digest.sh@73 -- # killprocess 1261919
00:21:17.788 12:49:16 -- common/autotest_common.sh@936 -- # '[' -z 1261919 ']'
00:21:17.788 12:49:16 -- common/autotest_common.sh@940 -- # kill -0 1261919
00:21:17.788 12:49:16 -- common/autotest_common.sh@941 -- # uname
00:21:17.788 12:49:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:21:17.788 12:49:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1261919
00:21:18.046 12:49:16 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:21:18.046 12:49:16 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:21:18.046 12:49:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1261919'
killing process with pid 1261919
12:49:16 -- common/autotest_common.sh@955 -- # kill 1261919
Received shutdown signal, test time was about 2.000000 seconds
00:21:18.046
00:21:18.046 Latency(us)
00:21:18.046 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:18.046 ===================================================================================================================
00:21:18.046 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
12:49:16 -- common/autotest_common.sh@960 -- # wait 1261919
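The bdevperf summary above is internally consistent: 20499.21 IOPS × 4096 B ≈ 80.08 MiB/s (the MiB/s column), and a queue depth of 128 / 20499.21 IOPS ≈ 6.24 ms, matching the 6233.55 us average latency. The get_transient_errcount check in the trace boils down to one RPC plus a jq filter; a minimal standalone sketch of it, assuming rpc.py and jq are on PATH, bdevperf is still listening on /var/tmp/bperf.sock, and the counters were enabled earlier via bdev_nvme_set_options --nvme-error-stat:

  # Sketch, not part of the log: count WRITEs that completed with
  # TRANSIENT TRANSPORT ERROR (00/22) and fail unless at least one did.
  errcount=$(scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
    jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
  (( errcount > 0 ))   # in this run the check was (( 161 > 0 ))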
00:21:18.304 12:49:17 -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:21:18.304 12:49:17 -- host/digest.sh@54 -- # local rw bs qd
00:21:18.304 12:49:17 -- host/digest.sh@56 -- # rw=randwrite
00:21:18.304 12:49:17 -- host/digest.sh@56 -- # bs=131072
00:21:18.304 12:49:17 -- host/digest.sh@56 -- # qd=16
00:21:18.304 12:49:17 -- host/digest.sh@58 -- # bperfpid=1262454
00:21:18.304 12:49:17 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:21:18.304 12:49:17 -- host/digest.sh@60 -- # waitforlisten 1262454 /var/tmp/bperf.sock
00:21:18.304 12:49:17 -- common/autotest_common.sh@817 -- # '[' -z 1262454 ']'
00:21:18.304 12:49:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock
00:21:18.304 12:49:17 -- common/autotest_common.sh@822 -- # local max_retries=100
00:21:18.304 12:49:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
12:49:17 -- common/autotest_common.sh@826 -- # xtrace_disable
00:21:18.304 12:49:17 -- common/autotest_common.sh@10 -- # set +x
00:21:18.304 [2024-04-16 12:49:17.181272] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization...
00:21:18.304 [2024-04-16 12:49:17.181346] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1262454 ]
00:21:18.304 I/O size of 131072 is greater than zero copy threshold (65536).
00:21:18.304 Zero copy mechanism will not be used.
00:21:18.304 EAL: No free 2048 kB hugepages reported on node 1
00:21:18.304 [2024-04-16 12:49:17.251272] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:18.304 [2024-04-16 12:49:17.357002] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:21:18.562 12:49:17 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:21:18.562 12:49:17 -- common/autotest_common.sh@850 -- # return 0
00:21:18.562 12:49:17 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:21:18.562 12:49:17 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:21:18.820 12:49:17 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:21:18.820 12:49:17 -- common/autotest_common.sh@549 -- # xtrace_disable
00:21:18.820 12:49:17 -- common/autotest_common.sh@10 -- # set +x
00:21:18.820 12:49:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:21:18.820 12:49:17 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:21:18.820 12:49:17 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:21:19.078 nvme0n1
00:21:19.078 12:49:18 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:21:19.078 12:49:18 -- common/autotest_common.sh@549 -- # xtrace_disable
00:21:19.078 12:49:18 -- common/autotest_common.sh@10 -- # set +x
00:21:19.078 12:49:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:21:19.078 12:49:18 -- host/digest.sh@69 -- # bperf_py perform_tests
00:21:19.078 12:49:18 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:21:19.336 I/O size of 131072 is greater than zero copy threshold (65536).
00:21:19.336 Zero copy mechanism will not be used.
00:21:19.336 Running I/O for 2 seconds...
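Stripped of the xtrace noise, the setup just traced reduces to a short RPC sequence. A sketch of it follows, with hedges: rpc_cmd addresses the default RPC socket of the app started earlier in the job (assumed to be /var/tmp/spdk.sock), while bperf_rpc addresses bdevperf on /var/tmp/bperf.sock, and the accel_error_inject_error flags are reproduced verbatim from the trace rather than interpreted:

  BPERF='scripts/rpc.py -s /var/tmp/bperf.sock'             # bdevperf, the NVMe/TCP host side
  TGT='scripts/rpc.py'                                      # default RPC socket (assumption)
  $BPERF bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1  # keep error counters, never retry
  $TGT accel_error_inject_error -o crc32c -t disable        # leave CRC32C intact while attaching
  $BPERF bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0          # --ddgst: data PDUs carry a CRC32C digest
  $TGT accel_error_inject_error -o crc32c -t corrupt -i 32  # start corrupting CRC32C results
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests  # drive the randwrite load

Each corrupted digest then surfaces below as a data_crc32_calc_done *ERROR* followed by a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion for the affected WRITE.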
00:21:19.336 [2024-04-16 12:49:18.265073] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d35c0) with pdu=0x2000190fef90
[2024-04-16 12:49:18.265500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.337 [2024-04-16 12:49:18.265554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... the same three-line sequence repeats on tqpair=(0x15d35c0), always with pdu=0x2000190fef90, for each 32-block (131072-byte) WRITE issued from 12:49:18.274880 through 12:49:18.986149 ...]
00:21:20.117 [2024-04-16 12:49:18.993968] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d35c0) with pdu=0x2000190fef90
[2024-04-16 12:49:18.994320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-04-16 12:49:18.994356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.117 [2024-04-16 12:49:19.002754] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d35c0) with pdu=0x2000190fef90 00:21:20.117 [2024-04-16 12:49:19.003058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.117 [2024-04-16 12:49:19.003102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.117 [2024-04-16 12:49:19.011032] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d35c0) with pdu=0x2000190fef90 00:21:20.117 [2024-04-16 12:49:19.011339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.117 [2024-04-16 12:49:19.011366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.117 [2024-04-16 12:49:19.018888] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d35c0) with pdu=0x2000190fef90 00:21:20.117 [2024-04-16 12:49:19.019191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.117 [2024-04-16 12:49:19.019219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.117 [2024-04-16 12:49:19.026789] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d35c0) with pdu=0x2000190fef90 00:21:20.117 [2024-04-16 12:49:19.027082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.117 [2024-04-16 12:49:19.027108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.117 [2024-04-16 12:49:19.035220] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d35c0) with pdu=0x2000190fef90 00:21:20.117 [2024-04-16 12:49:19.035505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.117 [2024-04-16 12:49:19.035532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.117 [2024-04-16 12:49:19.043437] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d35c0) with pdu=0x2000190fef90 00:21:20.117 [2024-04-16 12:49:19.043804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.117 [2024-04-16 12:49:19.043831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.117 [2024-04-16 12:49:19.051274] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d35c0) with pdu=0x2000190fef90 00:21:20.117 [2024-04-16 12:49:19.051570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.117 [2024-04-16 12:49:19.051613] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.117 [2024-04-16 12:49:19.059466] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d35c0) with pdu=0x2000190fef90 00:21:20.117 [2024-04-16 12:49:19.059800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.117 [2024-04-16 12:49:19.059837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.117 [2024-04-16 12:49:19.067842] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d35c0) with pdu=0x2000190fef90 00:21:20.117 [2024-04-16 12:49:19.068129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.117 [2024-04-16 12:49:19.068155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.117 [2024-04-16 12:49:19.075758] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d35c0) with pdu=0x2000190fef90 00:21:20.117 [2024-04-16 12:49:19.076057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.117 [2024-04-16 12:49:19.076084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.118 [2024-04-16 12:49:19.083848] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d35c0) with pdu=0x2000190fef90 00:21:20.118 [2024-04-16 12:49:19.084174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.118 [2024-04-16 12:49:19.084203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.118 [2024-04-16 12:49:19.091945] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d35c0) with pdu=0x2000190fef90 00:21:20.118 [2024-04-16 12:49:19.092301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.118 [2024-04-16 12:49:19.092327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.118 [2024-04-16 12:49:19.099799] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d35c0) with pdu=0x2000190fef90 00:21:20.118 [2024-04-16 12:49:19.100093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.118 [2024-04-16 12:49:19.100119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.118 [2024-04-16 12:49:19.107518] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d35c0) with pdu=0x2000190fef90 00:21:20.118 [2024-04-16 12:49:19.107832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.118 [2024-04-16 12:49:19.107875] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.118 [2024-04-16 12:49:19.115473] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d35c0) with pdu=0x2000190fef90 00:21:20.118 [2024-04-16 12:49:19.115786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.118 [2024-04-16 12:49:19.115822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.118 [2024-04-16 12:49:19.123719] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d35c0) with pdu=0x2000190fef90 00:21:20.118 [2024-04-16 12:49:19.124023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.118 [2024-04-16 12:49:19.124054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.118 [2024-04-16 12:49:19.131438] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d35c0) with pdu=0x2000190fef90 00:21:20.118 [2024-04-16 12:49:19.131749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.118 [2024-04-16 12:49:19.131777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.118 [2024-04-16 12:49:19.139233] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d35c0) with pdu=0x2000190fef90 00:21:20.118 [2024-04-16 12:49:19.139516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.118 [2024-04-16 12:49:19.139542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.118 [2024-04-16 12:49:19.147358] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d35c0) with pdu=0x2000190fef90 00:21:20.118 [2024-04-16 12:49:19.147702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.118 [2024-04-16 12:49:19.147730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.118 [2024-04-16 12:49:19.155485] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d35c0) with pdu=0x2000190fef90 00:21:20.118 [2024-04-16 12:49:19.155815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.118 [2024-04-16 12:49:19.155853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.118 [2024-04-16 12:49:19.163709] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d35c0) with pdu=0x2000190fef90 00:21:20.118 [2024-04-16 12:49:19.164028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:20.118 [2024-04-16 12:49:19.164055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.118 [2024-04-16 12:49:19.171777] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d35c0) with pdu=0x2000190fef90 00:21:20.118 [2024-04-16 12:49:19.172104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.118 [2024-04-16 12:49:19.172131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.118 [2024-04-16 12:49:19.180020] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d35c0) with pdu=0x2000190fef90 00:21:20.118 [2024-04-16 12:49:19.180317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.118 [2024-04-16 12:49:19.180345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.383 [2024-04-16 12:49:19.188461] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d35c0) with pdu=0x2000190fef90 00:21:20.383 [2024-04-16 12:49:19.188769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.383 [2024-04-16 12:49:19.188797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.383 [2024-04-16 12:49:19.197139] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d35c0) with pdu=0x2000190fef90 00:21:20.383 [2024-04-16 12:49:19.197499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.383 [2024-04-16 12:49:19.197525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.383 [2024-04-16 12:49:19.205065] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d35c0) with pdu=0x2000190fef90 00:21:20.383 [2024-04-16 12:49:19.205385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.383 [2024-04-16 12:49:19.205411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.383 [2024-04-16 12:49:19.213267] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d35c0) with pdu=0x2000190fef90 00:21:20.383 [2024-04-16 12:49:19.213595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.383 [2024-04-16 12:49:19.213623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.383 [2024-04-16 12:49:19.221485] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d35c0) with pdu=0x2000190fef90 00:21:20.383 [2024-04-16 12:49:19.221828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.383 [2024-04-16 12:49:19.221855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.383 [2024-04-16 12:49:19.229812] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d35c0) with pdu=0x2000190fef90 00:21:20.383 [2024-04-16 12:49:19.230135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.383 [2024-04-16 12:49:19.230169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.383 [2024-04-16 12:49:19.237734] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d35c0) with pdu=0x2000190fef90 00:21:20.383 [2024-04-16 12:49:19.238073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.383 [2024-04-16 12:49:19.238100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.383 [2024-04-16 12:49:19.245966] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d35c0) with pdu=0x2000190fef90 00:21:20.383 [2024-04-16 12:49:19.246285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.383 [2024-04-16 12:49:19.246324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.383 [2024-04-16 12:49:19.254396] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d35c0) with pdu=0x2000190fef90 00:21:20.383 [2024-04-16 12:49:19.254739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.383 [2024-04-16 12:49:19.254769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.383 [2024-04-16 12:49:19.262639] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d35c0) with pdu=0x2000190fef90 00:21:20.383 [2024-04-16 12:49:19.262950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.383 [2024-04-16 12:49:19.262977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.383 [2024-04-16 12:49:19.270988] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d35c0) with pdu=0x2000190fef90 00:21:20.383 [2024-04-16 12:49:19.271354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.383 [2024-04-16 12:49:19.271380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.383 [2024-04-16 12:49:19.279211] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d35c0) with pdu=0x2000190fef90 00:21:20.383 [2024-04-16 12:49:19.279498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.383 [2024-04-16 12:49:19.279525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.383 [2024-04-16 12:49:19.287241] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d35c0) with pdu=0x2000190fef90 00:21:20.383 [2024-04-16 12:49:19.287529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.383 [2024-04-16 12:49:19.287579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.383 [2024-04-16 12:49:19.295584] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d35c0) with pdu=0x2000190fef90 00:21:20.383 [2024-04-16 12:49:19.295888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.383 [2024-04-16 12:49:19.295941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.383 [2024-04-16 12:49:19.304067] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d35c0) with pdu=0x2000190fef90 00:21:20.383 [2024-04-16 12:49:19.304405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.383 [2024-04-16 12:49:19.304431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.383 [2024-04-16 12:49:19.313497] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d35c0) with pdu=0x2000190fef90 00:21:20.383 [2024-04-16 12:49:19.313870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.383 [2024-04-16 12:49:19.313913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.383 [2024-04-16 12:49:19.322508] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d35c0) with pdu=0x2000190fef90 00:21:20.383 [2024-04-16 12:49:19.322819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.383 [2024-04-16 12:49:19.322848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.383 [2024-04-16 12:49:19.332000] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d35c0) with pdu=0x2000190fef90 00:21:20.383 [2024-04-16 12:49:19.332354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.383 [2024-04-16 12:49:19.332380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.383 [2024-04-16 12:49:19.342072] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d35c0) with pdu=0x2000190fef90 00:21:20.383 [2024-04-16 12:49:19.342359] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.383 [2024-04-16 12:49:19.342385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.383 [2024-04-16 12:49:19.350246] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d35c0) with pdu=0x2000190fef90 00:21:20.383 [2024-04-16 12:49:19.350645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.383 [2024-04-16 12:49:19.350688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.383 [2024-04-16 12:49:19.359649] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d35c0) with pdu=0x2000190fef90 00:21:20.384 [2024-04-16 12:49:19.360087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.384 [2024-04-16 12:49:19.360121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.384 [2024-04-16 12:49:19.369381] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d35c0) with pdu=0x2000190fef90 00:21:20.384 [2024-04-16 12:49:19.369695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.384 [2024-04-16 12:49:19.369738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.384 [2024-04-16 12:49:19.378801] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d35c0) with pdu=0x2000190fef90 00:21:20.384 [2024-04-16 12:49:19.379165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.384 [2024-04-16 12:49:19.379191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.384 [2024-04-16 12:49:19.387459] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d35c0) with pdu=0x2000190fef90 00:21:20.384 [2024-04-16 12:49:19.387850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.384 [2024-04-16 12:49:19.387893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.384 [2024-04-16 12:49:19.396759] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d35c0) with pdu=0x2000190fef90 00:21:20.384 [2024-04-16 12:49:19.397109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.384 [2024-04-16 12:49:19.397137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.384 [2024-04-16 12:49:19.405701] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d35c0) with pdu=0x2000190fef90 00:21:20.384 
[2024-04-16 12:49:19.406095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.384 [2024-04-16 12:49:19.406142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.384 [2024-04-16 12:49:19.415216] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d35c0) with pdu=0x2000190fef90 00:21:20.384 [2024-04-16 12:49:19.415585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.384 [2024-04-16 12:49:19.415613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.384 [2024-04-16 12:49:19.424293] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d35c0) with pdu=0x2000190fef90 00:21:20.384 [2024-04-16 12:49:19.424651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.384 [2024-04-16 12:49:19.424679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.384 [2024-04-16 12:49:19.433820] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d35c0) with pdu=0x2000190fef90 00:21:20.384 [2024-04-16 12:49:19.434186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.384 [2024-04-16 12:49:19.434212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.384 [2024-04-16 12:49:19.443212] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d35c0) with pdu=0x2000190fef90 00:21:20.384 [2024-04-16 12:49:19.443631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.384 [2024-04-16 12:49:19.443666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.667 [2024-04-16 12:49:19.452730] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d35c0) with pdu=0x2000190fef90 00:21:20.667 [2024-04-16 12:49:19.453167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.667 [2024-04-16 12:49:19.453197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.667 [2024-04-16 12:49:19.461512] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d35c0) with pdu=0x2000190fef90 00:21:20.667 [2024-04-16 12:49:19.461951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.667 [2024-04-16 12:49:19.461979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.667 [2024-04-16 12:49:19.470699] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x15d35c0) with pdu=0x2000190fef90 00:21:20.667 [2024-04-16 12:49:19.471086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.667 [2024-04-16 12:49:19.471113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.667 [2024-04-16 12:49:19.479744] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d35c0) with pdu=0x2000190fef90 00:21:20.667 [2024-04-16 12:49:19.480142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.667 [2024-04-16 12:49:19.480170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.667 [2024-04-16 12:49:19.488613] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d35c0) with pdu=0x2000190fef90 00:21:20.667 [2024-04-16 12:49:19.489086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.667 [2024-04-16 12:49:19.489112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.667 [2024-04-16 12:49:19.497469] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d35c0) with pdu=0x2000190fef90 00:21:20.667 [2024-04-16 12:49:19.497856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.667 [2024-04-16 12:49:19.497906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.667 [2024-04-16 12:49:19.506840] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d35c0) with pdu=0x2000190fef90 00:21:20.667 [2024-04-16 12:49:19.507279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.667 [2024-04-16 12:49:19.507320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.667 [2024-04-16 12:49:19.516969] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d35c0) with pdu=0x2000190fef90 00:21:20.667 [2024-04-16 12:49:19.517304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.667 [2024-04-16 12:49:19.517331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.667 [2024-04-16 12:49:19.525629] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d35c0) with pdu=0x2000190fef90 00:21:20.668 [2024-04-16 12:49:19.525944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.668 [2024-04-16 12:49:19.525980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.668 [2024-04-16 12:49:19.533996] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d35c0) with pdu=0x2000190fef90 00:21:20.668 [2024-04-16 12:49:19.534280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.668 [2024-04-16 12:49:19.534307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.668 [2024-04-16 12:49:19.542748] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d35c0) with pdu=0x2000190fef90 00:21:20.668 [2024-04-16 12:49:19.543079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.668 [2024-04-16 12:49:19.543106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.668 [2024-04-16 12:49:19.551227] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d35c0) with pdu=0x2000190fef90 00:21:20.668 [2024-04-16 12:49:19.551514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.668 [2024-04-16 12:49:19.551540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.668 [2024-04-16 12:49:19.559606] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d35c0) with pdu=0x2000190fef90 00:21:20.668 [2024-04-16 12:49:19.559902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.668 [2024-04-16 12:49:19.559928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.668 [2024-04-16 12:49:19.567793] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d35c0) with pdu=0x2000190fef90 00:21:20.668 [2024-04-16 12:49:19.568094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.668 [2024-04-16 12:49:19.568120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.668 [2024-04-16 12:49:19.576239] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d35c0) with pdu=0x2000190fef90 00:21:20.668 [2024-04-16 12:49:19.576522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.668 [2024-04-16 12:49:19.576570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.668 [2024-04-16 12:49:19.584374] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d35c0) with pdu=0x2000190fef90 00:21:20.668 [2024-04-16 12:49:19.584728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.668 [2024-04-16 12:49:19.584757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:21:20.668 [2024-04-16 12:49:19.592389] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d35c0) with pdu=0x2000190fef90 00:21:20.668 [2024-04-16 12:49:19.592754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.668 [2024-04-16 12:49:19.592810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.668 [2024-04-16 12:49:19.601538] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d35c0) with pdu=0x2000190fef90 00:21:20.668 [2024-04-16 12:49:19.601839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.668 [2024-04-16 12:49:19.601881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.668 [2024-04-16 12:49:19.609480] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d35c0) with pdu=0x2000190fef90 00:21:20.668 [2024-04-16 12:49:19.609766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.668 [2024-04-16 12:49:19.609792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.668 [2024-04-16 12:49:19.617292] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d35c0) with pdu=0x2000190fef90 00:21:20.668 [2024-04-16 12:49:19.617604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.668 [2024-04-16 12:49:19.617631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.668 [2024-04-16 12:49:19.624677] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d35c0) with pdu=0x2000190fef90 00:21:20.668 [2024-04-16 12:49:19.625070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.668 [2024-04-16 12:49:19.625110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.668 [2024-04-16 12:49:19.632665] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d35c0) with pdu=0x2000190fef90 00:21:20.668 [2024-04-16 12:49:19.632991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.668 [2024-04-16 12:49:19.633018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.668 [2024-04-16 12:49:19.641370] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d35c0) with pdu=0x2000190fef90 00:21:20.668 [2024-04-16 12:49:19.641679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.668 [2024-04-16 12:49:19.641707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.668 [2024-04-16 12:49:19.649689] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d35c0) with pdu=0x2000190fef90 00:21:20.668 [2024-04-16 12:49:19.650073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.668 [2024-04-16 12:49:19.650105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.668 [2024-04-16 12:49:19.657512] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d35c0) with pdu=0x2000190fef90 00:21:20.668 [2024-04-16 12:49:19.657814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.668 [2024-04-16 12:49:19.657848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.668 [2024-04-16 12:49:19.666499] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d35c0) with pdu=0x2000190fef90 00:21:20.668 [2024-04-16 12:49:19.666837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.668 [2024-04-16 12:49:19.666867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.668 [2024-04-16 12:49:19.674914] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d35c0) with pdu=0x2000190fef90 00:21:20.668 [2024-04-16 12:49:19.675231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.668 [2024-04-16 12:49:19.675266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.668 [2024-04-16 12:49:19.683000] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d35c0) with pdu=0x2000190fef90 00:21:20.668 [2024-04-16 12:49:19.683313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.668 [2024-04-16 12:49:19.683344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.668 [2024-04-16 12:49:19.691303] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d35c0) with pdu=0x2000190fef90 00:21:20.668 [2024-04-16 12:49:19.691622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.668 [2024-04-16 12:49:19.691665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.668 [2024-04-16 12:49:19.699821] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d35c0) with pdu=0x2000190fef90 00:21:20.668 [2024-04-16 12:49:19.700152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.668 [2024-04-16 12:49:19.700184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.668 [2024-04-16 12:49:19.707966] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d35c0) with pdu=0x2000190fef90 00:21:20.668 [2024-04-16 12:49:19.708278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.668 [2024-04-16 12:49:19.708309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.668 [2024-04-16 12:49:19.715858] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d35c0) with pdu=0x2000190fef90 00:21:20.668 [2024-04-16 12:49:19.716177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.668 [2024-04-16 12:49:19.716207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.668 [2024-04-16 12:49:19.724319] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d35c0) with pdu=0x2000190fef90 00:21:20.668 [2024-04-16 12:49:19.724655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.668 [2024-04-16 12:49:19.724681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.668 [2024-04-16 12:49:19.732613] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d35c0) with pdu=0x2000190fef90 00:21:20.668 [2024-04-16 12:49:19.732919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.668 [2024-04-16 12:49:19.732950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.928 [2024-04-16 12:49:19.740807] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d35c0) with pdu=0x2000190fef90 00:21:20.928 [2024-04-16 12:49:19.741152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.928 [2024-04-16 12:49:19.741183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.928 [2024-04-16 12:49:19.748768] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d35c0) with pdu=0x2000190fef90 00:21:20.928 [2024-04-16 12:49:19.749111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.928 [2024-04-16 12:49:19.749141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.928 [2024-04-16 12:49:19.757027] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d35c0) with pdu=0x2000190fef90 00:21:20.928 [2024-04-16 12:49:19.757352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.928 [2024-04-16 12:49:19.757383] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.928
[... 2024-04-16 12:49:19.765558 through 12:49:20.257143: roughly sixty further WRITE commands (sqid:1 cid:15 nsid:1, len:32) fail in the same way, each logging tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d35c0) with pdu=0x2000190fef90 followed by nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15; the entries differ only in timestamp, lba, and sqhd and are elided here ...]
00:21:21.447
00:21:21.447 Latency(us)
00:21:21.447 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:21.447 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:21:21.447 nvme0n1 : 2.00 3621.32 452.67 0.00 0.00 4407.46 3470.98 10971.21
00:21:21.447 ===================================================================================================================
00:21:21.447 Total : 3621.32 452.67 0.00 0.00
4407.46 3470.98 10971.21 00:21:21.447 0 00:21:21.447 12:49:20 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:21:21.447 12:49:20 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:21:21.447 12:49:20 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:21:21.447 12:49:20 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:21:21.447 | .driver_specific 00:21:21.447 | .nvme_error 00:21:21.447 | .status_code 00:21:21.447 | .command_transient_transport_error' 00:21:21.705 12:49:20 -- host/digest.sh@71 -- # (( 234 > 0 )) 00:21:21.705 12:49:20 -- host/digest.sh@73 -- # killprocess 1262454 00:21:21.705 12:49:20 -- common/autotest_common.sh@936 -- # '[' -z 1262454 ']' 00:21:21.705 12:49:20 -- common/autotest_common.sh@940 -- # kill -0 1262454 00:21:21.705 12:49:20 -- common/autotest_common.sh@941 -- # uname 00:21:21.705 12:49:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:21.705 12:49:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1262454 00:21:21.705 12:49:20 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:21:21.705 12:49:20 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:21:21.705 12:49:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1262454' 00:21:21.705 killing process with pid 1262454 00:21:21.705 12:49:20 -- common/autotest_common.sh@955 -- # kill 1262454 00:21:21.705 Received shutdown signal, test time was about 2.000000 seconds 00:21:21.705 00:21:21.705 Latency(us) 00:21:21.705 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:21.705 =================================================================================================================== 00:21:21.706 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:21.706 12:49:20 -- common/autotest_common.sh@960 -- # wait 1262454 00:21:21.963 12:49:20 -- host/digest.sh@116 -- # killprocess 1260828 00:21:21.963 12:49:20 -- common/autotest_common.sh@936 -- # '[' -z 1260828 ']' 00:21:21.963 12:49:20 -- common/autotest_common.sh@940 -- # kill -0 1260828 00:21:21.963 12:49:20 -- common/autotest_common.sh@941 -- # uname 00:21:21.963 12:49:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:21.963 12:49:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1260828 00:21:21.963 12:49:20 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:21.963 12:49:20 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:21.963 12:49:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1260828' 00:21:21.963 killing process with pid 1260828 00:21:21.963 12:49:20 -- common/autotest_common.sh@955 -- # kill 1260828 00:21:21.963 12:49:20 -- common/autotest_common.sh@960 -- # wait 1260828 00:21:22.222 00:21:22.222 real 0m17.068s 00:21:22.222 user 0m33.062s 00:21:22.222 sys 0m4.820s 00:21:22.222 12:49:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:22.222 12:49:21 -- common/autotest_common.sh@10 -- # set +x 00:21:22.222 ************************************ 00:21:22.222 END TEST nvmf_digest_error 00:21:22.222 ************************************ 00:21:22.222 12:49:21 -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:21:22.222 12:49:21 -- host/digest.sh@150 -- # nvmftestfini 00:21:22.222 12:49:21 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:22.222 12:49:21 -- nvmf/common.sh@117 -- # sync 00:21:22.222 12:49:21 -- nvmf/common.sh@119 -- # '[' tcp == 
tcp ']' 00:21:22.222 12:49:21 -- nvmf/common.sh@120 -- # set +e 00:21:22.222 12:49:21 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:22.222 12:49:21 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:22.222 rmmod nvme_tcp 00:21:22.222 rmmod nvme_fabrics 00:21:22.222 rmmod nvme_keyring 00:21:22.222 12:49:21 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:22.222 12:49:21 -- nvmf/common.sh@124 -- # set -e 00:21:22.222 12:49:21 -- nvmf/common.sh@125 -- # return 0 00:21:22.222 12:49:21 -- nvmf/common.sh@478 -- # '[' -n 1260828 ']' 00:21:22.222 12:49:21 -- nvmf/common.sh@479 -- # killprocess 1260828 00:21:22.222 12:49:21 -- common/autotest_common.sh@936 -- # '[' -z 1260828 ']' 00:21:22.222 12:49:21 -- common/autotest_common.sh@940 -- # kill -0 1260828 00:21:22.222 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (1260828) - No such process 00:21:22.222 12:49:21 -- common/autotest_common.sh@963 -- # echo 'Process with pid 1260828 is not found' 00:21:22.222 Process with pid 1260828 is not found 00:21:22.222 12:49:21 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:22.222 12:49:21 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:22.222 12:49:21 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:21:22.222 12:49:21 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:22.222 12:49:21 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:22.222 12:49:21 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:22.222 12:49:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:22.222 12:49:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:24.753 12:49:23 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:24.753 00:21:24.753 real 0m40.786s 00:21:24.753 user 1m11.629s 00:21:24.753 sys 0m11.576s 00:21:24.753 12:49:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:24.753 12:49:23 -- common/autotest_common.sh@10 -- # set +x 00:21:24.753 ************************************ 00:21:24.753 END TEST nvmf_digest 00:21:24.753 ************************************ 00:21:24.753 12:49:23 -- nvmf/nvmf.sh@108 -- # [[ 0 -eq 1 ]] 00:21:24.753 12:49:23 -- nvmf/nvmf.sh@113 -- # [[ 0 -eq 1 ]] 00:21:24.753 12:49:23 -- nvmf/nvmf.sh@118 -- # [[ phy == phy ]] 00:21:24.753 12:49:23 -- nvmf/nvmf.sh@119 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:21:24.753 12:49:23 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:24.753 12:49:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:24.753 12:49:23 -- common/autotest_common.sh@10 -- # set +x 00:21:24.753 ************************************ 00:21:24.753 START TEST nvmf_bdevperf 00:21:24.753 ************************************ 00:21:24.753 12:49:23 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:21:24.753 * Looking for test storage... 
00:21:24.753 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:24.753 12:49:23 -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:24.753 12:49:23 -- nvmf/common.sh@7 -- # uname -s 00:21:24.753 12:49:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:24.753 12:49:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:24.753 12:49:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:24.754 12:49:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:24.754 12:49:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:24.754 12:49:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:24.754 12:49:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:24.754 12:49:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:24.754 12:49:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:24.754 12:49:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:24.754 12:49:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:21:24.754 12:49:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:21:24.754 12:49:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:24.754 12:49:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:24.754 12:49:23 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:24.754 12:49:23 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:24.754 12:49:23 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:24.754 12:49:23 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:24.754 12:49:23 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:24.754 12:49:23 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:24.754 12:49:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:24.754 12:49:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:24.754 12:49:23 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:24.754 12:49:23 -- paths/export.sh@5 -- # export PATH 00:21:24.754 12:49:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:24.754 12:49:23 -- nvmf/common.sh@47 -- # : 0 00:21:24.754 12:49:23 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:24.754 12:49:23 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:24.754 12:49:23 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:24.754 12:49:23 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:24.754 12:49:23 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:24.754 12:49:23 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:24.754 12:49:23 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:24.754 12:49:23 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:24.754 12:49:23 -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:24.754 12:49:23 -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:24.754 12:49:23 -- host/bdevperf.sh@24 -- # nvmftestinit 00:21:24.754 12:49:23 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:24.754 12:49:23 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:24.754 12:49:23 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:24.754 12:49:23 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:24.754 12:49:23 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:24.754 12:49:23 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:24.754 12:49:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:24.754 12:49:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:24.754 12:49:23 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:21:24.754 12:49:23 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:21:24.754 12:49:23 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:24.754 12:49:23 -- common/autotest_common.sh@10 -- # set +x 00:21:27.283 12:49:26 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:27.283 12:49:26 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:27.283 12:49:26 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:27.283 12:49:26 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:27.283 12:49:26 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:27.283 12:49:26 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:27.283 12:49:26 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:27.283 12:49:26 -- nvmf/common.sh@295 -- # net_devs=() 00:21:27.283 12:49:26 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:27.283 12:49:26 -- nvmf/common.sh@296 
-- # e810=() 00:21:27.283 12:49:26 -- nvmf/common.sh@296 -- # local -ga e810 00:21:27.283 12:49:26 -- nvmf/common.sh@297 -- # x722=() 00:21:27.283 12:49:26 -- nvmf/common.sh@297 -- # local -ga x722 00:21:27.283 12:49:26 -- nvmf/common.sh@298 -- # mlx=() 00:21:27.283 12:49:26 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:27.283 12:49:26 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:27.283 12:49:26 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:27.283 12:49:26 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:27.283 12:49:26 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:27.283 12:49:26 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:27.283 12:49:26 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:27.283 12:49:26 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:27.283 12:49:26 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:27.283 12:49:26 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:27.283 12:49:26 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:27.283 12:49:26 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:27.283 12:49:26 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:27.283 12:49:26 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:27.283 12:49:26 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:27.283 12:49:26 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:27.283 12:49:26 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:27.283 12:49:26 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:27.283 12:49:26 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:27.283 12:49:26 -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:21:27.283 Found 0000:82:00.0 (0x8086 - 0x159b) 00:21:27.283 12:49:26 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:27.283 12:49:26 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:27.283 12:49:26 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:27.283 12:49:26 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:27.283 12:49:26 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:27.283 12:49:26 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:27.283 12:49:26 -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:21:27.283 Found 0000:82:00.1 (0x8086 - 0x159b) 00:21:27.283 12:49:26 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:27.283 12:49:26 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:27.283 12:49:26 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:27.283 12:49:26 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:27.283 12:49:26 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:27.283 12:49:26 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:27.283 12:49:26 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:27.283 12:49:26 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:27.283 12:49:26 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:27.283 12:49:26 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:27.283 12:49:26 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:27.283 12:49:26 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:27.283 12:49:26 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:21:27.283 Found 
net devices under 0000:82:00.0: cvl_0_0 00:21:27.283 12:49:26 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:27.283 12:49:26 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:27.283 12:49:26 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:27.283 12:49:26 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:27.283 12:49:26 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:27.283 12:49:26 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:21:27.283 Found net devices under 0000:82:00.1: cvl_0_1 00:21:27.283 12:49:26 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:27.283 12:49:26 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:21:27.283 12:49:26 -- nvmf/common.sh@403 -- # is_hw=yes 00:21:27.283 12:49:26 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:21:27.283 12:49:26 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:21:27.283 12:49:26 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:21:27.283 12:49:26 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:27.283 12:49:26 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:27.283 12:49:26 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:27.283 12:49:26 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:27.283 12:49:26 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:27.283 12:49:26 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:27.283 12:49:26 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:27.283 12:49:26 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:27.283 12:49:26 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:27.283 12:49:26 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:27.283 12:49:26 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:27.283 12:49:26 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:27.283 12:49:26 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:27.283 12:49:26 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:27.283 12:49:26 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:27.283 12:49:26 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:27.283 12:49:26 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:27.283 12:49:26 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:27.283 12:49:26 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:27.283 12:49:26 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:27.283 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:27.283 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.223 ms 00:21:27.283 00:21:27.283 --- 10.0.0.2 ping statistics --- 00:21:27.283 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:27.283 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:21:27.283 12:49:26 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:27.283 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:27.283 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:21:27.283 00:21:27.283 --- 10.0.0.1 ping statistics --- 00:21:27.283 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:27.283 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:21:27.283 12:49:26 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:27.283 12:49:26 -- nvmf/common.sh@411 -- # return 0 00:21:27.283 12:49:26 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:27.283 12:49:26 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:27.283 12:49:26 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:27.283 12:49:26 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:27.283 12:49:26 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:27.283 12:49:26 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:27.283 12:49:26 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:27.283 12:49:26 -- host/bdevperf.sh@25 -- # tgt_init 00:21:27.283 12:49:26 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:21:27.283 12:49:26 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:27.283 12:49:26 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:27.283 12:49:26 -- common/autotest_common.sh@10 -- # set +x 00:21:27.283 12:49:26 -- nvmf/common.sh@470 -- # nvmfpid=1265226 00:21:27.283 12:49:26 -- nvmf/common.sh@471 -- # waitforlisten 1265226 00:21:27.283 12:49:26 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:27.283 12:49:26 -- common/autotest_common.sh@817 -- # '[' -z 1265226 ']' 00:21:27.283 12:49:26 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:27.283 12:49:26 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:27.283 12:49:26 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:27.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:27.283 12:49:26 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:27.283 12:49:26 -- common/autotest_common.sh@10 -- # set +x 00:21:27.283 [2024-04-16 12:49:26.266334] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 00:21:27.283 [2024-04-16 12:49:26.266417] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:27.283 EAL: No free 2048 kB hugepages reported on node 1 00:21:27.283 [2024-04-16 12:49:26.342214] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:27.541 [2024-04-16 12:49:26.449528] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:27.541 [2024-04-16 12:49:26.449608] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:27.541 [2024-04-16 12:49:26.449633] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:27.541 [2024-04-16 12:49:26.449644] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:27.541 [2024-04-16 12:49:26.449654] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
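For reference, the interface plumbing that nvmf_tcp_init just traced boils down to the stand-alone sketch below: one port of the E810 pair (cvl_0_0) is moved into a private network namespace so the SPDK target at 10.0.0.2 and the initiator at 10.0.0.1 exchange traffic over the real phy link. Device names and addresses are simply the ones this run picked; nothing in the sketch is SPDK-specific.

    # target port goes into its own namespace; the initiator port stays in the root ns
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    # initiator side: 10.0.0.1 on the port left behind
    ip addr add 10.0.0.1/24 dev cvl_0_1
    # target side: 10.0.0.2 inside the namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # let NVMe/TCP traffic (port 4420) through the host firewall
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2   # sanity check, as in the trace above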
00:21:27.541 [2024-04-16 12:49:26.449739] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:27.541 [2024-04-16 12:49:26.449807] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:27.541 [2024-04-16 12:49:26.449805] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:28.475 12:49:27 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:28.475 12:49:27 -- common/autotest_common.sh@850 -- # return 0 00:21:28.475 12:49:27 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:28.475 12:49:27 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:28.475 12:49:27 -- common/autotest_common.sh@10 -- # set +x 00:21:28.475 12:49:27 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:28.475 12:49:27 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:28.475 12:49:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:28.475 12:49:27 -- common/autotest_common.sh@10 -- # set +x 00:21:28.475 [2024-04-16 12:49:27.220580] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:28.475 12:49:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:28.475 12:49:27 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:28.475 12:49:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:28.475 12:49:27 -- common/autotest_common.sh@10 -- # set +x 00:21:28.475 Malloc0 00:21:28.475 12:49:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:28.475 12:49:27 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:28.475 12:49:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:28.475 12:49:27 -- common/autotest_common.sh@10 -- # set +x 00:21:28.475 12:49:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:28.475 12:49:27 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:28.475 12:49:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:28.475 12:49:27 -- common/autotest_common.sh@10 -- # set +x 00:21:28.475 12:49:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:28.475 12:49:27 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:28.475 12:49:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:28.475 12:49:27 -- common/autotest_common.sh@10 -- # set +x 00:21:28.475 [2024-04-16 12:49:27.287349] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:28.475 12:49:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:28.475 12:49:27 -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:21:28.475 12:49:27 -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:21:28.475 12:49:27 -- nvmf/common.sh@521 -- # config=() 00:21:28.475 12:49:27 -- nvmf/common.sh@521 -- # local subsystem config 00:21:28.475 12:49:27 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:21:28.475 12:49:27 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:21:28.475 { 00:21:28.475 "params": { 00:21:28.475 "name": "Nvme$subsystem", 00:21:28.475 "trtype": "$TEST_TRANSPORT", 00:21:28.475 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:28.475 "adrfam": "ipv4", 00:21:28.475 "trsvcid": "$NVMF_PORT", 00:21:28.475 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:28.475 
"hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:28.475 "hdgst": ${hdgst:-false}, 00:21:28.475 "ddgst": ${ddgst:-false} 00:21:28.475 }, 00:21:28.475 "method": "bdev_nvme_attach_controller" 00:21:28.475 } 00:21:28.475 EOF 00:21:28.475 )") 00:21:28.475 12:49:27 -- nvmf/common.sh@543 -- # cat 00:21:28.475 12:49:27 -- nvmf/common.sh@545 -- # jq . 00:21:28.475 12:49:27 -- nvmf/common.sh@546 -- # IFS=, 00:21:28.475 12:49:27 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:21:28.475 "params": { 00:21:28.475 "name": "Nvme1", 00:21:28.475 "trtype": "tcp", 00:21:28.475 "traddr": "10.0.0.2", 00:21:28.475 "adrfam": "ipv4", 00:21:28.475 "trsvcid": "4420", 00:21:28.475 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:28.475 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:28.475 "hdgst": false, 00:21:28.475 "ddgst": false 00:21:28.475 }, 00:21:28.475 "method": "bdev_nvme_attach_controller" 00:21:28.475 }' 00:21:28.475 [2024-04-16 12:49:27.333473] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 00:21:28.475 [2024-04-16 12:49:27.333556] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1265377 ] 00:21:28.475 EAL: No free 2048 kB hugepages reported on node 1 00:21:28.475 [2024-04-16 12:49:27.403149] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:28.475 [2024-04-16 12:49:27.513158] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:28.475 [2024-04-16 12:49:27.521999] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:21:28.733 Running I/O for 1 seconds... 00:21:30.108 00:21:30.108 Latency(us) 00:21:30.108 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:30.108 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:30.108 Verification LBA range: start 0x0 length 0x4000 00:21:30.108 Nvme1n1 : 1.01 8563.70 33.45 0.00 0.00 14886.42 3155.44 13398.47 00:21:30.108 =================================================================================================================== 00:21:30.108 Total : 8563.70 33.45 0.00 0.00 14886.42 3155.44 13398.47 00:21:30.108 12:49:29 -- host/bdevperf.sh@30 -- # bdevperfpid=1265527 00:21:30.108 12:49:29 -- host/bdevperf.sh@32 -- # sleep 3 00:21:30.108 12:49:29 -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:21:30.108 12:49:29 -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:21:30.108 12:49:29 -- nvmf/common.sh@521 -- # config=() 00:21:30.108 12:49:29 -- nvmf/common.sh@521 -- # local subsystem config 00:21:30.108 12:49:29 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:21:30.108 12:49:29 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:21:30.108 { 00:21:30.108 "params": { 00:21:30.108 "name": "Nvme$subsystem", 00:21:30.108 "trtype": "$TEST_TRANSPORT", 00:21:30.108 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:30.108 "adrfam": "ipv4", 00:21:30.108 "trsvcid": "$NVMF_PORT", 00:21:30.108 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:30.108 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:30.108 "hdgst": ${hdgst:-false}, 00:21:30.108 "ddgst": ${ddgst:-false} 00:21:30.108 }, 00:21:30.108 "method": "bdev_nvme_attach_controller" 00:21:30.108 } 00:21:30.108 EOF 00:21:30.108 )") 00:21:30.108 12:49:29 -- nvmf/common.sh@543 -- # cat 00:21:30.108 
00:21:30.108 12:49:29 -- nvmf/common.sh@545 -- # jq .
00:21:30.108 12:49:29 -- nvmf/common.sh@546 -- # IFS=,
00:21:30.108 12:49:29 -- nvmf/common.sh@547 -- # printf '%s\n' '{
00:21:30.108 "params": {
00:21:30.108 "name": "Nvme1",
00:21:30.108 "trtype": "tcp",
00:21:30.108 "traddr": "10.0.0.2",
00:21:30.108 "adrfam": "ipv4",
00:21:30.108 "trsvcid": "4420",
00:21:30.108 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:21:30.108 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:21:30.108 "hdgst": false,
00:21:30.108 "ddgst": false
00:21:30.108 },
00:21:30.108 "method": "bdev_nvme_attach_controller"
00:21:30.108 }'
00:21:30.108 [2024-04-16 12:49:29.058539] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization...
00:21:30.108 [2024-04-16 12:49:29.058648] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1265527 ]
00:21:30.108 EAL: No free 2048 kB hugepages reported on node 1
00:21:30.108 [2024-04-16 12:49:29.127403] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:30.366 [2024-04-16 12:49:29.236714] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:21:30.366 [2024-04-16 12:49:29.245505] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started
00:21:30.624 Running I/O for 15 seconds...
00:21:33.157 12:49:32 -- host/bdevperf.sh@33 -- # kill -9 1265226
00:21:33.157 12:49:32 -- host/bdevperf.sh@35 -- # sleep 3
00:21:33.157 [2024-04-16 12:49:32.028790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:34240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:33.157 [2024-04-16 12:49:32.028858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:33.157 [2024-04-16 12:49:32.028892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:34248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:33.157 [2024-04-16 12:49:32.028912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same print_command / "ABORTED - SQ DELETION (00/08)" pair repeats for every queued READ from lba 34256 through lba 34880, step 8; only the cid and timestamp differ ...]
00:21:33.160 [2024-04-16 12:49:32.031638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:34896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:33.160 [2024-04-16 12:49:32.031653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... likewise for every queued WRITE from lba 34904 through lba 35248 ...]
00:21:33.161 [2024-04-16 12:49:32.033176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:35256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:33.161 [2024-04-16 12:49:32.033191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
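Every pair in the dump above is one in-flight command being failed back as the qpair is torn down: nvme_io_qpair_print_command shows the submission (opcode, cid, lba) and spdk_nvme_print_completion shows the synthetic completion with status ABORTED - SQ DELETION (status code type 0x0, status code 0x08, "Command Aborted due to SQ Deletion"), the expected outcome of the kill -9 above ripping the TCP connection out from under an active run. The arithmetic works out to exactly one full queue: 81 READs (lba 34240-34880) plus 46 WRITEs (lba 34896-35256) make 127 aborts here, and the command completed manually just below brings it to 128, i.e. the -q 128 queue depth. A quick way to tally a captured log (bdevperf.log is a hypothetical capture of this output):

awk '/nvme_io_qpair_print_command/ {
    op = ""; lba = -1
    for (i = 1; i <= NF; i++) {
        if ($i == "READ" || $i == "WRITE") op = $i
        if ($i ~ /^lba:/) lba = substr($i, 5) + 0
    }
    if (op == "" || lba < 0) next
    # count the commands failed back per opcode and track the lba span
    n[op]++
    if (!(op in lo) || lba < lo[op]) lo[op] = lba
    if (lba > hi[op]) hi[op] = lba
}
END { for (op in n) printf("%s: %d commands, lba %d..%d\n", op, n[op], lo[op], hi[op]) }' bdevperf.log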
00:21:33.161 [2024-04-16 12:49:32.033207] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb60a80 is same with the state(5) to be set
00:21:33.161 [2024-04-16 12:49:32.033225] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:21:33.161 [2024-04-16 12:49:32.033238] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:21:33.161 [2024-04-16 12:49:32.033251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:34888 len:8 PRP1 0x0 PRP2 0x0
00:21:33.161 [2024-04-16 12:49:32.033265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:33.161 [2024-04-16 12:49:32.033330] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xb60a80 was disconnected and freed. reset controller.
00:21:33.161 [2024-04-16 12:49:32.033410] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:21:33.161 [2024-04-16 12:49:32.033434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:33.161 [2024-04-16 12:49:32.033450] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:21:33.161 [2024-04-16 12:49:32.033465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:33.161 [2024-04-16 12:49:32.033481] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:21:33.161 [2024-04-16 12:49:32.033502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:33.161 [2024-04-16 12:49:32.033519] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:21:33.161 [2024-04-16 12:49:32.033534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:33.161 [2024-04-16 12:49:32.033548] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set
00:21:33.161 [2024-04-16 12:49:32.037300] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:33.161 [2024-04-16 12:49:32.037349] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor
00:21:33.161 [2024-04-16 12:49:32.038054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:33.161 [2024-04-16 12:49:32.038241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:33.161 [2024-04-16 12:49:32.038293] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420
00:21:33.161 [2024-04-16 12:49:32.038311] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set
00:21:33.161 [2024-04-16 12:49:32.038550] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor
00:21:33.161 [2024-04-16 12:49:32.038795] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:33.161 [2024-04-16 12:49:32.038817] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:33.161 [2024-04-16 12:49:32.038833] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:33.161 [2024-04-16 12:49:32.042446] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
[... the same disconnect / connect()-refused / reset-failed cycle repeats at 12:49:32.051 and 12:49:32.065 ...]
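Each failed cycle above is the bdev_nvme reset path running with nobody home: nvme_ctrlr_disconnect drops the dead qpair, the reconnect's connect() fails with errno 111 (ECONNREFUSED, since nothing is listening on 10.0.0.2:4420 now that the target is gone), nvme_ctrlr_process_init and the reconnect poll give up, the controller is marked failed, and _bdev_nvme_reset_ctrlr_complete records the failure before the next attempt roughly 14 ms later. The refusal itself is easy to reproduce by hand while the target is down; a small sketch using bash's /dev/tcp pseudo-device with the address and port from the log:

# With nothing bound to 10.0.0.2:4420, any TCP connect fails with
# ECONNREFUSED (errno 111), the same error posix_sock_create reports
# above. The subshell keeps the failed redirection contained.
if ! timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
    echo "10.0.0.2:4420 refused the connection - target still down"
fi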
[... the cycle continues at roughly 14 ms intervals, starting again at 12:49:32.079, .093, .106, .120, .134, .148, .162, .176 and .190, each attempt failing the same way ...]
00:21:33.163 [2024-04-16 12:49:32.204090] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:33.163 [2024-04-16 12:49:32.204546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:33.163 [2024-04-16 12:49:32.204703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:33.163 [2024-04-16 12:49:32.204727] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420
00:21:33.163 [2024-04-16 12:49:32.204743] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set
00:21:33.163 [2024-04-16 12:49:32.204988] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor
00:21:33.163 [2024-04-16 12:49:32.205231] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:33.163 [2024-04-16 12:49:32.205256] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:33.163 [2024-04-16 12:49:32.205271] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:33.163 [2024-04-16 12:49:32.208773] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:33.163 [2024-04-16 12:49:32.217865] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:33.163 [2024-04-16 12:49:32.218338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:33.163 [2024-04-16 12:49:32.218491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:33.163 [2024-04-16 12:49:32.218519] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:33.163 [2024-04-16 12:49:32.218537] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:33.163 [2024-04-16 12:49:32.218805] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:33.163 [2024-04-16 12:49:32.219046] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:33.163 [2024-04-16 12:49:32.219067] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:33.163 [2024-04-16 12:49:32.219094] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:33.163 [2024-04-16 12:49:32.222628] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:33.422 [2024-04-16 12:49:32.231824] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:33.422 [2024-04-16 12:49:32.232323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:33.422 [2024-04-16 12:49:32.232577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:33.422 [2024-04-16 12:49:32.232608] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:33.422 [2024-04-16 12:49:32.232626] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:33.422 [2024-04-16 12:49:32.232863] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:33.422 [2024-04-16 12:49:32.233105] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:33.422 [2024-04-16 12:49:32.233129] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:33.422 [2024-04-16 12:49:32.233145] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:33.422 [2024-04-16 12:49:32.236695] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:33.422 [2024-04-16 12:49:32.245688] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:33.422 [2024-04-16 12:49:32.246187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:33.422 [2024-04-16 12:49:32.246359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:33.422 [2024-04-16 12:49:32.246407] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:33.422 [2024-04-16 12:49:32.246426] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:33.422 [2024-04-16 12:49:32.246675] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:33.422 [2024-04-16 12:49:32.246923] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:33.422 [2024-04-16 12:49:32.246948] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:33.422 [2024-04-16 12:49:32.246964] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:33.422 [2024-04-16 12:49:32.250509] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:33.422 [2024-04-16 12:49:32.259498] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:33.422 [2024-04-16 12:49:32.259919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:33.422 [2024-04-16 12:49:32.260093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:33.422 [2024-04-16 12:49:32.260146] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:33.422 [2024-04-16 12:49:32.260164] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:33.422 [2024-04-16 12:49:32.260401] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:33.422 [2024-04-16 12:49:32.260659] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:33.422 [2024-04-16 12:49:32.260684] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:33.422 [2024-04-16 12:49:32.260700] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:33.422 [2024-04-16 12:49:32.264246] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:33.422 [2024-04-16 12:49:32.273446] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:33.422 [2024-04-16 12:49:32.273923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:33.422 [2024-04-16 12:49:32.274111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:33.422 [2024-04-16 12:49:32.274168] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:33.422 [2024-04-16 12:49:32.274186] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:33.422 [2024-04-16 12:49:32.274424] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:33.422 [2024-04-16 12:49:32.274676] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:33.422 [2024-04-16 12:49:32.274701] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:33.422 [2024-04-16 12:49:32.274717] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:33.422 [2024-04-16 12:49:32.278267] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:33.422 [2024-04-16 12:49:32.287254] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:33.422 [2024-04-16 12:49:32.287746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:33.422 [2024-04-16 12:49:32.287934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:33.423 [2024-04-16 12:49:32.287982] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:33.423 [2024-04-16 12:49:32.288000] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:33.423 [2024-04-16 12:49:32.288237] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:33.423 [2024-04-16 12:49:32.288480] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:33.423 [2024-04-16 12:49:32.288509] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:33.423 [2024-04-16 12:49:32.288526] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:33.423 [2024-04-16 12:49:32.292079] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:33.423 [2024-04-16 12:49:32.301066] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:33.423 [2024-04-16 12:49:32.301453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:33.423 [2024-04-16 12:49:32.301641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:33.423 [2024-04-16 12:49:32.301671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:33.423 [2024-04-16 12:49:32.301689] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:33.423 [2024-04-16 12:49:32.301926] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:33.423 [2024-04-16 12:49:32.302169] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:33.423 [2024-04-16 12:49:32.302193] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:33.423 [2024-04-16 12:49:32.302209] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:33.423 [2024-04-16 12:49:32.305761] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:33.423 [2024-04-16 12:49:32.314980] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:33.423 [2024-04-16 12:49:32.315433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:33.423 [2024-04-16 12:49:32.315633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:33.423 [2024-04-16 12:49:32.315664] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:33.423 [2024-04-16 12:49:32.315682] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:33.423 [2024-04-16 12:49:32.315920] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:33.423 [2024-04-16 12:49:32.316162] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:33.423 [2024-04-16 12:49:32.316186] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:33.423 [2024-04-16 12:49:32.316202] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:33.423 [2024-04-16 12:49:32.319754] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:33.423 [2024-04-16 12:49:32.328955] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:33.423 [2024-04-16 12:49:32.329416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:33.423 [2024-04-16 12:49:32.329582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:33.423 [2024-04-16 12:49:32.329612] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:33.423 [2024-04-16 12:49:32.329630] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:33.423 [2024-04-16 12:49:32.329868] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:33.423 [2024-04-16 12:49:32.330110] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:33.423 [2024-04-16 12:49:32.330134] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:33.423 [2024-04-16 12:49:32.330155] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:33.423 [2024-04-16 12:49:32.333707] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:33.423 [2024-04-16 12:49:32.342905] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:33.423 [2024-04-16 12:49:32.343408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:33.423 [2024-04-16 12:49:32.343620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:33.423 [2024-04-16 12:49:32.343650] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:33.423 [2024-04-16 12:49:32.343669] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:33.423 [2024-04-16 12:49:32.343916] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:33.423 [2024-04-16 12:49:32.344159] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:33.423 [2024-04-16 12:49:32.344183] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:33.423 [2024-04-16 12:49:32.344199] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:33.423 [2024-04-16 12:49:32.347749] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:33.423 [2024-04-16 12:49:32.356736] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:33.423 [2024-04-16 12:49:32.357242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:33.423 [2024-04-16 12:49:32.357411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:33.423 [2024-04-16 12:49:32.357440] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:33.423 [2024-04-16 12:49:32.357458] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:33.423 [2024-04-16 12:49:32.357717] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:33.423 [2024-04-16 12:49:32.357960] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:33.423 [2024-04-16 12:49:32.357984] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:33.423 [2024-04-16 12:49:32.358000] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:33.423 [2024-04-16 12:49:32.361543] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:33.423 [2024-04-16 12:49:32.370532] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:33.423 [2024-04-16 12:49:32.371050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:33.423 [2024-04-16 12:49:32.371238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:33.423 [2024-04-16 12:49:32.371286] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:33.423 [2024-04-16 12:49:32.371304] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:33.423 [2024-04-16 12:49:32.371547] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:33.423 [2024-04-16 12:49:32.371811] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:33.423 [2024-04-16 12:49:32.371835] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:33.423 [2024-04-16 12:49:32.371851] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:33.423 [2024-04-16 12:49:32.375393] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:33.423 [2024-04-16 12:49:32.384404] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:33.423 [2024-04-16 12:49:32.384831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:33.423 [2024-04-16 12:49:32.385039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:33.423 [2024-04-16 12:49:32.385079] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:33.423 [2024-04-16 12:49:32.385098] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:33.423 [2024-04-16 12:49:32.385336] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:33.423 [2024-04-16 12:49:32.385592] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:33.423 [2024-04-16 12:49:32.385624] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:33.423 [2024-04-16 12:49:32.385640] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:33.423 [2024-04-16 12:49:32.389189] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:33.423 [2024-04-16 12:49:32.398400] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:33.423 [2024-04-16 12:49:32.398824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:33.423 [2024-04-16 12:49:32.399034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:33.423 [2024-04-16 12:49:32.399083] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:33.423 [2024-04-16 12:49:32.399107] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:33.423 [2024-04-16 12:49:32.399343] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:33.423 [2024-04-16 12:49:32.399598] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:33.423 [2024-04-16 12:49:32.399644] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:33.423 [2024-04-16 12:49:32.399660] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:33.423 [2024-04-16 12:49:32.403218] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:33.423 [2024-04-16 12:49:32.412219] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:33.423 [2024-04-16 12:49:32.412679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:33.423 [2024-04-16 12:49:32.412837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:33.423 [2024-04-16 12:49:32.412866] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:33.423 [2024-04-16 12:49:32.412884] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:33.423 [2024-04-16 12:49:32.413121] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:33.424 [2024-04-16 12:49:32.413363] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:33.424 [2024-04-16 12:49:32.413389] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:33.424 [2024-04-16 12:49:32.413405] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:33.424 [2024-04-16 12:49:32.416962] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:33.424 [2024-04-16 12:49:32.426163] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:33.424 [2024-04-16 12:49:32.426681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:33.424 [2024-04-16 12:49:32.426920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:33.424 [2024-04-16 12:49:32.426970] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:33.424 [2024-04-16 12:49:32.426988] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:33.424 [2024-04-16 12:49:32.427226] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:33.424 [2024-04-16 12:49:32.427469] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:33.424 [2024-04-16 12:49:32.427494] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:33.424 [2024-04-16 12:49:32.427510] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:33.424 [2024-04-16 12:49:32.431070] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:33.424 [2024-04-16 12:49:32.440073] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:33.424 [2024-04-16 12:49:32.440611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:33.424 [2024-04-16 12:49:32.440887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:33.424 [2024-04-16 12:49:32.440938] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:33.424 [2024-04-16 12:49:32.440957] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:33.424 [2024-04-16 12:49:32.441195] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:33.424 [2024-04-16 12:49:32.441438] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:33.424 [2024-04-16 12:49:32.441463] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:33.424 [2024-04-16 12:49:32.441479] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:33.424 [2024-04-16 12:49:32.445036] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:33.424 [2024-04-16 12:49:32.454032] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:33.424 [2024-04-16 12:49:32.454546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:33.424 [2024-04-16 12:49:32.454820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:33.424 [2024-04-16 12:49:32.454849] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:33.424 [2024-04-16 12:49:32.454867] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:33.424 [2024-04-16 12:49:32.455105] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:33.424 [2024-04-16 12:49:32.455348] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:33.424 [2024-04-16 12:49:32.455373] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:33.424 [2024-04-16 12:49:32.455389] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:33.424 [2024-04-16 12:49:32.458948] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:33.424 [2024-04-16 12:49:32.467945] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:33.424 [2024-04-16 12:49:32.468601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:33.424 [2024-04-16 12:49:32.468908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:33.424 [2024-04-16 12:49:32.468964] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:33.424 [2024-04-16 12:49:32.468984] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:33.424 [2024-04-16 12:49:32.469228] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:33.424 [2024-04-16 12:49:32.469473] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:33.424 [2024-04-16 12:49:32.469498] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:33.424 [2024-04-16 12:49:32.469514] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:33.424 [2024-04-16 12:49:32.473077] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:33.424 [2024-04-16 12:49:32.481869] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:33.424 [2024-04-16 12:49:32.482403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:33.424 [2024-04-16 12:49:32.482645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:33.424 [2024-04-16 12:49:32.482681] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:33.424 [2024-04-16 12:49:32.482699] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:33.424 [2024-04-16 12:49:32.482937] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:33.424 [2024-04-16 12:49:32.483181] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:33.424 [2024-04-16 12:49:32.483207] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:33.424 [2024-04-16 12:49:32.483223] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:33.424 [2024-04-16 12:49:32.486783] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:33.684 [2024-04-16 12:49:32.495788] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:33.684 [2024-04-16 12:49:32.496325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:33.684 [2024-04-16 12:49:32.496599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:33.684 [2024-04-16 12:49:32.496630] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:33.684 [2024-04-16 12:49:32.496648] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:33.684 [2024-04-16 12:49:32.496887] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:33.684 [2024-04-16 12:49:32.497129] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:33.684 [2024-04-16 12:49:32.497154] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:33.684 [2024-04-16 12:49:32.497170] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:33.684 [2024-04-16 12:49:32.500724] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:33.684 [2024-04-16 12:49:32.509723] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:33.684 [2024-04-16 12:49:32.510286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:33.684 [2024-04-16 12:49:32.510549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:33.684 [2024-04-16 12:49:32.510590] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:33.684 [2024-04-16 12:49:32.510615] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:33.684 [2024-04-16 12:49:32.510855] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:33.684 [2024-04-16 12:49:32.511099] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:33.684 [2024-04-16 12:49:32.511125] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:33.684 [2024-04-16 12:49:32.511141] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:33.684 [2024-04-16 12:49:32.514697] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:33.684 [2024-04-16 12:49:32.523696] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:33.684 [2024-04-16 12:49:32.524227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:33.684 [2024-04-16 12:49:32.524525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:33.684 [2024-04-16 12:49:32.524587] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:33.684 [2024-04-16 12:49:32.524607] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:33.684 [2024-04-16 12:49:32.524845] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:33.684 [2024-04-16 12:49:32.525087] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:33.684 [2024-04-16 12:49:32.525112] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:33.684 [2024-04-16 12:49:32.525128] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:33.684 [2024-04-16 12:49:32.528680] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:33.684 [2024-04-16 12:49:32.537701] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:33.684 [2024-04-16 12:49:32.538189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:33.684 [2024-04-16 12:49:32.538401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:33.685 [2024-04-16 12:49:32.538451] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:33.685 [2024-04-16 12:49:32.538469] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:33.685 [2024-04-16 12:49:32.538718] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:33.685 [2024-04-16 12:49:32.538963] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:33.685 [2024-04-16 12:49:32.538988] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:33.685 [2024-04-16 12:49:32.539004] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:33.685 [2024-04-16 12:49:32.542556] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:33.685 [2024-04-16 12:49:32.551591] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:33.685 [2024-04-16 12:49:32.552113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:33.685 [2024-04-16 12:49:32.552317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:33.685 [2024-04-16 12:49:32.552367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:33.685 [2024-04-16 12:49:32.552385] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:33.685 [2024-04-16 12:49:32.552639] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:33.685 [2024-04-16 12:49:32.552883] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:33.685 [2024-04-16 12:49:32.552909] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:33.685 [2024-04-16 12:49:32.552925] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:33.685 [2024-04-16 12:49:32.556470] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:33.685 [2024-04-16 12:49:32.565476] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:33.685 [2024-04-16 12:49:32.566002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:33.685 [2024-04-16 12:49:32.566358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:33.685 [2024-04-16 12:49:32.566409] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:33.685 [2024-04-16 12:49:32.566427] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:33.685 [2024-04-16 12:49:32.566681] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:33.685 [2024-04-16 12:49:32.566924] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:33.685 [2024-04-16 12:49:32.566949] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:33.685 [2024-04-16 12:49:32.566966] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:33.685 [2024-04-16 12:49:32.570511] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:33.685 [2024-04-16 12:49:32.579305] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:33.685 [2024-04-16 12:49:32.579828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:33.685 [2024-04-16 12:49:32.580147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:33.685 [2024-04-16 12:49:32.580177] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:33.685 [2024-04-16 12:49:32.580194] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:33.685 [2024-04-16 12:49:32.580432] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:33.685 [2024-04-16 12:49:32.580688] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:33.685 [2024-04-16 12:49:32.580714] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:33.685 [2024-04-16 12:49:32.580730] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:33.685 [2024-04-16 12:49:32.584274] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:33.685 [2024-04-16 12:49:32.593284] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:33.685 [2024-04-16 12:49:32.593930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:33.685 [2024-04-16 12:49:32.594271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:33.685 [2024-04-16 12:49:32.594322] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:33.685 [2024-04-16 12:49:32.594341] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:33.685 [2024-04-16 12:49:32.594601] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:33.685 [2024-04-16 12:49:32.594851] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:33.685 [2024-04-16 12:49:32.594877] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:33.685 [2024-04-16 12:49:32.594893] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:33.685 [2024-04-16 12:49:32.598445] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:33.685 [2024-04-16 12:49:32.607246] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:33.685 [2024-04-16 12:49:32.607768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:33.685 [2024-04-16 12:49:32.608063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:33.685 [2024-04-16 12:49:32.608115] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:33.685 [2024-04-16 12:49:32.608134] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:33.685 [2024-04-16 12:49:32.608373] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:33.685 [2024-04-16 12:49:32.608628] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:33.685 [2024-04-16 12:49:32.608654] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:33.685 [2024-04-16 12:49:32.608670] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:33.685 [2024-04-16 12:49:32.612219] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:33.685 [2024-04-16 12:49:32.621221] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:33.685 [2024-04-16 12:49:32.621714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:33.685 [2024-04-16 12:49:32.621929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:33.685 [2024-04-16 12:49:32.621987] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:33.685 [2024-04-16 12:49:32.622006] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:33.685 [2024-04-16 12:49:32.622243] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:33.685 [2024-04-16 12:49:32.622486] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:33.685 [2024-04-16 12:49:32.622512] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:33.685 [2024-04-16 12:49:32.622528] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:33.685 [2024-04-16 12:49:32.626087] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:33.685 [2024-04-16 12:49:32.635086] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:33.685 [2024-04-16 12:49:32.635578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:33.685 [2024-04-16 12:49:32.635843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:33.685 [2024-04-16 12:49:32.635872] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:33.685 [2024-04-16 12:49:32.635890] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:33.685 [2024-04-16 12:49:32.636128] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:33.685 [2024-04-16 12:49:32.636371] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:33.685 [2024-04-16 12:49:32.636401] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:33.685 [2024-04-16 12:49:32.636417] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:33.685 [2024-04-16 12:49:32.639971] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:33.685 [2024-04-16 12:49:32.648968] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:33.685 [2024-04-16 12:49:32.649453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:33.685 [2024-04-16 12:49:32.649722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:33.685 [2024-04-16 12:49:32.649754] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:33.685 [2024-04-16 12:49:32.649772] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:33.685 [2024-04-16 12:49:32.650010] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:33.685 [2024-04-16 12:49:32.650254] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:33.685 [2024-04-16 12:49:32.650280] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:33.685 [2024-04-16 12:49:32.650296] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:33.685 [2024-04-16 12:49:32.653849] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:33.685 [2024-04-16 12:49:32.662855] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:33.685 [2024-04-16 12:49:32.663349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:33.685 [2024-04-16 12:49:32.663591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:33.685 [2024-04-16 12:49:32.663621] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:33.685 [2024-04-16 12:49:32.663639] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:33.685 [2024-04-16 12:49:32.663876] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:33.686 [2024-04-16 12:49:32.664118] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:33.686 [2024-04-16 12:49:32.664142] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:33.686 [2024-04-16 12:49:32.664158] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:33.686 [2024-04-16 12:49:32.667718] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:33.686 [2024-04-16 12:49:32.676720] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:33.686 [2024-04-16 12:49:32.677232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:33.686 [2024-04-16 12:49:32.677535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:33.686 [2024-04-16 12:49:32.677596] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:33.686 [2024-04-16 12:49:32.677615] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:33.686 [2024-04-16 12:49:32.677856] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:33.686 [2024-04-16 12:49:32.678098] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:33.686 [2024-04-16 12:49:32.678123] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:33.686 [2024-04-16 12:49:32.678147] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:33.686 [2024-04-16 12:49:32.681704] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:33.686 [2024-04-16 12:49:32.690705] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:33.686 [2024-04-16 12:49:32.691204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:33.686 [2024-04-16 12:49:32.691536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:33.686 [2024-04-16 12:49:32.691576] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:33.686 [2024-04-16 12:49:32.691597] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:33.686 [2024-04-16 12:49:32.691836] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:33.686 [2024-04-16 12:49:32.692077] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:33.686 [2024-04-16 12:49:32.692103] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:33.686 [2024-04-16 12:49:32.692118] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:33.686 [2024-04-16 12:49:32.695673] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:33.686 [2024-04-16 12:49:32.704673] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:33.686 [2024-04-16 12:49:32.705149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:33.686 [2024-04-16 12:49:32.705471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:33.686 [2024-04-16 12:49:32.705520] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:33.686 [2024-04-16 12:49:32.705538] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:33.686 [2024-04-16 12:49:32.705789] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:33.686 [2024-04-16 12:49:32.706031] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:33.686 [2024-04-16 12:49:32.706057] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:33.686 [2024-04-16 12:49:32.706073] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:33.686 [2024-04-16 12:49:32.709626] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:33.686 [2024-04-16 12:49:32.718616] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:33.686 [2024-04-16 12:49:32.719133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:33.686 [2024-04-16 12:49:32.719432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:33.686 [2024-04-16 12:49:32.719482] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:33.686 [2024-04-16 12:49:32.719500] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:33.686 [2024-04-16 12:49:32.719754] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:33.686 [2024-04-16 12:49:32.719996] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:33.686 [2024-04-16 12:49:32.720022] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:33.686 [2024-04-16 12:49:32.720038] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:33.686 [2024-04-16 12:49:32.723597] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:33.686 [2024-04-16 12:49:32.732596] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:33.686 [2024-04-16 12:49:32.733094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:33.686 [2024-04-16 12:49:32.733390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:33.686 [2024-04-16 12:49:32.733434] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:33.686 [2024-04-16 12:49:32.733452] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:33.686 [2024-04-16 12:49:32.733704] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:33.686 [2024-04-16 12:49:32.733947] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:33.686 [2024-04-16 12:49:32.733972] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:33.686 [2024-04-16 12:49:32.733988] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:33.686 [2024-04-16 12:49:32.737535] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:33.686 [2024-04-16 12:49:32.746531] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:33.686 [2024-04-16 12:49:32.747074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:33.686 [2024-04-16 12:49:32.747358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:33.686 [2024-04-16 12:49:32.747408] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:33.686 [2024-04-16 12:49:32.747426] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:33.686 [2024-04-16 12:49:32.747681] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:33.686 [2024-04-16 12:49:32.747923] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:33.686 [2024-04-16 12:49:32.747948] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:33.686 [2024-04-16 12:49:32.747965] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:33.686 [2024-04-16 12:49:32.751511] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:33.946 [2024-04-16 12:49:32.760517] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:33.946 [2024-04-16 12:49:32.761053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:33.946 [2024-04-16 12:49:32.761310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:33.946 [2024-04-16 12:49:32.761359] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:33.946 [2024-04-16 12:49:32.761377] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:33.946 [2024-04-16 12:49:32.761629] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:33.946 [2024-04-16 12:49:32.761873] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:33.946 [2024-04-16 12:49:32.761898] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:33.946 [2024-04-16 12:49:32.761914] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:33.946 [2024-04-16 12:49:32.765459] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:33.946 [2024-04-16 12:49:32.774467] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:33.946 [2024-04-16 12:49:32.774916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:33.946 [2024-04-16 12:49:32.775124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:33.946 [2024-04-16 12:49:32.775173] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:33.946 [2024-04-16 12:49:32.775192] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:33.946 [2024-04-16 12:49:32.775436] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:33.946 [2024-04-16 12:49:32.775692] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:33.946 [2024-04-16 12:49:32.775719] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:33.946 [2024-04-16 12:49:32.775735] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:33.946 [2024-04-16 12:49:32.779280] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:33.946 [2024-04-16 12:49:32.788276] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:33.946 [2024-04-16 12:49:32.788778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:33.946 [2024-04-16 12:49:32.788949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:33.946 [2024-04-16 12:49:32.788979] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:33.946 [2024-04-16 12:49:32.788997] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:33.946 [2024-04-16 12:49:32.789236] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:33.946 [2024-04-16 12:49:32.789479] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:33.946 [2024-04-16 12:49:32.789505] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:33.946 [2024-04-16 12:49:32.789521] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:33.946 [2024-04-16 12:49:32.793082] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:33.946 [2024-04-16 12:49:32.802090] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:33.946 [2024-04-16 12:49:32.802637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:33.946 [2024-04-16 12:49:32.802842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:33.946 [2024-04-16 12:49:32.802870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:33.946 [2024-04-16 12:49:32.802888] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:33.946 [2024-04-16 12:49:32.803125] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:33.946 [2024-04-16 12:49:32.803368] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:33.946 [2024-04-16 12:49:32.803393] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:33.946 [2024-04-16 12:49:32.803409] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:33.946 [2024-04-16 12:49:32.806969] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:33.946 [2024-04-16 12:49:32.815978] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:33.946 [2024-04-16 12:49:32.816424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:33.946 [2024-04-16 12:49:32.816654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:33.946 [2024-04-16 12:49:32.816686] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:33.946 [2024-04-16 12:49:32.816704] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:33.946 [2024-04-16 12:49:32.816942] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:33.946 [2024-04-16 12:49:32.817185] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:33.946 [2024-04-16 12:49:32.817210] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:33.946 [2024-04-16 12:49:32.817225] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:33.946 [2024-04-16 12:49:32.820784] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:33.946 [2024-04-16 12:49:32.829827] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:33.946 [2024-04-16 12:49:32.830339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:33.946 [2024-04-16 12:49:32.830571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:33.946 [2024-04-16 12:49:32.830601] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:33.946 [2024-04-16 12:49:32.830619] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:33.946 [2024-04-16 12:49:32.830857] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:33.946 [2024-04-16 12:49:32.831099] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:33.946 [2024-04-16 12:49:32.831123] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:33.946 [2024-04-16 12:49:32.831139] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:33.946 [2024-04-16 12:49:32.834701] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:33.946 [2024-04-16 12:49:32.843725] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:33.946 [2024-04-16 12:49:32.844173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:33.946 [2024-04-16 12:49:32.844323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:33.946 [2024-04-16 12:49:32.844352] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:33.946 [2024-04-16 12:49:32.844370] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:33.946 [2024-04-16 12:49:32.844619] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:33.946 [2024-04-16 12:49:32.844862] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:33.946 [2024-04-16 12:49:32.844887] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:33.946 [2024-04-16 12:49:32.844903] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:33.946 [2024-04-16 12:49:32.848454] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:33.946 [2024-04-16 12:49:32.857697] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:33.946 [2024-04-16 12:49:32.858138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:33.946 [2024-04-16 12:49:32.858317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:33.946 [2024-04-16 12:49:32.858352] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:33.946 [2024-04-16 12:49:32.858371] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:33.946 [2024-04-16 12:49:32.858622] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:33.947 [2024-04-16 12:49:32.858865] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:33.947 [2024-04-16 12:49:32.858890] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:33.947 [2024-04-16 12:49:32.858906] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:33.947 [2024-04-16 12:49:32.862457] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:33.947 [2024-04-16 12:49:32.871700] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:33.947 [2024-04-16 12:49:32.872143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:33.947 [2024-04-16 12:49:32.872368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:33.947 [2024-04-16 12:49:32.872418] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:33.947 [2024-04-16 12:49:32.872436] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:33.947 [2024-04-16 12:49:32.872685] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:33.947 [2024-04-16 12:49:32.872928] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:33.947 [2024-04-16 12:49:32.872953] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:33.947 [2024-04-16 12:49:32.872969] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:33.947 [2024-04-16 12:49:32.876520] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:33.947 [2024-04-16 12:49:32.885541] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:33.947 [2024-04-16 12:49:32.885998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:33.947 [2024-04-16 12:49:32.886146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:33.947 [2024-04-16 12:49:32.886175] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:33.947 [2024-04-16 12:49:32.886193] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:33.947 [2024-04-16 12:49:32.886431] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:33.947 [2024-04-16 12:49:32.886687] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:33.947 [2024-04-16 12:49:32.886712] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:33.947 [2024-04-16 12:49:32.886727] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:33.947 [2024-04-16 12:49:32.890279] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:33.947 [2024-04-16 12:49:32.899509] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:33.947 [2024-04-16 12:49:32.899942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:33.947 [2024-04-16 12:49:32.900114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:33.947 [2024-04-16 12:49:32.900143] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:33.947 [2024-04-16 12:49:32.900167] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:33.947 [2024-04-16 12:49:32.900405] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:33.947 [2024-04-16 12:49:32.900671] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:33.947 [2024-04-16 12:49:32.900696] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:33.947 [2024-04-16 12:49:32.900712] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:33.947 [2024-04-16 12:49:32.904260] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:33.947 [2024-04-16 12:49:32.913470] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:33.947 [2024-04-16 12:49:32.913886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:33.947 [2024-04-16 12:49:32.914108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:33.947 [2024-04-16 12:49:32.914158] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:33.947 [2024-04-16 12:49:32.914177] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:33.947 [2024-04-16 12:49:32.914419] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:33.947 [2024-04-16 12:49:32.914672] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:33.947 [2024-04-16 12:49:32.914697] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:33.947 [2024-04-16 12:49:32.914712] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:33.947 [2024-04-16 12:49:32.918259] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:33.947 [2024-04-16 12:49:32.927468] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:33.947 [2024-04-16 12:49:32.927926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:33.947 [2024-04-16 12:49:32.928140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:33.947 [2024-04-16 12:49:32.928189] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:33.947 [2024-04-16 12:49:32.928207] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:33.947 [2024-04-16 12:49:32.928445] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:33.947 [2024-04-16 12:49:32.928699] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:33.947 [2024-04-16 12:49:32.928724] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:33.947 [2024-04-16 12:49:32.928739] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:33.947 [2024-04-16 12:49:32.932285] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:33.947 [2024-04-16 12:49:32.941288] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:33.947 [2024-04-16 12:49:32.941689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:33.947 [2024-04-16 12:49:32.941868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:33.947 [2024-04-16 12:49:32.941924] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:33.947 [2024-04-16 12:49:32.941942] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:33.947 [2024-04-16 12:49:32.942186] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:33.947 [2024-04-16 12:49:32.942428] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:33.947 [2024-04-16 12:49:32.942453] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:33.947 [2024-04-16 12:49:32.942468] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:33.947 [2024-04-16 12:49:32.946030] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:33.947 [2024-04-16 12:49:32.955246] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:33.947 [2024-04-16 12:49:32.955644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:33.947 [2024-04-16 12:49:32.955802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:33.947 [2024-04-16 12:49:32.955831] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:33.947 [2024-04-16 12:49:32.955849] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:33.947 [2024-04-16 12:49:32.956087] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:33.947 [2024-04-16 12:49:32.956329] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:33.947 [2024-04-16 12:49:32.956353] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:33.947 [2024-04-16 12:49:32.956370] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:33.947 [2024-04-16 12:49:32.959921] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:33.947 [2024-04-16 12:49:32.969122] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:33.947 [2024-04-16 12:49:32.969605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:33.947 [2024-04-16 12:49:32.969819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:33.947 [2024-04-16 12:49:32.969869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:33.947 [2024-04-16 12:49:32.969887] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:33.947 [2024-04-16 12:49:32.970124] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:33.947 [2024-04-16 12:49:32.970366] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:33.947 [2024-04-16 12:49:32.970391] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:33.947 [2024-04-16 12:49:32.970406] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:33.947 [2024-04-16 12:49:32.973958] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:33.947 [2024-04-16 12:49:32.982951] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:33.947 [2024-04-16 12:49:32.983442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:33.947 [2024-04-16 12:49:32.983677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:33.947 [2024-04-16 12:49:32.983708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:33.947 [2024-04-16 12:49:32.983726] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:33.947 [2024-04-16 12:49:32.983964] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:33.947 [2024-04-16 12:49:32.984213] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:33.947 [2024-04-16 12:49:32.984239] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:33.948 [2024-04-16 12:49:32.984254] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:33.948 [2024-04-16 12:49:32.987804] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:33.948 [2024-04-16 12:49:32.996797] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:33.948 [2024-04-16 12:49:32.997288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:33.948 [2024-04-16 12:49:32.997499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:33.948 [2024-04-16 12:49:32.997527] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:33.948 [2024-04-16 12:49:32.997545] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:33.948 [2024-04-16 12:49:32.997792] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:33.948 [2024-04-16 12:49:32.998041] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:33.948 [2024-04-16 12:49:32.998067] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:33.948 [2024-04-16 12:49:32.998083] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:33.948 [2024-04-16 12:49:33.001638] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:33.948 [2024-04-16 12:49:33.010648] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:33.948 [2024-04-16 12:49:33.011100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:33.948 [2024-04-16 12:49:33.011370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:33.948 [2024-04-16 12:49:33.011419] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:33.948 [2024-04-16 12:49:33.011437] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:33.948 [2024-04-16 12:49:33.011690] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:33.948 [2024-04-16 12:49:33.011932] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:33.948 [2024-04-16 12:49:33.011958] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:33.948 [2024-04-16 12:49:33.011974] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:34.224 [2024-04-16 12:49:33.015522] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:34.224 [2024-04-16 12:49:33.024544] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:34.224 [2024-04-16 12:49:33.025044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:34.224 [2024-04-16 12:49:33.025323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:34.224 [2024-04-16 12:49:33.025374] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:34.224 [2024-04-16 12:49:33.025391] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:34.224 [2024-04-16 12:49:33.025647] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:34.224 [2024-04-16 12:49:33.025889] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:34.224 [2024-04-16 12:49:33.025921] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:34.224 [2024-04-16 12:49:33.025938] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:34.224 [2024-04-16 12:49:33.029487] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:34.224 [2024-04-16 12:49:33.038491] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:34.224 [2024-04-16 12:49:33.038973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:34.224 [2024-04-16 12:49:33.039231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:34.224 [2024-04-16 12:49:33.039282] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:34.224 [2024-04-16 12:49:33.039299] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:34.224 [2024-04-16 12:49:33.039537] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:34.224 [2024-04-16 12:49:33.039793] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:34.224 [2024-04-16 12:49:33.039820] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:34.224 [2024-04-16 12:49:33.039836] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:34.224 [2024-04-16 12:49:33.043386] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:34.224 [2024-04-16 12:49:33.052397] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:34.224 [2024-04-16 12:49:33.052855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:34.224 [2024-04-16 12:49:33.053126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:34.224 [2024-04-16 12:49:33.053177] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:34.225 [2024-04-16 12:49:33.053196] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:34.225 [2024-04-16 12:49:33.053433] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:34.225 [2024-04-16 12:49:33.053687] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:34.225 [2024-04-16 12:49:33.053713] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:34.225 [2024-04-16 12:49:33.053728] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:34.225 [2024-04-16 12:49:33.057276] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:34.225 [2024-04-16 12:49:33.066275] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:34.225 [2024-04-16 12:49:33.066761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:34.225 [2024-04-16 12:49:33.066959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:34.225 [2024-04-16 12:49:33.067010] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:34.225 [2024-04-16 12:49:33.067029] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:34.225 [2024-04-16 12:49:33.067267] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:34.225 [2024-04-16 12:49:33.067509] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:34.225 [2024-04-16 12:49:33.067535] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:34.225 [2024-04-16 12:49:33.067557] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:34.225 [2024-04-16 12:49:33.071224] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:34.225 [2024-04-16 12:49:33.080225] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:34.225 [2024-04-16 12:49:33.080727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:34.225 [2024-04-16 12:49:33.080893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:34.225 [2024-04-16 12:49:33.080941] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:34.225 [2024-04-16 12:49:33.080959] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:34.225 [2024-04-16 12:49:33.081197] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:34.225 [2024-04-16 12:49:33.081440] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:34.225 [2024-04-16 12:49:33.081466] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:34.225 [2024-04-16 12:49:33.081482] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:34.225 [2024-04-16 12:49:33.085039] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:34.225 [2024-04-16 12:49:33.094040] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:34.225 [2024-04-16 12:49:33.094609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:34.225 [2024-04-16 12:49:33.094829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:34.225 [2024-04-16 12:49:33.094879] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:34.225 [2024-04-16 12:49:33.094896] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:34.225 [2024-04-16 12:49:33.095134] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:34.225 [2024-04-16 12:49:33.095376] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:34.225 [2024-04-16 12:49:33.095401] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:34.225 [2024-04-16 12:49:33.095417] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:34.225 [2024-04-16 12:49:33.098972] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:34.225 [2024-04-16 12:49:33.107972] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:34.225 [2024-04-16 12:49:33.108597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:34.225 [2024-04-16 12:49:33.108867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:34.225 [2024-04-16 12:49:33.108916] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:34.225 [2024-04-16 12:49:33.108935] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:34.225 [2024-04-16 12:49:33.109181] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:34.225 [2024-04-16 12:49:33.109426] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:34.225 [2024-04-16 12:49:33.109452] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:34.225 [2024-04-16 12:49:33.109468] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:34.225 [2024-04-16 12:49:33.113045] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:34.225 [2024-04-16 12:49:33.121845] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:34.225 [2024-04-16 12:49:33.122373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:34.225 [2024-04-16 12:49:33.122614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:34.225 [2024-04-16 12:49:33.122645] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:34.225 [2024-04-16 12:49:33.122663] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:34.225 [2024-04-16 12:49:33.122902] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:34.225 [2024-04-16 12:49:33.123147] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:34.225 [2024-04-16 12:49:33.123173] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:34.225 [2024-04-16 12:49:33.123189] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:34.225 [2024-04-16 12:49:33.126748] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:34.225 [2024-04-16 12:49:33.135746] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:34.225 [2024-04-16 12:49:33.136265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:34.225 [2024-04-16 12:49:33.136500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:34.225 [2024-04-16 12:49:33.136529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:34.225 [2024-04-16 12:49:33.136548] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:34.225 [2024-04-16 12:49:33.136809] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:34.225 [2024-04-16 12:49:33.137054] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:34.225 [2024-04-16 12:49:33.137079] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:34.225 [2024-04-16 12:49:33.137095] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:34.225 [2024-04-16 12:49:33.140650] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:34.225 [2024-04-16 12:49:33.149672] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:34.225 [2024-04-16 12:49:33.150144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:34.225 [2024-04-16 12:49:33.150400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:34.225 [2024-04-16 12:49:33.150449] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:34.225 [2024-04-16 12:49:33.150467] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:34.225 [2024-04-16 12:49:33.150721] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:34.225 [2024-04-16 12:49:33.150963] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:34.225 [2024-04-16 12:49:33.150988] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:34.225 [2024-04-16 12:49:33.151004] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:34.225 [2024-04-16 12:49:33.154551] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:34.225 [2024-04-16 12:49:33.163551] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:34.225 [2024-04-16 12:49:33.164172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:34.225 [2024-04-16 12:49:33.164520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:34.225 [2024-04-16 12:49:33.164584] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:34.225 [2024-04-16 12:49:33.164606] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:34.225 [2024-04-16 12:49:33.164851] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:34.225 [2024-04-16 12:49:33.165094] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:34.225 [2024-04-16 12:49:33.165120] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:34.225 [2024-04-16 12:49:33.165137] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:34.225 [2024-04-16 12:49:33.168702] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:34.225 [2024-04-16 12:49:33.177501] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:34.225 [2024-04-16 12:49:33.178010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:34.225 [2024-04-16 12:49:33.178239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:34.225 [2024-04-16 12:49:33.178290] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:34.225 [2024-04-16 12:49:33.178320] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:34.225 [2024-04-16 12:49:33.178559] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:34.225 [2024-04-16 12:49:33.178817] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:34.225 [2024-04-16 12:49:33.178843] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:34.226 [2024-04-16 12:49:33.178860] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:34.226 [2024-04-16 12:49:33.182411] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:34.226 [2024-04-16 12:49:33.191411] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:34.226 [2024-04-16 12:49:33.191913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:34.226 [2024-04-16 12:49:33.192186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:34.226 [2024-04-16 12:49:33.192236] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:34.226 [2024-04-16 12:49:33.192254] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:34.226 [2024-04-16 12:49:33.192492] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:34.226 [2024-04-16 12:49:33.192749] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:34.226 [2024-04-16 12:49:33.192775] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:34.226 [2024-04-16 12:49:33.192791] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:34.226 [2024-04-16 12:49:33.196343] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:34.226 [2024-04-16 12:49:33.205350] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:34.226 [2024-04-16 12:49:33.205775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:34.226 [2024-04-16 12:49:33.206041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:34.226 [2024-04-16 12:49:33.206091] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:34.226 [2024-04-16 12:49:33.206109] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:34.226 [2024-04-16 12:49:33.206348] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:34.226 [2024-04-16 12:49:33.206603] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:34.226 [2024-04-16 12:49:33.206628] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:34.226 [2024-04-16 12:49:33.206645] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:34.226 [2024-04-16 12:49:33.210220] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:34.226 [2024-04-16 12:49:33.219227] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:34.226 [2024-04-16 12:49:33.219651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:34.226 [2024-04-16 12:49:33.219825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:34.226 [2024-04-16 12:49:33.219855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:34.226 [2024-04-16 12:49:33.219873] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:34.226 [2024-04-16 12:49:33.220110] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:34.226 [2024-04-16 12:49:33.220354] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:34.226 [2024-04-16 12:49:33.220379] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:34.226 [2024-04-16 12:49:33.220394] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:34.226 [2024-04-16 12:49:33.223945] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:34.226 [2024-04-16 12:49:33.233141] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:34.226 [2024-04-16 12:49:33.233571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:34.226 [2024-04-16 12:49:33.233729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:34.226 [2024-04-16 12:49:33.233758] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:34.226 [2024-04-16 12:49:33.233777] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:34.226 [2024-04-16 12:49:33.234014] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:34.226 [2024-04-16 12:49:33.234256] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:34.226 [2024-04-16 12:49:33.234281] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:34.226 [2024-04-16 12:49:33.234297] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:34.226 [2024-04-16 12:49:33.237850] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:34.226 [2024-04-16 12:49:33.247055] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:34.226 [2024-04-16 12:49:33.247531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:34.226 [2024-04-16 12:49:33.247785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:34.226 [2024-04-16 12:49:33.247815] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:34.226 [2024-04-16 12:49:33.247839] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:34.226 [2024-04-16 12:49:33.248077] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:34.226 [2024-04-16 12:49:33.248320] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:34.226 [2024-04-16 12:49:33.248345] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:34.226 [2024-04-16 12:49:33.248361] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:34.226 [2024-04-16 12:49:33.251915] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:34.226 [2024-04-16 12:49:33.260907] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:34.226 [2024-04-16 12:49:33.261395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:34.226 [2024-04-16 12:49:33.261685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:34.226 [2024-04-16 12:49:33.261714] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:34.226 [2024-04-16 12:49:33.261732] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:34.226 [2024-04-16 12:49:33.261970] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:34.226 [2024-04-16 12:49:33.262212] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:34.226 [2024-04-16 12:49:33.262237] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:34.226 [2024-04-16 12:49:33.262253] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:34.226 [2024-04-16 12:49:33.265807] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:34.226 [2024-04-16 12:49:33.274831] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:34.226 [2024-04-16 12:49:33.275308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:34.226 [2024-04-16 12:49:33.275594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:34.226 [2024-04-16 12:49:33.275633] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420
00:21:34.226 [2024-04-16 12:49:33.275651] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set
00:21:34.226 [2024-04-16 12:49:33.275888] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor
00:21:34.226 [2024-04-16 12:49:33.276132] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:34.226 [2024-04-16 12:49:33.276158] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:34.226 [2024-04-16 12:49:33.276173] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:34.226 [2024-04-16 12:49:33.279724] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:34.493 [2024-04-16 12:49:33.288730] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:34.493 [2024-04-16 12:49:33.289166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:34.493 [2024-04-16 12:49:33.289423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:34.493 [2024-04-16 12:49:33.289475] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420
00:21:34.493 [2024-04-16 12:49:33.289495] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set
00:21:34.493 [2024-04-16 12:49:33.289749] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor
00:21:34.493 [2024-04-16 12:49:33.289992] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:34.494 [2024-04-16 12:49:33.290018] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:34.494 [2024-04-16 12:49:33.290033] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:34.494 [2024-04-16 12:49:33.293586] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:34.494 [2024-04-16 12:49:33.302580] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:34.494 [2024-04-16 12:49:33.303002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:34.494 [2024-04-16 12:49:33.303237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:34.494 [2024-04-16 12:49:33.303290] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420
00:21:34.494 [2024-04-16 12:49:33.303308] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set
00:21:34.494 [2024-04-16 12:49:33.303546] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor
00:21:34.494 [2024-04-16 12:49:33.303797] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:34.494 [2024-04-16 12:49:33.303822] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:34.494 [2024-04-16 12:49:33.303839] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:34.494 [2024-04-16 12:49:33.307385] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:34.494 [2024-04-16 12:49:33.316386] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:34.494 [2024-04-16 12:49:33.316819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:34.494 [2024-04-16 12:49:33.317003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:34.494 [2024-04-16 12:49:33.317052] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420
00:21:34.494 [2024-04-16 12:49:33.317071] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set
00:21:34.494 [2024-04-16 12:49:33.317308] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor
00:21:34.494 [2024-04-16 12:49:33.317550] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:34.494 [2024-04-16 12:49:33.317585] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:34.494 [2024-04-16 12:49:33.317603] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:34.494 [2024-04-16 12:49:33.321150] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:34.494 [2024-04-16 12:49:33.330364] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:34.494 [2024-04-16 12:49:33.330890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:34.494 [2024-04-16 12:49:33.331159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:34.494 [2024-04-16 12:49:33.331211] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420
00:21:34.494 [2024-04-16 12:49:33.331229] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set
00:21:34.494 [2024-04-16 12:49:33.331467] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor
00:21:34.494 [2024-04-16 12:49:33.331730] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:34.494 [2024-04-16 12:49:33.331755] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:34.494 [2024-04-16 12:49:33.331771] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:34.494 [2024-04-16 12:49:33.335320] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:34.494 [2024-04-16 12:49:33.344355] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:34.494 [2024-04-16 12:49:33.344871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:34.494 [2024-04-16 12:49:33.345184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:34.494 [2024-04-16 12:49:33.345236] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420
00:21:34.494 [2024-04-16 12:49:33.345255] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set
00:21:34.494 [2024-04-16 12:49:33.345493] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor
00:21:34.494 [2024-04-16 12:49:33.345751] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:34.494 [2024-04-16 12:49:33.345777] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:34.494 [2024-04-16 12:49:33.345792] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:34.494 [2024-04-16 12:49:33.349338] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:34.494 [2024-04-16 12:49:33.358367] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:34.494 [2024-04-16 12:49:33.358876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:34.494 [2024-04-16 12:49:33.359156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:34.494 [2024-04-16 12:49:33.359204] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420
00:21:34.494 [2024-04-16 12:49:33.359222] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set
00:21:34.494 [2024-04-16 12:49:33.359459] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor
00:21:34.494 [2024-04-16 12:49:33.359712] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:34.494 [2024-04-16 12:49:33.359738] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:34.494 [2024-04-16 12:49:33.359753] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:34.494 [2024-04-16 12:49:33.363296] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:34.494 [2024-04-16 12:49:33.372304] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:34.494 [2024-04-16 12:49:33.372764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:34.494 [2024-04-16 12:49:33.373060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:34.494 [2024-04-16 12:49:33.373110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420
00:21:34.494 [2024-04-16 12:49:33.373127] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set
00:21:34.494 [2024-04-16 12:49:33.373365] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor
00:21:34.494 [2024-04-16 12:49:33.373619] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:34.494 [2024-04-16 12:49:33.373650] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:34.494 [2024-04-16 12:49:33.373667] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:34.494 [2024-04-16 12:49:33.377210] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:34.494 [2024-04-16 12:49:33.386221] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:34.494 [2024-04-16 12:49:33.386648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:34.494 [2024-04-16 12:49:33.386852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:34.494 [2024-04-16 12:49:33.386904] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420
00:21:34.494 [2024-04-16 12:49:33.386921] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set
00:21:34.494 [2024-04-16 12:49:33.387159] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor
00:21:34.494 [2024-04-16 12:49:33.387402] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:34.494 [2024-04-16 12:49:33.387426] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:34.494 [2024-04-16 12:49:33.387441] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:34.494 [2024-04-16 12:49:33.390996] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:34.494 [2024-04-16 12:49:33.400210] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:34.494 [2024-04-16 12:49:33.400685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:34.494 [2024-04-16 12:49:33.400973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:34.494 [2024-04-16 12:49:33.401021] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420
00:21:34.494 [2024-04-16 12:49:33.401039] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set
00:21:34.494 [2024-04-16 12:49:33.401277] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor
00:21:34.494 [2024-04-16 12:49:33.401518] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:34.494 [2024-04-16 12:49:33.401543] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:34.494 [2024-04-16 12:49:33.401559] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:34.494 [2024-04-16 12:49:33.405112] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:34.494 [2024-04-16 12:49:33.414106] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:34.494 [2024-04-16 12:49:33.414633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:34.494 [2024-04-16 12:49:33.414935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:34.494 [2024-04-16 12:49:33.414984] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420
00:21:34.494 [2024-04-16 12:49:33.415002] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set
00:21:34.495 [2024-04-16 12:49:33.415249] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor
00:21:34.495 [2024-04-16 12:49:33.415491] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:34.495 [2024-04-16 12:49:33.415516] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:34.495 [2024-04-16 12:49:33.415538] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:34.495 [2024-04-16 12:49:33.419103] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:34.495 [2024-04-16 12:49:33.428108] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:34.495 [2024-04-16 12:49:33.428651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:34.495 [2024-04-16 12:49:33.428910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:34.495 [2024-04-16 12:49:33.428957] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420
00:21:34.495 [2024-04-16 12:49:33.428975] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set
00:21:34.495 [2024-04-16 12:49:33.429213] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor
00:21:34.495 [2024-04-16 12:49:33.429454] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:34.495 [2024-04-16 12:49:33.429479] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:34.495 [2024-04-16 12:49:33.429495] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:34.495 [2024-04-16 12:49:33.433055] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:34.495 [2024-04-16 12:49:33.442057] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:34.495 [2024-04-16 12:49:33.442585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:34.495 [2024-04-16 12:49:33.442842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:34.495 [2024-04-16 12:49:33.442886] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420
00:21:34.495 [2024-04-16 12:49:33.442905] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set
00:21:34.495 [2024-04-16 12:49:33.443143] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor
00:21:34.495 [2024-04-16 12:49:33.443386] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:34.495 [2024-04-16 12:49:33.443411] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:34.495 [2024-04-16 12:49:33.443426] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:34.495 [2024-04-16 12:49:33.446983] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:34.495 [2024-04-16 12:49:33.455977] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:34.495 [2024-04-16 12:49:33.456483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:34.495 [2024-04-16 12:49:33.456690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:34.495 [2024-04-16 12:49:33.456720] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420
00:21:34.495 [2024-04-16 12:49:33.456738] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set
00:21:34.495 [2024-04-16 12:49:33.456976] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor
00:21:34.495 [2024-04-16 12:49:33.457218] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:34.495 [2024-04-16 12:49:33.457243] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:34.495 [2024-04-16 12:49:33.457259] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:34.495 [2024-04-16 12:49:33.460816] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:34.495 [2024-04-16 12:49:33.469818] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:34.495 [2024-04-16 12:49:33.470293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:34.495 [2024-04-16 12:49:33.470536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:34.495 [2024-04-16 12:49:33.470583] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420
00:21:34.495 [2024-04-16 12:49:33.470604] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set
00:21:34.495 [2024-04-16 12:49:33.470844] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor
00:21:34.495 [2024-04-16 12:49:33.471087] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:34.495 [2024-04-16 12:49:33.471112] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:34.495 [2024-04-16 12:49:33.471128] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:34.495 [2024-04-16 12:49:33.474680] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:34.495 [2024-04-16 12:49:33.483669] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:34.495 [2024-04-16 12:49:33.484171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:34.495 [2024-04-16 12:49:33.484461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:34.495 [2024-04-16 12:49:33.484500] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420
00:21:34.495 [2024-04-16 12:49:33.484518] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set
00:21:34.495 [2024-04-16 12:49:33.484765] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor
00:21:34.495 [2024-04-16 12:49:33.485007] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:34.495 [2024-04-16 12:49:33.485032] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:34.495 [2024-04-16 12:49:33.485047] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:34.495 [2024-04-16 12:49:33.488601] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:34.495 [2024-04-16 12:49:33.497592] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:34.495 [2024-04-16 12:49:33.498153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:34.495 [2024-04-16 12:49:33.498427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:34.495 [2024-04-16 12:49:33.498468] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420
00:21:34.495 [2024-04-16 12:49:33.498486] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set
00:21:34.495 [2024-04-16 12:49:33.498744] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor
00:21:34.495 [2024-04-16 12:49:33.498988] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:34.495 [2024-04-16 12:49:33.499014] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:34.495 [2024-04-16 12:49:33.499030] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:34.495 [2024-04-16 12:49:33.502588] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:34.495 [2024-04-16 12:49:33.511591] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:34.495 [2024-04-16 12:49:33.512097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:34.495 [2024-04-16 12:49:33.512410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:34.495 [2024-04-16 12:49:33.512440] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420
00:21:34.495 [2024-04-16 12:49:33.512458] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set
00:21:34.495 [2024-04-16 12:49:33.512718] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor
00:21:34.495 [2024-04-16 12:49:33.512960] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:34.495 [2024-04-16 12:49:33.512986] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:34.495 [2024-04-16 12:49:33.513003] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:34.495 [2024-04-16 12:49:33.516549] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:34.495 [2024-04-16 12:49:33.525548] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:34.495 [2024-04-16 12:49:33.526043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:34.495 [2024-04-16 12:49:33.526339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:34.495 [2024-04-16 12:49:33.526368] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420
00:21:34.495 [2024-04-16 12:49:33.526387] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set
00:21:34.495 [2024-04-16 12:49:33.526636] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor
00:21:34.495 [2024-04-16 12:49:33.526879] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:34.495 [2024-04-16 12:49:33.526904] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:34.495 [2024-04-16 12:49:33.526920] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:34.495 [2024-04-16 12:49:33.530494] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:34.495 [2024-04-16 12:49:33.539484] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:34.496 [2024-04-16 12:49:33.539991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:34.496 [2024-04-16 12:49:33.540287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:34.496 [2024-04-16 12:49:33.540316] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420
00:21:34.496 [2024-04-16 12:49:33.540333] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set
00:21:34.496 [2024-04-16 12:49:33.540582] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor
00:21:34.496 [2024-04-16 12:49:33.540824] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:34.496 [2024-04-16 12:49:33.540849] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:34.496 [2024-04-16 12:49:33.540865] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:34.496 [2024-04-16 12:49:33.544412] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:34.496 [2024-04-16 12:49:33.553408] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:34.496 [2024-04-16 12:49:33.553902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:34.496 [2024-04-16 12:49:33.554084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:34.496 [2024-04-16 12:49:33.554112] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420
00:21:34.496 [2024-04-16 12:49:33.554130] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set
00:21:34.496 [2024-04-16 12:49:33.554367] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor
00:21:34.496 [2024-04-16 12:49:33.554621] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:34.496 [2024-04-16 12:49:33.554646] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:34.496 [2024-04-16 12:49:33.554661] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:34.496 [2024-04-16 12:49:33.558205] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:34.754 [2024-04-16 12:49:33.567408] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:34.754 [2024-04-16 12:49:33.567901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:34.754 [2024-04-16 12:49:33.568076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:34.754 [2024-04-16 12:49:33.568105] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420
00:21:34.754 [2024-04-16 12:49:33.568123] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set
00:21:34.754 [2024-04-16 12:49:33.568360] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor
00:21:34.754 [2024-04-16 12:49:33.568614] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:34.754 [2024-04-16 12:49:33.568640] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:34.754 [2024-04-16 12:49:33.568657] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:34.755 [2024-04-16 12:49:33.572199] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:34.755 [2024-04-16 12:49:33.581402] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:34.755 [2024-04-16 12:49:33.581894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:34.755 [2024-04-16 12:49:33.582128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:34.755 [2024-04-16 12:49:33.582158] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420
00:21:34.755 [2024-04-16 12:49:33.582176] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set
00:21:34.755 [2024-04-16 12:49:33.582414] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor
00:21:34.755 [2024-04-16 12:49:33.582672] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:34.755 [2024-04-16 12:49:33.582698] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:34.755 [2024-04-16 12:49:33.582715] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:34.755 [2024-04-16 12:49:33.586260] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:34.755 [2024-04-16 12:49:33.595282] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:34.755 [2024-04-16 12:49:33.595853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:34.755 [2024-04-16 12:49:33.596135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:34.755 [2024-04-16 12:49:33.596174] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420
00:21:34.755 [2024-04-16 12:49:33.596194] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set
00:21:34.755 [2024-04-16 12:49:33.596439] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor
00:21:34.755 [2024-04-16 12:49:33.596697] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:34.755 [2024-04-16 12:49:33.596723] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:34.755 [2024-04-16 12:49:33.596739] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:34.755 [2024-04-16 12:49:33.600292] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:34.755 [2024-04-16 12:49:33.609286] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:34.755 [2024-04-16 12:49:33.609788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:34.755 [2024-04-16 12:49:33.610051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:34.755 [2024-04-16 12:49:33.610082] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420
00:21:34.755 [2024-04-16 12:49:33.610100] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set
00:21:34.755 [2024-04-16 12:49:33.610340] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor
00:21:34.755 [2024-04-16 12:49:33.610594] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:34.755 [2024-04-16 12:49:33.610620] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:34.755 [2024-04-16 12:49:33.610637] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:34.755 [2024-04-16 12:49:33.614181] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:34.755 [2024-04-16 12:49:33.623172] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:34.755 [2024-04-16 12:49:33.623712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:34.755 [2024-04-16 12:49:33.623993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:34.755 [2024-04-16 12:49:33.624022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420
00:21:34.755 [2024-04-16 12:49:33.624041] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set
00:21:34.755 [2024-04-16 12:49:33.624279] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor
00:21:34.755 [2024-04-16 12:49:33.624521] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:34.755 [2024-04-16 12:49:33.624546] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:34.755 [2024-04-16 12:49:33.624561] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:34.755 [2024-04-16 12:49:33.628118] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:34.755 [2024-04-16 12:49:33.637105] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:34.755 [2024-04-16 12:49:33.637590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:34.755 [2024-04-16 12:49:33.637870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:34.755 [2024-04-16 12:49:33.637899] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420
00:21:34.755 [2024-04-16 12:49:33.637923] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set
00:21:34.755 [2024-04-16 12:49:33.638162] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor
00:21:34.755 [2024-04-16 12:49:33.638405] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:34.755 [2024-04-16 12:49:33.638430] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:34.755 [2024-04-16 12:49:33.638446] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:34.755 [2024-04-16 12:49:33.642001] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:34.755 [2024-04-16 12:49:33.650988] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:34.755 [2024-04-16 12:49:33.651518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:34.755 [2024-04-16 12:49:33.651755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:34.755 [2024-04-16 12:49:33.651785] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420
00:21:34.755 [2024-04-16 12:49:33.651804] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set
00:21:34.755 [2024-04-16 12:49:33.652043] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor
00:21:34.755 [2024-04-16 12:49:33.652284] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:34.755 [2024-04-16 12:49:33.652310] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:34.755 [2024-04-16 12:49:33.652326] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:34.755 [2024-04-16 12:49:33.655879] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:34.755 [2024-04-16 12:49:33.664877] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:34.755 [2024-04-16 12:49:33.665438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:34.755 [2024-04-16 12:49:33.665753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:34.755 [2024-04-16 12:49:33.665786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420
00:21:34.755 [2024-04-16 12:49:33.665805] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set
00:21:34.755 [2024-04-16 12:49:33.666050] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor
00:21:34.755 [2024-04-16 12:49:33.666293] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:34.755 [2024-04-16 12:49:33.666319] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:34.755 [2024-04-16 12:49:33.666335] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:34.755 [2024-04-16 12:49:33.669899] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:34.755 [2024-04-16 12:49:33.678693] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:34.755 [2024-04-16 12:49:33.679198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:34.755 [2024-04-16 12:49:33.679461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:34.755 [2024-04-16 12:49:33.679491] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420
00:21:34.755 [2024-04-16 12:49:33.679510] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set
00:21:34.755 [2024-04-16 12:49:33.679768] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor
00:21:34.755 [2024-04-16 12:49:33.680011] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:34.755 [2024-04-16 12:49:33.680037] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:34.755 [2024-04-16 12:49:33.680053] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:34.755 [2024-04-16 12:49:33.683607] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:34.755 [2024-04-16 12:49:33.692596] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:34.755 [2024-04-16 12:49:33.693090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:34.755 [2024-04-16 12:49:33.693304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:34.755 [2024-04-16 12:49:33.693333] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420
00:21:34.755 [2024-04-16 12:49:33.693352] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set
00:21:34.755 [2024-04-16 12:49:33.693601] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor
00:21:34.755 [2024-04-16 12:49:33.693844] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:34.755 [2024-04-16 12:49:33.693869] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:34.755 [2024-04-16 12:49:33.693885] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:34.755 [2024-04-16 12:49:33.697428] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:34.755 [2024-04-16 12:49:33.706416] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:34.755 [2024-04-16 12:49:33.706907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:34.755 [2024-04-16 12:49:33.707096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:34.755 [2024-04-16 12:49:33.707125] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420
00:21:34.755 [2024-04-16 12:49:33.707144] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set
00:21:34.755 [2024-04-16 12:49:33.707382] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor
00:21:34.755 [2024-04-16 12:49:33.707637] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:34.755 [2024-04-16 12:49:33.707663] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:34.755 [2024-04-16 12:49:33.707679] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:34.755 [2024-04-16 12:49:33.711221] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:34.755 [2024-04-16 12:49:33.720419] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:34.755 [2024-04-16 12:49:33.720950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:34.755 [2024-04-16 12:49:33.721269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:34.755 [2024-04-16 12:49:33.721299] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420
00:21:34.755 [2024-04-16 12:49:33.721317] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set
00:21:34.755 [2024-04-16 12:49:33.721556] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor
00:21:34.755 [2024-04-16 12:49:33.721815] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:34.755 [2024-04-16 12:49:33.721841] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:34.755 [2024-04-16 12:49:33.721857] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:34.755 [2024-04-16 12:49:33.725401] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:34.755 [2024-04-16 12:49:33.734396] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:34.755 [2024-04-16 12:49:33.734911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:34.755 [2024-04-16 12:49:33.735168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:34.755 [2024-04-16 12:49:33.735197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420
00:21:34.755 [2024-04-16 12:49:33.735214] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set
00:21:34.755 [2024-04-16 12:49:33.735452] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor
00:21:34.755 [2024-04-16 12:49:33.735708] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:34.755 [2024-04-16 12:49:33.735734] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:34.755 [2024-04-16 12:49:33.735750] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:34.755 [2024-04-16 12:49:33.739293] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:34.755 [2024-04-16 12:49:33.748279] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:34.755 [2024-04-16 12:49:33.748801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:34.755 [2024-04-16 12:49:33.749062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:34.755 [2024-04-16 12:49:33.749091] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420
00:21:34.755 [2024-04-16 12:49:33.749109] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set
00:21:34.755 [2024-04-16 12:49:33.749348] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor
00:21:34.755 [2024-04-16 12:49:33.749602] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:34.755 [2024-04-16 12:49:33.749628] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:34.755 [2024-04-16 12:49:33.749644] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:34.755 [2024-04-16 12:49:33.753184] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:34.755 [2024-04-16 12:49:33.762189] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:34.755 [2024-04-16 12:49:33.762675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:34.755 [2024-04-16 12:49:33.762907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:34.755 [2024-04-16 12:49:33.762936] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420
00:21:34.755 [2024-04-16 12:49:33.762953] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set
00:21:34.755 [2024-04-16 12:49:33.763191] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor
00:21:34.755 [2024-04-16 12:49:33.763432] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:34.755 [2024-04-16 12:49:33.763463] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:34.755 [2024-04-16 12:49:33.763480] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:34.755 [2024-04-16 12:49:33.767036] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:34.755 [2024-04-16 12:49:33.776033] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:34.755 [2024-04-16 12:49:33.776513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:34.755 [2024-04-16 12:49:33.776754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:34.755 [2024-04-16 12:49:33.776785] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420
00:21:34.755 [2024-04-16 12:49:33.776803] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set
00:21:34.755 [2024-04-16 12:49:33.777041] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor
00:21:34.755 [2024-04-16 12:49:33.777285] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:34.755 [2024-04-16 12:49:33.777310] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:34.755 [2024-04-16 12:49:33.777327] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:34.755 [2024-04-16 12:49:33.780879] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:34.755 [2024-04-16 12:49:33.789873] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:34.755 [2024-04-16 12:49:33.790350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:34.755 [2024-04-16 12:49:33.790504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:34.755 [2024-04-16 12:49:33.790531] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420
00:21:34.755 [2024-04-16 12:49:33.790548] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set
00:21:34.755 [2024-04-16 12:49:33.790795] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor
00:21:34.755 [2024-04-16 12:49:33.791039] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:34.755 [2024-04-16 12:49:33.791065] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:34.755 [2024-04-16 12:49:33.791081] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:34.755 [2024-04-16 12:49:33.794635] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:34.755 [2024-04-16 12:49:33.803833] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:34.755 [2024-04-16 12:49:33.804350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:34.755 [2024-04-16 12:49:33.804586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:34.755 [2024-04-16 12:49:33.804617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420
00:21:34.755 [2024-04-16 12:49:33.804635] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set
00:21:34.755 [2024-04-16 12:49:33.804874] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor
00:21:34.755 [2024-04-16 12:49:33.805118] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:34.755 [2024-04-16 12:49:33.805144] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:34.755 [2024-04-16 12:49:33.805166] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:34.755 [2024-04-16 12:49:33.808720] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:34.755 [2024-04-16 12:49:33.817711] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:34.755 [2024-04-16 12:49:33.818219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:34.755 [2024-04-16 12:49:33.818476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:34.755 [2024-04-16 12:49:33.818505] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420
00:21:34.756 [2024-04-16 12:49:33.818523] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set
00:21:34.756 [2024-04-16 12:49:33.818769] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor
00:21:34.756 [2024-04-16 12:49:33.819012] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:34.756 [2024-04-16 12:49:33.819037] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:34.756 [2024-04-16 12:49:33.819052] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:35.014 [2024-04-16 12:49:33.822606] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:35.014 [2024-04-16 12:49:33.831601] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:35.014 [2024-04-16 12:49:33.832085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:35.014 [2024-04-16 12:49:33.832365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:35.014 [2024-04-16 12:49:33.832394] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:35.014 [2024-04-16 12:49:33.832412] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:35.014 [2024-04-16 12:49:33.832661] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:35.014 [2024-04-16 12:49:33.832905] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:35.014 [2024-04-16 12:49:33.832931] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:35.014 [2024-04-16 12:49:33.832947] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:35.014 [2024-04-16 12:49:33.836491] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:35.014 [2024-04-16 12:49:33.845483] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:35.014 [2024-04-16 12:49:33.846017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:35.014 [2024-04-16 12:49:33.846288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:35.014 [2024-04-16 12:49:33.846317] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:35.014 [2024-04-16 12:49:33.846345] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:35.014 [2024-04-16 12:49:33.846595] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:35.014 [2024-04-16 12:49:33.846839] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:35.014 [2024-04-16 12:49:33.846864] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:35.014 [2024-04-16 12:49:33.846881] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:35.014 [2024-04-16 12:49:33.850431] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:35.014 [2024-04-16 12:49:33.859419] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:35.014 [2024-04-16 12:49:33.859939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:35.014 [2024-04-16 12:49:33.860247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:35.014 [2024-04-16 12:49:33.860276] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:35.014 [2024-04-16 12:49:33.860294] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:35.014 [2024-04-16 12:49:33.860533] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:35.014 [2024-04-16 12:49:33.860785] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:35.014 [2024-04-16 12:49:33.860811] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:35.014 [2024-04-16 12:49:33.860828] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:35.014 [2024-04-16 12:49:33.864372] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:35.014 [2024-04-16 12:49:33.873369] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:35.014 [2024-04-16 12:49:33.873936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:35.014 [2024-04-16 12:49:33.874175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:35.014 [2024-04-16 12:49:33.874208] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:35.014 [2024-04-16 12:49:33.874237] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:35.014 [2024-04-16 12:49:33.874482] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:35.014 [2024-04-16 12:49:33.874741] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:35.014 [2024-04-16 12:49:33.874768] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:35.014 [2024-04-16 12:49:33.874785] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:35.014 [2024-04-16 12:49:33.878332] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:35.014 [2024-04-16 12:49:33.887325] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:35.014 [2024-04-16 12:49:33.887795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:35.014 [2024-04-16 12:49:33.887998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:35.014 [2024-04-16 12:49:33.888027] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:35.014 [2024-04-16 12:49:33.888045] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:35.014 [2024-04-16 12:49:33.888283] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:35.014 [2024-04-16 12:49:33.888525] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:35.014 [2024-04-16 12:49:33.888550] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:35.014 [2024-04-16 12:49:33.888578] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:35.014 [2024-04-16 12:49:33.892139] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:35.014 [2024-04-16 12:49:33.901142] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:35.014 [2024-04-16 12:49:33.901666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:35.014 [2024-04-16 12:49:33.901898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:35.015 [2024-04-16 12:49:33.901927] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:35.015 [2024-04-16 12:49:33.901945] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:35.015 [2024-04-16 12:49:33.902183] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:35.015 [2024-04-16 12:49:33.902424] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:35.015 [2024-04-16 12:49:33.902449] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:35.015 [2024-04-16 12:49:33.902465] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:35.015 [2024-04-16 12:49:33.906017] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:35.015 [2024-04-16 12:49:33.915031] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:35.015 [2024-04-16 12:49:33.915542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:35.015 [2024-04-16 12:49:33.915741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:35.015 [2024-04-16 12:49:33.915780] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:35.015 [2024-04-16 12:49:33.915798] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:35.015 [2024-04-16 12:49:33.916037] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:35.015 [2024-04-16 12:49:33.916278] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:35.015 [2024-04-16 12:49:33.916303] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:35.015 [2024-04-16 12:49:33.916319] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:35.015 [2024-04-16 12:49:33.919873] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:35.015 [2024-04-16 12:49:33.928874] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:35.015 [2024-04-16 12:49:33.929383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:35.015 [2024-04-16 12:49:33.929622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:35.015 [2024-04-16 12:49:33.929652] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:35.015 [2024-04-16 12:49:33.929671] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:35.015 [2024-04-16 12:49:33.929908] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:35.015 [2024-04-16 12:49:33.930151] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:35.015 [2024-04-16 12:49:33.930176] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:35.015 [2024-04-16 12:49:33.930192] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:35.015 [2024-04-16 12:49:33.933746] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:35.015 [2024-04-16 12:49:33.942743] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:35.015 [2024-04-16 12:49:33.943247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:35.015 [2024-04-16 12:49:33.943509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:35.015 [2024-04-16 12:49:33.943538] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:35.015 [2024-04-16 12:49:33.943556] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:35.015 [2024-04-16 12:49:33.943804] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:35.015 [2024-04-16 12:49:33.944047] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:35.015 [2024-04-16 12:49:33.944073] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:35.015 [2024-04-16 12:49:33.944089] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:35.015 [2024-04-16 12:49:33.947644] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:35.015 [2024-04-16 12:49:33.956640] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:35.015 [2024-04-16 12:49:33.957134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:35.015 [2024-04-16 12:49:33.957339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:35.015 [2024-04-16 12:49:33.957379] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:35.015 [2024-04-16 12:49:33.957397] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:35.015 [2024-04-16 12:49:33.957644] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:35.015 [2024-04-16 12:49:33.957886] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:35.015 [2024-04-16 12:49:33.957911] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:35.015 [2024-04-16 12:49:33.957927] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:35.015 [2024-04-16 12:49:33.961474] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:35.015 [2024-04-16 12:49:33.970477] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:35.015 [2024-04-16 12:49:33.970979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:35.015 [2024-04-16 12:49:33.971171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:35.015 [2024-04-16 12:49:33.971200] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:35.015 [2024-04-16 12:49:33.971218] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:35.015 [2024-04-16 12:49:33.971456] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:35.015 [2024-04-16 12:49:33.971708] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:35.015 [2024-04-16 12:49:33.971733] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:35.015 [2024-04-16 12:49:33.971748] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:35.015 [2024-04-16 12:49:33.975296] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:35.015 [2024-04-16 12:49:33.984299] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:35.015 [2024-04-16 12:49:33.984736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:35.015 [2024-04-16 12:49:33.984941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:35.015 [2024-04-16 12:49:33.984970] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:35.015 [2024-04-16 12:49:33.984998] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:35.015 [2024-04-16 12:49:33.985236] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:35.015 [2024-04-16 12:49:33.985479] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:35.015 [2024-04-16 12:49:33.985504] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:35.015 [2024-04-16 12:49:33.985519] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:35.015 [2024-04-16 12:49:33.989074] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:35.015 [2024-04-16 12:49:33.998305] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:35.015 [2024-04-16 12:49:33.998741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:35.015 [2024-04-16 12:49:33.998929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:35.015 [2024-04-16 12:49:33.998958] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:35.015 [2024-04-16 12:49:33.998976] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:35.015 [2024-04-16 12:49:33.999214] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:35.015 [2024-04-16 12:49:33.999456] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:35.015 [2024-04-16 12:49:33.999481] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:35.015 [2024-04-16 12:49:33.999496] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:35.015 [2024-04-16 12:49:34.003054] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:35.015 [2024-04-16 12:49:34.012276] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:35.015 [2024-04-16 12:49:34.012685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:35.015 [2024-04-16 12:49:34.012845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:35.015 [2024-04-16 12:49:34.012875] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:35.015 [2024-04-16 12:49:34.012894] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:35.015 [2024-04-16 12:49:34.013131] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:35.015 [2024-04-16 12:49:34.013373] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:35.015 [2024-04-16 12:49:34.013398] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:35.015 [2024-04-16 12:49:34.013414] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:35.015 [2024-04-16 12:49:34.016979] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:35.015 [2024-04-16 12:49:34.026197] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:35.015 [2024-04-16 12:49:34.026624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:35.015 [2024-04-16 12:49:34.026762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:35.015 [2024-04-16 12:49:34.026791] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:35.015 [2024-04-16 12:49:34.026809] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:35.015 [2024-04-16 12:49:34.027056] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:35.016 [2024-04-16 12:49:34.027299] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:35.016 [2024-04-16 12:49:34.027328] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:35.016 [2024-04-16 12:49:34.027343] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:35.016 [2024-04-16 12:49:34.030901] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:35.016 [2024-04-16 12:49:34.040111] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:35.016 [2024-04-16 12:49:34.040522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:35.016 [2024-04-16 12:49:34.040751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:35.016 [2024-04-16 12:49:34.040783] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:35.016 [2024-04-16 12:49:34.040801] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:35.016 [2024-04-16 12:49:34.041040] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:35.016 [2024-04-16 12:49:34.041282] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:35.016 [2024-04-16 12:49:34.041306] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:35.016 [2024-04-16 12:49:34.041322] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:35.016 [2024-04-16 12:49:34.044879] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:35.016 [2024-04-16 12:49:34.054096] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:35.016 [2024-04-16 12:49:34.054558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:35.016 [2024-04-16 12:49:34.054730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:35.016 [2024-04-16 12:49:34.054759] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:35.016 [2024-04-16 12:49:34.054777] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:35.016 [2024-04-16 12:49:34.055015] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:35.016 [2024-04-16 12:49:34.055257] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:35.016 [2024-04-16 12:49:34.055282] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:35.016 [2024-04-16 12:49:34.055298] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:35.016 [2024-04-16 12:49:34.058859] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:35.016 [2024-04-16 12:49:34.068084] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:35.016 [2024-04-16 12:49:34.068552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:35.016 [2024-04-16 12:49:34.068740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:35.016 [2024-04-16 12:49:34.068770] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:35.016 [2024-04-16 12:49:34.068788] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:35.016 [2024-04-16 12:49:34.069025] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:35.016 [2024-04-16 12:49:34.069285] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:35.016 [2024-04-16 12:49:34.069309] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:35.016 [2024-04-16 12:49:34.069325] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:35.016 [2024-04-16 12:49:34.072881] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:35.275 [2024-04-16 12:49:34.082085] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:35.275 [2024-04-16 12:49:34.082590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:35.275 [2024-04-16 12:49:34.082739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:35.275 [2024-04-16 12:49:34.082769] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:35.275 [2024-04-16 12:49:34.082787] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:35.275 [2024-04-16 12:49:34.083024] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:35.275 [2024-04-16 12:49:34.083266] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:35.275 [2024-04-16 12:49:34.083291] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:35.275 [2024-04-16 12:49:34.083307] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:35.275 [2024-04-16 12:49:34.086871] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:35.275 [2024-04-16 12:49:34.096080] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:35.275 [2024-04-16 12:49:34.096503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:35.275 [2024-04-16 12:49:34.096682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:35.275 [2024-04-16 12:49:34.096712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:35.275 [2024-04-16 12:49:34.096730] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:35.275 [2024-04-16 12:49:34.096966] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:35.275 [2024-04-16 12:49:34.097209] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:35.275 [2024-04-16 12:49:34.097233] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:35.275 [2024-04-16 12:49:34.097249] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:35.275 [2024-04-16 12:49:34.100806] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:35.275 [2024-04-16 12:49:34.110032] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:35.275 [2024-04-16 12:49:34.110512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:35.275 [2024-04-16 12:49:34.110725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:35.275 [2024-04-16 12:49:34.110755] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:35.275 [2024-04-16 12:49:34.110774] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:35.275 [2024-04-16 12:49:34.111013] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:35.275 [2024-04-16 12:49:34.111256] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:35.275 [2024-04-16 12:49:34.111287] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:35.275 [2024-04-16 12:49:34.111303] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:35.275 [2024-04-16 12:49:34.114868] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:35.275 [2024-04-16 12:49:34.123864] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:35.275 [2024-04-16 12:49:34.124335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:35.275 [2024-04-16 12:49:34.124620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:35.275 [2024-04-16 12:49:34.124650] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:35.275 [2024-04-16 12:49:34.124667] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:35.275 [2024-04-16 12:49:34.124905] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:35.275 [2024-04-16 12:49:34.125147] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:35.275 [2024-04-16 12:49:34.125172] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:35.275 [2024-04-16 12:49:34.125187] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:35.275 [2024-04-16 12:49:34.128740] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:35.275 [2024-04-16 12:49:34.137731] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:35.275 [2024-04-16 12:49:34.138201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:35.275 [2024-04-16 12:49:34.138404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:35.275 [2024-04-16 12:49:34.138432] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:35.275 [2024-04-16 12:49:34.138451] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:35.275 [2024-04-16 12:49:34.138697] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:35.275 [2024-04-16 12:49:34.138940] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:35.275 [2024-04-16 12:49:34.138966] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:35.275 [2024-04-16 12:49:34.138982] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:35.275 [2024-04-16 12:49:34.142528] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:35.275 [2024-04-16 12:49:34.151742] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:35.275 [2024-04-16 12:49:34.152230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:35.275 [2024-04-16 12:49:34.152410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:35.275 [2024-04-16 12:49:34.152439] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:35.275 [2024-04-16 12:49:34.152457] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:35.275 [2024-04-16 12:49:34.152706] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:35.275 [2024-04-16 12:49:34.152949] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:35.275 [2024-04-16 12:49:34.152974] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:35.275 [2024-04-16 12:49:34.152996] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:35.275 [2024-04-16 12:49:34.156541] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:35.275 [2024-04-16 12:49:34.165745] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:35.275 [2024-04-16 12:49:34.166248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:35.275 [2024-04-16 12:49:34.166456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:35.275 [2024-04-16 12:49:34.166486] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:35.275 [2024-04-16 12:49:34.166504] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:35.275 [2024-04-16 12:49:34.166752] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:35.275 [2024-04-16 12:49:34.166996] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:35.275 [2024-04-16 12:49:34.167021] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:35.275 [2024-04-16 12:49:34.167037] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:35.275 [2024-04-16 12:49:34.170628] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:35.275 [2024-04-16 12:49:34.179617] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:35.275 [2024-04-16 12:49:34.180068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:35.275 [2024-04-16 12:49:34.180274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:35.275 [2024-04-16 12:49:34.180303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:35.275 [2024-04-16 12:49:34.180321] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:35.275 [2024-04-16 12:49:34.180559] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:35.275 [2024-04-16 12:49:34.180814] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:35.275 [2024-04-16 12:49:34.180840] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:35.275 [2024-04-16 12:49:34.180856] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:35.275 [2024-04-16 12:49:34.184399] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:35.275 [2024-04-16 12:49:34.193606] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:35.275 [2024-04-16 12:49:34.194082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:35.275 [2024-04-16 12:49:34.194289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:35.275 [2024-04-16 12:49:34.194319] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:35.275 [2024-04-16 12:49:34.194337] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:35.275 [2024-04-16 12:49:34.194586] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:35.275 [2024-04-16 12:49:34.194829] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:35.276 [2024-04-16 12:49:34.194854] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:35.276 [2024-04-16 12:49:34.194870] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:35.276 [2024-04-16 12:49:34.198417] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:35.276 [2024-04-16 12:49:34.207408] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:35.276 [2024-04-16 12:49:34.207929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:35.276 [2024-04-16 12:49:34.208147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:35.276 [2024-04-16 12:49:34.208183] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:35.276 [2024-04-16 12:49:34.208201] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:35.276 [2024-04-16 12:49:34.208438] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:35.276 [2024-04-16 12:49:34.208691] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:35.276 [2024-04-16 12:49:34.208717] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:35.276 [2024-04-16 12:49:34.208733] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:35.276 [2024-04-16 12:49:34.212278] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:35.276 [2024-04-16 12:49:34.221270] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:35.276 [2024-04-16 12:49:34.221837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:35.276 [2024-04-16 12:49:34.222094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:35.276 [2024-04-16 12:49:34.222124] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:35.276 [2024-04-16 12:49:34.222143] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:35.276 [2024-04-16 12:49:34.222388] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:35.276 [2024-04-16 12:49:34.222645] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:35.276 [2024-04-16 12:49:34.222672] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:35.276 [2024-04-16 12:49:34.222689] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:35.276 [2024-04-16 12:49:34.226237] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:35.276 [2024-04-16 12:49:34.235232] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:35.276 [2024-04-16 12:49:34.235754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:35.276 [2024-04-16 12:49:34.236027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:35.276 [2024-04-16 12:49:34.236057] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:35.276 [2024-04-16 12:49:34.236074] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:35.276 [2024-04-16 12:49:34.236312] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:35.276 [2024-04-16 12:49:34.236554] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:35.276 [2024-04-16 12:49:34.236590] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:35.276 [2024-04-16 12:49:34.236607] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:35.276 [2024-04-16 12:49:34.240151] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:35.276 [2024-04-16 12:49:34.249148] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:35.276 [2024-04-16 12:49:34.249635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:35.276 [2024-04-16 12:49:34.249818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:35.276 [2024-04-16 12:49:34.249847] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:35.276 [2024-04-16 12:49:34.249865] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:35.276 [2024-04-16 12:49:34.250107] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:35.276 [2024-04-16 12:49:34.250350] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:35.276 [2024-04-16 12:49:34.250375] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:35.276 [2024-04-16 12:49:34.250391] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:35.276 [2024-04-16 12:49:34.253947] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:35.276 [2024-04-16 12:49:34.263147] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:35.276 [2024-04-16 12:49:34.263658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:35.276 [2024-04-16 12:49:34.263875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:35.276 [2024-04-16 12:49:34.263904] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:35.276 [2024-04-16 12:49:34.263922] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:35.276 [2024-04-16 12:49:34.264161] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:35.276 [2024-04-16 12:49:34.264403] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:35.276 [2024-04-16 12:49:34.264429] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:35.276 [2024-04-16 12:49:34.264445] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:35.276 [2024-04-16 12:49:34.268005] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:35.276 [2024-04-16 12:49:34.276995] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:35.276 [2024-04-16 12:49:34.277480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:35.276 [2024-04-16 12:49:34.277689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:35.276 [2024-04-16 12:49:34.277719] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:35.276 [2024-04-16 12:49:34.277738] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:35.276 [2024-04-16 12:49:34.277976] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:35.276 [2024-04-16 12:49:34.278219] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:35.276 [2024-04-16 12:49:34.278244] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:35.276 [2024-04-16 12:49:34.278260] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:35.276 [2024-04-16 12:49:34.281810] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:35.276 [2024-04-16 12:49:34.290805] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:35.276 [2024-04-16 12:49:34.291320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:35.276 [2024-04-16 12:49:34.291579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:35.276 [2024-04-16 12:49:34.291609] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:35.276 [2024-04-16 12:49:34.291627] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:35.276 [2024-04-16 12:49:34.291865] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:35.276 [2024-04-16 12:49:34.292108] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:35.276 [2024-04-16 12:49:34.292133] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:35.276 [2024-04-16 12:49:34.292149] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:35.276 [2024-04-16 12:49:34.295702] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:35.276 [2024-04-16 12:49:34.304700] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:35.276 [2024-04-16 12:49:34.305204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:35.276 [2024-04-16 12:49:34.305436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:35.276 [2024-04-16 12:49:34.305464] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:35.276 [2024-04-16 12:49:34.305482] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:35.276 [2024-04-16 12:49:34.305729] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:35.276 [2024-04-16 12:49:34.305972] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:35.276 [2024-04-16 12:49:34.305997] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:35.276 [2024-04-16 12:49:34.306013] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:35.276 [2024-04-16 12:49:34.309559] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:35.276 [2024-04-16 12:49:34.318556] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:35.276 [2024-04-16 12:49:34.319100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:35.276 [2024-04-16 12:49:34.319370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:35.276 [2024-04-16 12:49:34.319398] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:35.276 [2024-04-16 12:49:34.319416] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:35.276 [2024-04-16 12:49:34.319664] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:35.276 [2024-04-16 12:49:34.319906] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:35.276 [2024-04-16 12:49:34.319931] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:35.276 [2024-04-16 12:49:34.319947] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:35.276 [2024-04-16 12:49:34.323488] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:35.276 [2024-04-16 12:49:34.332491] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:35.276 [2024-04-16 12:49:34.332990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:35.277 [2024-04-16 12:49:34.333331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:35.277 [2024-04-16 12:49:34.333383] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:35.277 [2024-04-16 12:49:34.333403] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:35.277 [2024-04-16 12:49:34.333655] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:35.277 [2024-04-16 12:49:34.333897] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:35.277 [2024-04-16 12:49:34.333922] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:35.277 [2024-04-16 12:49:34.333938] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:35.277 [2024-04-16 12:49:34.337483] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:35.536 [2024-04-16 12:49:34.346497] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:35.536 [2024-04-16 12:49:34.347036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:35.536 [2024-04-16 12:49:34.347299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:35.536 [2024-04-16 12:49:34.347350] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:35.536 [2024-04-16 12:49:34.347368] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:35.536 [2024-04-16 12:49:34.347621] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:35.536 [2024-04-16 12:49:34.347864] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:35.536 [2024-04-16 12:49:34.347890] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:35.536 [2024-04-16 12:49:34.347906] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:35.536 [2024-04-16 12:49:34.351453] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:35.536 [2024-04-16 12:49:34.360512] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:35.536 [2024-04-16 12:49:34.360979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:35.536 [2024-04-16 12:49:34.361214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:35.536 [2024-04-16 12:49:34.361264] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420
00:21:35.536 [2024-04-16 12:49:34.361283] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set
00:21:35.536 [2024-04-16 12:49:34.361524] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor
00:21:35.536 [2024-04-16 12:49:34.361780] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:35.536 [2024-04-16 12:49:34.361805] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:35.536 [2024-04-16 12:49:34.361821] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:35.536 [2024-04-16 12:49:34.365371] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:35.536 [2024-04-16 12:49:34.374377] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:35.536 [2024-04-16 12:49:34.374789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:35.536 [2024-04-16 12:49:34.374996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:35.536 [2024-04-16 12:49:34.375048] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420
00:21:35.536 [2024-04-16 12:49:34.375072] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set
00:21:35.536 [2024-04-16 12:49:34.375310] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor
00:21:35.536 [2024-04-16 12:49:34.375553] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:35.536 [2024-04-16 12:49:34.375610] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:35.536 [2024-04-16 12:49:34.375630] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:35.536 [2024-04-16 12:49:34.379182] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:35.536 [2024-04-16 12:49:34.388185] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:35.536 [2024-04-16 12:49:34.388626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:35.536 [2024-04-16 12:49:34.388787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:35.537 [2024-04-16 12:49:34.388816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420
00:21:35.537 [2024-04-16 12:49:34.388834] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set
00:21:35.537 [2024-04-16 12:49:34.389072] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor
00:21:35.537 [2024-04-16 12:49:34.389315] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:35.537 [2024-04-16 12:49:34.389339] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:35.537 [2024-04-16 12:49:34.389355] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:35.537 [2024-04-16 12:49:34.392914] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:35.537 [2024-04-16 12:49:34.402123] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:35.537 [2024-04-16 12:49:34.402637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:35.537 [2024-04-16 12:49:34.402807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:35.537 [2024-04-16 12:49:34.402836] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420
00:21:35.537 [2024-04-16 12:49:34.402854] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set
00:21:35.537 [2024-04-16 12:49:34.403091] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor
00:21:35.537 [2024-04-16 12:49:34.403334] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:35.537 [2024-04-16 12:49:34.403360] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:35.537 [2024-04-16 12:49:34.403376] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:35.537 [2024-04-16 12:49:34.406938] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:35.537 [2024-04-16 12:49:34.415947] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:35.537 [2024-04-16 12:49:34.416381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:35.537 [2024-04-16 12:49:34.416580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:35.537 [2024-04-16 12:49:34.416609] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420
00:21:35.537 [2024-04-16 12:49:34.416627] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set
00:21:35.537 [2024-04-16 12:49:34.416871] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor
00:21:35.537 [2024-04-16 12:49:34.417113] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:35.537 [2024-04-16 12:49:34.417138] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:35.537 [2024-04-16 12:49:34.417153] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:35.537 [2024-04-16 12:49:34.420705] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:35.537 [2024-04-16 12:49:34.429930] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:35.537 [2024-04-16 12:49:34.430423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:35.537 [2024-04-16 12:49:34.430671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:35.537 [2024-04-16 12:49:34.430700] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420
00:21:35.537 [2024-04-16 12:49:34.430719] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set
00:21:35.537 [2024-04-16 12:49:34.430956] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor
00:21:35.537 [2024-04-16 12:49:34.431199] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:35.537 [2024-04-16 12:49:34.431224] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:35.537 [2024-04-16 12:49:34.431240] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:35.537 [2024-04-16 12:49:34.434791] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:35.537 [2024-04-16 12:49:34.443791] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:35.537 [2024-04-16 12:49:34.444260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:35.537 [2024-04-16 12:49:34.444452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:35.537 [2024-04-16 12:49:34.444481] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420
00:21:35.537 [2024-04-16 12:49:34.444499] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set
00:21:35.537 [2024-04-16 12:49:34.444746] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor
00:21:35.537 [2024-04-16 12:49:34.444989] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:35.537 [2024-04-16 12:49:34.445013] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:35.537 [2024-04-16 12:49:34.445030] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:35.537 [2024-04-16 12:49:34.448579] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:35.537 [2024-04-16 12:49:34.457794] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:35.537 [2024-04-16 12:49:34.458217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:35.537 [2024-04-16 12:49:34.458403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:35.537 [2024-04-16 12:49:34.458451] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420
00:21:35.537 [2024-04-16 12:49:34.458470] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set
00:21:35.537 [2024-04-16 12:49:34.458718] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor
00:21:35.537 [2024-04-16 12:49:34.458967] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:35.537 [2024-04-16 12:49:34.458992] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:35.537 [2024-04-16 12:49:34.459009] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:35.537 [2024-04-16 12:49:34.462552] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:35.537 [2024-04-16 12:49:34.471763] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:35.537 [2024-04-16 12:49:34.472178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:35.537 [2024-04-16 12:49:34.472386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:35.537 [2024-04-16 12:49:34.472439] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420
00:21:35.537 [2024-04-16 12:49:34.472457] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set
00:21:35.537 [2024-04-16 12:49:34.472705] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor
00:21:35.537 [2024-04-16 12:49:34.472947] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:35.537 [2024-04-16 12:49:34.472973] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:35.537 [2024-04-16 12:49:34.472989] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:35.537 [2024-04-16 12:49:34.476533] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:35.537 [2024-04-16 12:49:34.485738] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:35.537 [2024-04-16 12:49:34.486230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:35.537 [2024-04-16 12:49:34.486466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:35.537 [2024-04-16 12:49:34.486524] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420
00:21:35.537 [2024-04-16 12:49:34.486542] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set
00:21:35.537 [2024-04-16 12:49:34.486787] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor
00:21:35.537 [2024-04-16 12:49:34.487040] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:35.537 [2024-04-16 12:49:34.487066] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:35.537 [2024-04-16 12:49:34.487082] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:35.537 [2024-04-16 12:49:34.490635] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:35.537 [2024-04-16 12:49:34.499632] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:35.537 [2024-04-16 12:49:34.500148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:35.537 [2024-04-16 12:49:34.500437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:35.537 [2024-04-16 12:49:34.500487] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420
00:21:35.537 [2024-04-16 12:49:34.500506] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set
00:21:35.537 [2024-04-16 12:49:34.500752] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor
00:21:35.537 [2024-04-16 12:49:34.500996] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:35.537 [2024-04-16 12:49:34.501026] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:35.537 [2024-04-16 12:49:34.501043] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:35.537 [2024-04-16 12:49:34.504619] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:35.537 [2024-04-16 12:49:34.513621] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:35.537 [2024-04-16 12:49:34.514129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:35.537 [2024-04-16 12:49:34.514424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:35.537 [2024-04-16 12:49:34.514476] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420
00:21:35.538 [2024-04-16 12:49:34.514494] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set
00:21:35.538 [2024-04-16 12:49:34.514741] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor
00:21:35.538 [2024-04-16 12:49:34.514984] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:35.538 [2024-04-16 12:49:34.515009] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:35.538 [2024-04-16 12:49:34.515025] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:35.538 [2024-04-16 12:49:34.518575] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:35.538 [2024-04-16 12:49:34.527580] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:35.538 [2024-04-16 12:49:34.528082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:35.538 [2024-04-16 12:49:34.528320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:35.538 [2024-04-16 12:49:34.528371] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420
00:21:35.538 [2024-04-16 12:49:34.528389] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set
00:21:35.538 [2024-04-16 12:49:34.528637] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor
00:21:35.538 [2024-04-16 12:49:34.528880] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:35.538 [2024-04-16 12:49:34.528905] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:35.538 [2024-04-16 12:49:34.528921] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:35.538 [2024-04-16 12:49:34.532459] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:35.538 [2024-04-16 12:49:34.541452] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:35.538 [2024-04-16 12:49:34.541959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:35.538 [2024-04-16 12:49:34.542239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:35.538 [2024-04-16 12:49:34.542288] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420
00:21:35.538 [2024-04-16 12:49:34.542306] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set
00:21:35.538 [2024-04-16 12:49:34.542545] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor
00:21:35.538 [2024-04-16 12:49:34.542797] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:35.538 [2024-04-16 12:49:34.542831] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:35.538 [2024-04-16 12:49:34.542853] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:35.538 [2024-04-16 12:49:34.546406] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:35.538 [2024-04-16 12:49:34.555419] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:35.538 [2024-04-16 12:49:34.555873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:35.538 [2024-04-16 12:49:34.556062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:35.538 [2024-04-16 12:49:34.556112] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420
00:21:35.538 [2024-04-16 12:49:34.556130] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set
00:21:35.538 [2024-04-16 12:49:34.556367] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor
00:21:35.538 [2024-04-16 12:49:34.556622] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:35.538 [2024-04-16 12:49:34.556648] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:35.538 [2024-04-16 12:49:34.556663] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:35.538 [2024-04-16 12:49:34.560213] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:35.538 [2024-04-16 12:49:34.569429] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:35.538 [2024-04-16 12:49:34.569952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:35.538 [2024-04-16 12:49:34.570255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:35.538 [2024-04-16 12:49:34.570307] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420
00:21:35.538 [2024-04-16 12:49:34.570325] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set
00:21:35.538 [2024-04-16 12:49:34.570581] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor
00:21:35.538 [2024-04-16 12:49:34.570826] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:35.538 [2024-04-16 12:49:34.570851] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:35.538 [2024-04-16 12:49:34.570867] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:35.538 [2024-04-16 12:49:34.574412] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:35.538 [2024-04-16 12:49:34.583411] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:35.538 [2024-04-16 12:49:34.583974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:35.538 [2024-04-16 12:49:34.584250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:35.538 [2024-04-16 12:49:34.584303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420
00:21:35.538 [2024-04-16 12:49:34.584332] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set
00:21:35.538 [2024-04-16 12:49:34.584583] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor
00:21:35.538 [2024-04-16 12:49:34.584827] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:35.538 [2024-04-16 12:49:34.584853] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:35.538 [2024-04-16 12:49:34.584869] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:35.538 [2024-04-16 12:49:34.588420] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:35.538 [2024-04-16 12:49:34.597416] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:35.538 [2024-04-16 12:49:34.597947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:35.538 [2024-04-16 12:49:34.598265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:35.538 [2024-04-16 12:49:34.598315] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420
00:21:35.538 [2024-04-16 12:49:34.598333] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set
00:21:35.538 [2024-04-16 12:49:34.598582] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor
00:21:35.538 [2024-04-16 12:49:34.598824] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:35.538 [2024-04-16 12:49:34.598850] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:35.538 [2024-04-16 12:49:34.598866] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:35.538 [2024-04-16 12:49:34.602412] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:35.807 [2024-04-16 12:49:34.611412] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:35.807 [2024-04-16 12:49:34.611903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:35.807 [2024-04-16 12:49:34.612108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:35.807 [2024-04-16 12:49:34.612161] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420
00:21:35.807 [2024-04-16 12:49:34.612179] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set
00:21:35.807 [2024-04-16 12:49:34.612415] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor
00:21:35.807 [2024-04-16 12:49:34.612671] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:35.807 [2024-04-16 12:49:34.612697] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:35.807 [2024-04-16 12:49:34.612714] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:35.807 [2024-04-16 12:49:34.616273] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:35.807 [2024-04-16 12:49:34.625272] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:35.807 [2024-04-16 12:49:34.625778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:35.807 [2024-04-16 12:49:34.626071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:35.807 [2024-04-16 12:49:34.626101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420
00:21:35.807 [2024-04-16 12:49:34.626119] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set
00:21:35.807 [2024-04-16 12:49:34.626357] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor
00:21:35.807 [2024-04-16 12:49:34.626610] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:35.807 [2024-04-16 12:49:34.626636] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:35.807 [2024-04-16 12:49:34.626653] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:35.807 [2024-04-16 12:49:34.630199] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:35.807 [2024-04-16 12:49:34.639196] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:35.807 [2024-04-16 12:49:34.639701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:35.807 [2024-04-16 12:49:34.640028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:35.807 [2024-04-16 12:49:34.640079] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420
00:21:35.807 [2024-04-16 12:49:34.640098] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set
00:21:35.807 [2024-04-16 12:49:34.640336] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor
00:21:35.807 [2024-04-16 12:49:34.640590] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:35.807 [2024-04-16 12:49:34.640616] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:35.807 [2024-04-16 12:49:34.640632] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:35.807 [2024-04-16 12:49:34.644178] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:35.807 [2024-04-16 12:49:34.653173] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:35.807 [2024-04-16 12:49:34.653789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:35.807 [2024-04-16 12:49:34.654122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:35.807 [2024-04-16 12:49:34.654175] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420
00:21:35.807 [2024-04-16 12:49:34.654195] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set
00:21:35.807 [2024-04-16 12:49:34.654440] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor
00:21:35.807 [2024-04-16 12:49:34.654702] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:35.807 [2024-04-16 12:49:34.654729] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:35.807 [2024-04-16 12:49:34.654746] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:35.807 [2024-04-16 12:49:34.658298] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
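Each ~14 ms cycle above is the same sequence: a disconnect notice, two refused connect() attempts, the qpair connection error, a failed flush on the now-closed socket (EBADF), and a failed controller reset before the next retry. A hedged sketch of that outer retry shape follows; it is an illustration, not SPDK's actual bdev_nvme reset path, and try_connect(), MAX_ATTEMPTS, and the 14 ms delay are assumptions chosen to match the cadence of the timestamps in this log:

/* Illustrative reconnect loop (assumed shape, not SPDK code). */
#include <errno.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

#define MAX_ATTEMPTS 8  /* assumption: bound the retries for the sketch */

/* One reconnect attempt: true on success, false on failure
 * (ECONNREFUSED while the target is down, as in the log). */
static bool try_connect(const char *ip, uint16_t port)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return false;

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(port);
    inet_pton(AF_INET, ip, &addr.sin_addr);

    bool ok = connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0;
    if (!ok)
        fprintf(stderr, "connect() failed, errno = %d\n", errno);
    close(fd);
    return ok;
}

int main(void)
{
    /* 127.0.0.1 stands in for the 10.0.0.2 target in this log. */
    for (int attempt = 1; attempt <= MAX_ATTEMPTS; attempt++) {
        fprintf(stderr, "resetting controller (attempt %d)\n", attempt);
        if (try_connect("127.0.0.1", 4420)) {
            fprintf(stderr, "controller reconnected\n");
            return 0;
        }
        fprintf(stderr, "Resetting controller failed.\n");
        /* ~14 ms between cycles, matching the log's cadence. */
        struct timespec delay = { .tv_sec = 0, .tv_nsec = 14 * 1000 * 1000 };
        nanosleep(&delay, NULL);
    }
    return 1;
}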
00:21:35.807 [2024-04-16 12:49:34.667093] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:35.807 [2024-04-16 12:49:34.667610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:35.807 [2024-04-16 12:49:34.667884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:35.807 [2024-04-16 12:49:34.667936] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:35.807 [2024-04-16 12:49:34.667955] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:35.807 [2024-04-16 12:49:34.668193] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:35.807 [2024-04-16 12:49:34.668434] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:35.807 [2024-04-16 12:49:34.668460] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:35.807 [2024-04-16 12:49:34.668476] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:35.807 [2024-04-16 12:49:34.672040] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:35.807 [2024-04-16 12:49:34.681065] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:35.807 [2024-04-16 12:49:34.681576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:35.807 [2024-04-16 12:49:34.681822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:35.807 [2024-04-16 12:49:34.681872] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:35.807 [2024-04-16 12:49:34.681891] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:35.807 [2024-04-16 12:49:34.682128] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:35.807 [2024-04-16 12:49:34.682372] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:35.807 [2024-04-16 12:49:34.682397] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:35.807 [2024-04-16 12:49:34.682414] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:35.807 [2024-04-16 12:49:34.685973] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:35.807 [2024-04-16 12:49:34.694972] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:35.807 [2024-04-16 12:49:34.695614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:35.807 [2024-04-16 12:49:34.695896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:35.807 [2024-04-16 12:49:34.695948] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:35.807 [2024-04-16 12:49:34.695968] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:35.807 [2024-04-16 12:49:34.696212] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:35.807 [2024-04-16 12:49:34.696455] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:35.807 [2024-04-16 12:49:34.696481] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:35.807 [2024-04-16 12:49:34.696498] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:35.807 [2024-04-16 12:49:34.700060] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:35.807 [2024-04-16 12:49:34.708852] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:35.807 [2024-04-16 12:49:34.709374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:35.807 [2024-04-16 12:49:34.709697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:35.807 [2024-04-16 12:49:34.709728] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:35.807 [2024-04-16 12:49:34.709747] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:35.807 [2024-04-16 12:49:34.709986] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:35.807 [2024-04-16 12:49:34.710230] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:35.807 [2024-04-16 12:49:34.710256] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:35.807 [2024-04-16 12:49:34.710272] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:35.807 [2024-04-16 12:49:34.713830] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:35.807 [2024-04-16 12:49:34.722833] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:35.807 [2024-04-16 12:49:34.723345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:35.807 [2024-04-16 12:49:34.723667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:35.807 [2024-04-16 12:49:34.723703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:35.807 [2024-04-16 12:49:34.723723] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:35.807 [2024-04-16 12:49:34.723961] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:35.807 [2024-04-16 12:49:34.724204] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:35.807 [2024-04-16 12:49:34.724230] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:35.807 [2024-04-16 12:49:34.724247] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:35.807 [2024-04-16 12:49:34.727804] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:35.807 [2024-04-16 12:49:34.736806] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:35.807 [2024-04-16 12:49:34.737270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:35.807 [2024-04-16 12:49:34.737597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:35.807 [2024-04-16 12:49:34.737625] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:35.807 [2024-04-16 12:49:34.737642] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:35.807 [2024-04-16 12:49:34.737881] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:35.807 [2024-04-16 12:49:34.738124] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:35.807 [2024-04-16 12:49:34.738151] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:35.807 [2024-04-16 12:49:34.738167] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:35.807 [2024-04-16 12:49:34.741725] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:35.807 [2024-04-16 12:49:34.750724] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:35.807 [2024-04-16 12:49:34.751336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:35.807 [2024-04-16 12:49:34.751657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:35.807 [2024-04-16 12:49:34.751690] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:35.807 [2024-04-16 12:49:34.751709] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:35.807 [2024-04-16 12:49:34.751954] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:35.807 [2024-04-16 12:49:34.752197] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:35.807 [2024-04-16 12:49:34.752223] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:35.807 [2024-04-16 12:49:34.752240] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:35.807 [2024-04-16 12:49:34.755803] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:35.807 [2024-04-16 12:49:34.764597] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:35.807 [2024-04-16 12:49:34.765204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:35.807 [2024-04-16 12:49:34.765484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:35.807 [2024-04-16 12:49:34.765534] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:35.807 [2024-04-16 12:49:34.765559] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:35.807 [2024-04-16 12:49:34.765822] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:35.807 [2024-04-16 12:49:34.766067] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:35.807 [2024-04-16 12:49:34.766093] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:35.807 [2024-04-16 12:49:34.766110] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:35.807 [2024-04-16 12:49:34.769678] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:35.807 [2024-04-16 12:49:34.778470] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:35.807 [2024-04-16 12:49:34.778962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:35.807 [2024-04-16 12:49:34.779113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:35.807 [2024-04-16 12:49:34.779143] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:35.807 [2024-04-16 12:49:34.779161] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:35.807 [2024-04-16 12:49:34.779398] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:35.808 [2024-04-16 12:49:34.779654] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:35.808 [2024-04-16 12:49:34.779680] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:35.808 [2024-04-16 12:49:34.779697] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:35.808 [2024-04-16 12:49:34.783243] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:35.808 [2024-04-16 12:49:34.792440] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:35.808 [2024-04-16 12:49:34.792945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:35.808 [2024-04-16 12:49:34.793201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:35.808 [2024-04-16 12:49:34.793232] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:35.808 [2024-04-16 12:49:34.793250] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:35.808 [2024-04-16 12:49:34.793489] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:35.808 [2024-04-16 12:49:34.793748] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:35.808 [2024-04-16 12:49:34.793775] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:35.808 [2024-04-16 12:49:34.793791] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:35.808 [2024-04-16 12:49:34.797346] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:35.808 [2024-04-16 12:49:34.806360] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:35.808 [2024-04-16 12:49:34.806983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:35.808 [2024-04-16 12:49:34.807259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:35.808 [2024-04-16 12:49:34.807311] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:35.808 [2024-04-16 12:49:34.807330] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:35.808 [2024-04-16 12:49:34.807595] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:35.808 [2024-04-16 12:49:34.807841] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:35.808 [2024-04-16 12:49:34.807867] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:35.808 [2024-04-16 12:49:34.807883] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:35.808 [2024-04-16 12:49:34.811435] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:35.808 [2024-04-16 12:49:34.820242] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:35.808 [2024-04-16 12:49:34.820791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:35.808 [2024-04-16 12:49:34.821086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:35.808 [2024-04-16 12:49:34.821138] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:35.808 [2024-04-16 12:49:34.821157] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:35.808 [2024-04-16 12:49:34.821395] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:35.808 [2024-04-16 12:49:34.821650] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:35.808 [2024-04-16 12:49:34.821676] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:35.808 [2024-04-16 12:49:34.821692] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:35.808 [2024-04-16 12:49:34.825240] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:35.808 [2024-04-16 12:49:34.834246] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:35.808 [2024-04-16 12:49:34.834731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:35.808 [2024-04-16 12:49:34.834905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:35.808 [2024-04-16 12:49:34.834957] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:35.808 [2024-04-16 12:49:34.834975] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:35.808 [2024-04-16 12:49:34.835218] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:35.808 [2024-04-16 12:49:34.835461] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:35.808 [2024-04-16 12:49:34.835486] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:35.808 [2024-04-16 12:49:34.835502] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:35.808 [2024-04-16 12:49:34.839069] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:35.808 [2024-04-16 12:49:34.848071] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:35.808 [2024-04-16 12:49:34.848635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:35.808 [2024-04-16 12:49:34.848964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:35.808 [2024-04-16 12:49:34.849016] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:35.808 [2024-04-16 12:49:34.849035] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:35.808 [2024-04-16 12:49:34.849280] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:35.808 [2024-04-16 12:49:34.849531] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:35.808 [2024-04-16 12:49:34.849557] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:35.808 [2024-04-16 12:49:34.849588] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:35.808 [2024-04-16 12:49:34.853141] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:35.808 [2024-04-16 12:49:34.861936] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:35.808 [2024-04-16 12:49:34.862555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:35.808 [2024-04-16 12:49:34.862923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:35.808 [2024-04-16 12:49:34.862976] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:35.808 [2024-04-16 12:49:34.862995] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:35.808 [2024-04-16 12:49:34.863241] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:35.808 [2024-04-16 12:49:34.863486] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:35.808 [2024-04-16 12:49:34.863511] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:35.808 [2024-04-16 12:49:34.863528] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:35.808 [2024-04-16 12:49:34.867093] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:36.072 [2024-04-16 12:49:34.875893] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:36.072 [2024-04-16 12:49:34.876410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:36.072 [2024-04-16 12:49:34.876642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:36.072 [2024-04-16 12:49:34.876671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:36.072 [2024-04-16 12:49:34.876690] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:36.072 [2024-04-16 12:49:34.876928] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:36.072 [2024-04-16 12:49:34.877171] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:36.072 [2024-04-16 12:49:34.877196] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:36.072 [2024-04-16 12:49:34.877212] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:36.072 [2024-04-16 12:49:34.880770] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:36.072 [2024-04-16 12:49:34.889769] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:36.072 [2024-04-16 12:49:34.890383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:36.072 [2024-04-16 12:49:34.890705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:36.072 [2024-04-16 12:49:34.890776] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:36.072 [2024-04-16 12:49:34.890795] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:36.072 [2024-04-16 12:49:34.891039] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:36.072 [2024-04-16 12:49:34.891282] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:36.072 [2024-04-16 12:49:34.891308] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:36.072 [2024-04-16 12:49:34.891331] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:36.072 [2024-04-16 12:49:34.894897] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:36.072 [2024-04-16 12:49:34.903704] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:36.072 [2024-04-16 12:49:34.904146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:36.072 [2024-04-16 12:49:34.904347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:36.072 [2024-04-16 12:49:34.904377] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:36.072 [2024-04-16 12:49:34.904396] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:36.072 [2024-04-16 12:49:34.904647] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:36.072 [2024-04-16 12:49:34.904892] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:36.072 [2024-04-16 12:49:34.904916] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:36.072 [2024-04-16 12:49:34.904943] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:36.072 [2024-04-16 12:49:34.908502] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:36.072 [2024-04-16 12:49:34.917519] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:36.072 [2024-04-16 12:49:34.918195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:36.072 [2024-04-16 12:49:34.918485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:36.072 [2024-04-16 12:49:34.918517] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:36.072 [2024-04-16 12:49:34.918537] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:36.072 [2024-04-16 12:49:34.918794] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:36.072 [2024-04-16 12:49:34.919048] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:36.072 [2024-04-16 12:49:34.919075] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:36.072 [2024-04-16 12:49:34.919091] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:36.072 [2024-04-16 12:49:34.922653] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:36.072 [2024-04-16 12:49:34.931443] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:36.072 [2024-04-16 12:49:34.931994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:36.072 [2024-04-16 12:49:34.932253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:36.072 [2024-04-16 12:49:34.932283] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:36.072 [2024-04-16 12:49:34.932302] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:36.072 [2024-04-16 12:49:34.932540] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:36.072 [2024-04-16 12:49:34.932796] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:36.072 [2024-04-16 12:49:34.932821] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:36.072 [2024-04-16 12:49:34.932837] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:36.072 [2024-04-16 12:49:34.936395] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:36.072 [2024-04-16 12:49:34.945404] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:36.072 [2024-04-16 12:49:34.945960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:36.072 [2024-04-16 12:49:34.946178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:36.072 [2024-04-16 12:49:34.946215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420
00:21:36.072 [2024-04-16 12:49:34.946233] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set
00:21:36.072 [2024-04-16 12:49:34.946471] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor
00:21:36.072 [2024-04-16 12:49:34.946725] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:36.072 [2024-04-16 12:49:34.946750] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:36.072 [2024-04-16 12:49:34.946766] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:36.072 [2024-04-16 12:49:34.950321] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:36.072 [2024-04-16 12:49:34.959331] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:36.072 [2024-04-16 12:49:34.959849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:36.072 [2024-04-16 12:49:34.960105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:36.072 [2024-04-16 12:49:34.960136] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420
00:21:36.072 [2024-04-16 12:49:34.960154] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set
00:21:36.072 [2024-04-16 12:49:34.960392] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor
00:21:36.072 [2024-04-16 12:49:34.960648] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:36.072 [2024-04-16 12:49:34.960674] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:36.072 [2024-04-16 12:49:34.960691] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:36.072 [2024-04-16 12:49:34.964235] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:36.072 [2024-04-16 12:49:34.973242] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:36.072 [2024-04-16 12:49:34.973746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:36.072 [2024-04-16 12:49:34.973961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:36.072 [2024-04-16 12:49:34.974009] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420
00:21:36.072 [2024-04-16 12:49:34.974027] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set
00:21:36.072 [2024-04-16 12:49:34.974274] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor
00:21:36.072 [2024-04-16 12:49:34.974516] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:36.072 [2024-04-16 12:49:34.974542] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:36.072 [2024-04-16 12:49:34.974558] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:36.072 [2024-04-16 12:49:34.978114] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:36.072 [2024-04-16 12:49:34.987114] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:36.072 [2024-04-16 12:49:34.987775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:36.072 [2024-04-16 12:49:34.988089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:36.072 [2024-04-16 12:49:34.988144] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420
00:21:36.072 [2024-04-16 12:49:34.988162] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set
00:21:36.072 [2024-04-16 12:49:34.988406] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor
00:21:36.072 [2024-04-16 12:49:34.988669] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:36.072 [2024-04-16 12:49:34.988696] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:36.072 [2024-04-16 12:49:34.988712] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:36.072 [2024-04-16 12:49:34.992263] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:36.072 [2024-04-16 12:49:35.001056] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:36.072 [2024-04-16 12:49:35.001625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:36.072 [2024-04-16 12:49:35.001909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:36.072 [2024-04-16 12:49:35.001960] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420
00:21:36.072 [2024-04-16 12:49:35.001978] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set
00:21:36.072 [2024-04-16 12:49:35.002223] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor
00:21:36.072 [2024-04-16 12:49:35.002466] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:36.072 [2024-04-16 12:49:35.002491] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:36.072 [2024-04-16 12:49:35.002507] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:36.073 [2024-04-16 12:49:35.006069] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:36.073 [2024-04-16 12:49:35.015065] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:36.073 [2024-04-16 12:49:35.015581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:36.073 [2024-04-16 12:49:35.015779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:36.073 [2024-04-16 12:49:35.015828] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420
00:21:36.073 [2024-04-16 12:49:35.015846] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set
00:21:36.073 [2024-04-16 12:49:35.016085] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor
00:21:36.073 [2024-04-16 12:49:35.016329] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:36.073 [2024-04-16 12:49:35.016355] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:36.073 [2024-04-16 12:49:35.016371] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:36.073 [2024-04-16 12:49:35.019933] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
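The errno = 111 repeated above is ECONNREFUSED: bdev_nvme keeps polling spdk_nvme_ctrlr_reconnect_poll_async, but nothing is accepting TCP connections on 10.0.0.2:4420 because the target has just been killed (see the bdevperf.sh line below). A one-off bash probe, using the address from the log and not part of the test scripts, would see the same refusal:

    # Hypothetical probe: bash's /dev/tcp issues a connect(), which fails
    # immediately with ECONNREFUSED (errno 111) while no listener exists.
    if ! timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
        echo "10.0.0.2:4420 refused the connection, as the initiator logs"
    fi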
00:21:36.073 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1265226 Killed "${NVMF_APP[@]}" "$@"
00:21:36.073 12:49:35 -- host/bdevperf.sh@36 -- # tgt_init
00:21:36.073 12:49:35 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:21:36.073 12:49:35 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt
00:21:36.073 12:49:35 -- common/autotest_common.sh@710 -- # xtrace_disable
00:21:36.073 12:49:35 -- common/autotest_common.sh@10 -- # set +x
00:21:36.073 12:49:35 -- nvmf/common.sh@470 -- # nvmfpid=1266194
00:21:36.073 12:49:35 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:21:36.073 12:49:35 -- nvmf/common.sh@471 -- # waitforlisten 1266194
00:21:36.073 12:49:35 -- common/autotest_common.sh@817 -- # '[' -z 1266194 ']'
00:21:36.073 12:49:35 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:21:36.073 12:49:35 -- common/autotest_common.sh@822 -- # local max_retries=100
00:21:36.073 12:49:35 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:21:36.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:21:36.073 12:49:35 -- common/autotest_common.sh@826 -- # xtrace_disable
00:21:36.073 12:49:35 -- common/autotest_common.sh@10 -- # set +x
00:21:36.073 [2024-04-16 12:49:35.028953] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:36.073 [2024-04-16 12:49:35.029406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:36.073 [2024-04-16 12:49:35.029627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:36.073 [2024-04-16 12:49:35.029658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420
00:21:36.073 [2024-04-16 12:49:35.029677] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set
00:21:36.073 [2024-04-16 12:49:35.029916] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor
00:21:36.073 [2024-04-16 12:49:35.030160] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:36.073 [2024-04-16 12:49:35.030185] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:36.073 [2024-04-16 12:49:35.030201] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:36.073 [2024-04-16 12:49:35.033754] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
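The xtrace above shows the recovery path: tgt_init relaunches nvmf_tgt inside the cvl_0_0_ns_spdk network namespace, then waitforlisten blocks until pid 1266194 exposes the RPC socket at /var/tmp/spdk.sock (rpc_addr and max_retries=100 in the trace). Roughly, the wait amounts to the loop below; this is a simplified sketch, not the actual helper from autotest_common.sh:

    # Sketch of the waitforlisten idea: poll until the target process is
    # alive and its RPC UNIX socket exists, giving up after max_retries.
    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1  # target died during startup
            [[ -S $rpc_addr ]] && return 0          # RPC socket is up
            sleep 0.1
        done
        return 1                                    # timed out
    }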
00:21:36.073 [2024-04-16 12:49:35.042954] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:36.073 [2024-04-16 12:49:35.043357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:36.073 [2024-04-16 12:49:35.043623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:36.073 [2024-04-16 12:49:35.043653] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420
00:21:36.073 [2024-04-16 12:49:35.043671] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set
00:21:36.073 [2024-04-16 12:49:35.043909] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor
00:21:36.073 [2024-04-16 12:49:35.044151] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:36.073 [2024-04-16 12:49:35.044175] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:36.073 [2024-04-16 12:49:35.044191] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:36.073 [2024-04-16 12:49:35.047745] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:36.073 [2024-04-16 12:49:35.056961] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:36.073 [2024-04-16 12:49:35.057378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:36.073 [2024-04-16 12:49:35.057619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:36.073 [2024-04-16 12:49:35.057655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420
00:21:36.073 [2024-04-16 12:49:35.057674] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set
00:21:36.073 [2024-04-16 12:49:35.057912] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor
00:21:36.073 [2024-04-16 12:49:35.058157] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:36.073 [2024-04-16 12:49:35.058182] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:36.073 [2024-04-16 12:49:35.058198] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:36.073 [2024-04-16 12:49:35.061755] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:36.073 [2024-04-16 12:49:35.070968] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:36.073 [2024-04-16 12:49:35.071377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:36.073 [2024-04-16 12:49:35.071532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:36.073 [2024-04-16 12:49:35.071573] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420
00:21:36.073 [2024-04-16 12:49:35.071593] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set
00:21:36.073 [2024-04-16 12:49:35.071832] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor
00:21:36.073 [2024-04-16 12:49:35.072081] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:36.073 [2024-04-16 12:49:35.072106] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:36.073 [2024-04-16 12:49:35.072121] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:36.073 [2024-04-16 12:49:35.072689] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization...
00:21:36.073 [2024-04-16 12:49:35.072771] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:21:36.073 [2024-04-16 12:49:35.075671] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:36.073 [2024-04-16 12:49:35.084878] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:36.073 [2024-04-16 12:49:35.085342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:36.073 [2024-04-16 12:49:35.085594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:36.073 [2024-04-16 12:49:35.085623] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420
00:21:36.073 [2024-04-16 12:49:35.085642] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set
00:21:36.073 [2024-04-16 12:49:35.085879] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor
00:21:36.073 [2024-04-16 12:49:35.086122] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:36.073 [2024-04-16 12:49:35.086146] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:36.073 [2024-04-16 12:49:35.086162] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:36.073 [2024-04-16 12:49:35.089712] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
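The two initialization records interleaved above show the relaunched target coming up: SPDK v24.05-pre on DPDK 24.03.0, with --file-prefix=spdk0 keeping its shared-memory files separate from the initiator's, and core mask 0xE (the -m 0xE from the trace, forwarded to EAL as -c 0xE) selecting cores 1-3. Decoding such a mask, purely as an illustration:

    # Illustrative mask decode: 0xE = 0b1110, i.e. cores 1, 2 and 3 --
    # hence "Total cores available: 3" and the three reactors further down.
    mask=0xE
    for ((core = 0; core < 8; core++)); do
        (( (mask >> core) & 1 )) && echo "core $core selected"
    done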
00:21:36.073 [2024-04-16 12:49:35.098708] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:36.073 [2024-04-16 12:49:35.099100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:36.073 [2024-04-16 12:49:35.099328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:36.073 [2024-04-16 12:49:35.099377] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420
00:21:36.073 [2024-04-16 12:49:35.099395] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set
00:21:36.073 [2024-04-16 12:49:35.099643] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor
00:21:36.073 [2024-04-16 12:49:35.099886] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:36.073 [2024-04-16 12:49:35.099910] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:36.074 [2024-04-16 12:49:35.099926] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:36.074 [2024-04-16 12:49:35.103478] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:36.074 [2024-04-16 12:49:35.112689] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:36.074 EAL: No free 2048 kB hugepages reported on node 1
00:21:36.074 [2024-04-16 12:49:35.113106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:36.074 [2024-04-16 12:49:35.113352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:36.074 [2024-04-16 12:49:35.113398] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420
00:21:36.074 [2024-04-16 12:49:35.113416] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set
00:21:36.074 [2024-04-16 12:49:35.113664] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor
00:21:36.074 [2024-04-16 12:49:35.113907] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:36.074 [2024-04-16 12:49:35.113931] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:36.074 [2024-04-16 12:49:35.113948] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:36.074 [2024-04-16 12:49:35.117497] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
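The "EAL: No free 2048 kB hugepages reported on node 1" notice interleaved above is informational: NUMA node 1 simply has no 2 MB hugepages reserved, and startup proceeds as long as the allocation is satisfied elsewhere (the reactors do start further down). Per-node reservations can be inspected with a check like this (illustrative, not part of the test run):

    # Illustrative: list reserved vs. free 2 MB hugepages per NUMA node.
    for n in /sys/devices/system/node/node*/hugepages/hugepages-2048kB; do
        [ -d "$n" ] || continue
        printf '%s: nr=%s free=%s\n' "$n" \
            "$(cat "$n/nr_hugepages")" "$(cat "$n/free_hugepages")"
    done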
00:21:36.074 [2024-04-16 12:49:35.126526] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:36.074 [2024-04-16 12:49:35.126967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:36.074 [2024-04-16 12:49:35.127143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:36.074 [2024-04-16 12:49:35.127172] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420
00:21:36.074 [2024-04-16 12:49:35.127190] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set
00:21:36.074 [2024-04-16 12:49:35.127430] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor
00:21:36.074 [2024-04-16 12:49:35.127682] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:36.074 [2024-04-16 12:49:35.127706] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:36.074 [2024-04-16 12:49:35.127722] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:36.074 [2024-04-16 12:49:35.131295] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:36.334 [2024-04-16 12:49:35.140733] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:36.334 [2024-04-16 12:49:35.141153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:36.334 [2024-04-16 12:49:35.141317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:36.334 [2024-04-16 12:49:35.141346] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420
00:21:36.334 [2024-04-16 12:49:35.141364] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set
00:21:36.334 [2024-04-16 12:49:35.141621] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor
00:21:36.334 [2024-04-16 12:49:35.141864] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:36.334 [2024-04-16 12:49:35.141888] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:36.334 [2024-04-16 12:49:35.141905] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:36.334 [2024-04-16 12:49:35.145452] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:36.334 [2024-04-16 12:49:35.154717] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:36.334 [2024-04-16 12:49:35.155132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:36.334 [2024-04-16 12:49:35.155318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:36.334 [2024-04-16 12:49:35.155353] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420
00:21:36.334 [2024-04-16 12:49:35.155371] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set
00:21:36.334 [2024-04-16 12:49:35.155620] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor
00:21:36.334 [2024-04-16 12:49:35.155863] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:36.334 [2024-04-16 12:49:35.155890] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:36.334 [2024-04-16 12:49:35.155906] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:36.334 [2024-04-16 12:49:35.156476] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3
00:21:36.334 [2024-04-16 12:49:35.159453] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:36.334 [2024-04-16 12:49:35.168765] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:36.334 [2024-04-16 12:49:35.169287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:36.334 [2024-04-16 12:49:35.169468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:36.335 [2024-04-16 12:49:35.169496] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420
00:21:36.335 [2024-04-16 12:49:35.169518] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set
00:21:36.335 [2024-04-16 12:49:35.169773] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor
00:21:36.335 [2024-04-16 12:49:35.170034] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:36.335 [2024-04-16 12:49:35.170059] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:36.335 [2024-04-16 12:49:35.170078] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:36.335 [2024-04-16 12:49:35.173664] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:36.335 [2024-04-16 12:49:35.182666] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:36.335 [2024-04-16 12:49:35.183200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:36.335 [2024-04-16 12:49:35.183403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:36.335 [2024-04-16 12:49:35.183445] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420
00:21:36.335 [2024-04-16 12:49:35.183464] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set
00:21:36.335 [2024-04-16 12:49:35.183712] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor
00:21:36.335 [2024-04-16 12:49:35.183955] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:36.335 [2024-04-16 12:49:35.183981] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:36.335 [2024-04-16 12:49:35.183999] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:36.335 [2024-04-16 12:49:35.187556] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:36.335 [2024-04-16 12:49:35.196596] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:36.335 [2024-04-16 12:49:35.197140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:36.335 [2024-04-16 12:49:35.197379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:36.335 [2024-04-16 12:49:35.197410] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420
00:21:36.335 [2024-04-16 12:49:35.197429] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set
00:21:36.335 [2024-04-16 12:49:35.197677] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor
00:21:36.335 [2024-04-16 12:49:35.197920] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:36.335 [2024-04-16 12:49:35.197946] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:36.335 [2024-04-16 12:49:35.197962] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:36.335 [2024-04-16 12:49:35.201504] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:36.335 [2024-04-16 12:49:35.210503] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:36.335 [2024-04-16 12:49:35.210892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:36.335 [2024-04-16 12:49:35.211083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:36.335 [2024-04-16 12:49:35.211112] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420
00:21:36.335 [2024-04-16 12:49:35.211130] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set
00:21:36.335 [2024-04-16 12:49:35.211368] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor
00:21:36.335 [2024-04-16 12:49:35.211624] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:36.335 [2024-04-16 12:49:35.211651] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:36.335 [2024-04-16 12:49:35.211668] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:36.335 [2024-04-16 12:49:35.215209] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:36.335 [2024-04-16 12:49:35.224435] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:36.335 [2024-04-16 12:49:35.224907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:36.335 [2024-04-16 12:49:35.225049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:36.335 [2024-04-16 12:49:35.225079] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420
00:21:36.335 [2024-04-16 12:49:35.225122] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set
00:21:36.335 [2024-04-16 12:49:35.225372] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor
00:21:36.335 [2024-04-16 12:49:35.225628] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:36.335 [2024-04-16 12:49:35.225653] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:36.335 [2024-04-16 12:49:35.225671] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:36.335 [2024-04-16 12:49:35.229248] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:36.335 [2024-04-16 12:49:35.238460] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:36.335 [2024-04-16 12:49:35.239010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:36.335 [2024-04-16 12:49:35.239252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:36.335 [2024-04-16 12:49:35.239281] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420
00:21:36.335 [2024-04-16 12:49:35.239301] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set
00:21:36.335 [2024-04-16 12:49:35.239542] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor
00:21:36.335 [2024-04-16 12:49:35.239806] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:36.335 [2024-04-16 12:49:35.239831] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:36.335 [2024-04-16 12:49:35.239848] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:36.335 [2024-04-16 12:49:35.243403] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:36.335 [2024-04-16 12:49:35.252391] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:36.335 [2024-04-16 12:49:35.252820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:36.335 [2024-04-16 12:49:35.253015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:36.335 [2024-04-16 12:49:35.253054] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420
00:21:36.335 [2024-04-16 12:49:35.253073] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set
00:21:36.335 [2024-04-16 12:49:35.253311] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor
00:21:36.335 [2024-04-16 12:49:35.253554] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:36.335 [2024-04-16 12:49:35.253589] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:36.335 [2024-04-16 12:49:35.253606] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:36.335 [2024-04-16 12:49:35.257148] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:36.335 [2024-04-16 12:49:35.266346] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:36.335 [2024-04-16 12:49:35.266875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:36.335 [2024-04-16 12:49:35.267056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:36.335 [2024-04-16 12:49:35.267086] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420
00:21:36.335 [2024-04-16 12:49:35.267106] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set
00:21:36.335 [2024-04-16 12:49:35.267362] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor
00:21:36.335 [2024-04-16 12:49:35.267615] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:36.335 [2024-04-16 12:49:35.267641] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:36.335 [2024-04-16 12:49:35.267657] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:36.335 [2024-04-16 12:49:35.271211] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:36.335 [2024-04-16 12:49:35.276567] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:21:36.335 [2024-04-16 12:49:35.276608] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:21:36.335 [2024-04-16 12:49:35.276627] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:21:36.335 [2024-04-16 12:49:35.276641] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running.
00:21:36.335 [2024-04-16 12:49:35.276653] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
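The app_setup_trace notices above spell out how to inspect the tracepoints enabled by -e 0xFFFF. Putting the two commands the log itself prints together (the copy destination is just an example):

    # As the notices suggest: snapshot live events over shm id 0, or keep
    # the shared-memory trace file for offline analysis.
    spdk_trace -s nvmf -i 0
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0  # example destination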
00:21:36.335 [2024-04-16 12:49:35.276735] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:21:36.335 [2024-04-16 12:49:35.276795] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:21:36.335 [2024-04-16 12:49:35.276790] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:21:36.335 [2024-04-16 12:49:35.280227] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:36.335 [2024-04-16 12:49:35.280752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:36.335 [2024-04-16 12:49:35.280941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:36.335 [2024-04-16 12:49:35.280970] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420
00:21:36.335 [2024-04-16 12:49:35.280988] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set
00:21:36.336 [2024-04-16 12:49:35.281228] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor
00:21:36.336 [2024-04-16 12:49:35.281472] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:36.336 [2024-04-16 12:49:35.281498] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:36.336 [2024-04-16 12:49:35.281514] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:36.336 [2024-04-16 12:49:35.285066] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:36.336 [2024-04-16 12:49:35.294087] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:36.336 [2024-04-16 12:49:35.294703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:36.336 [2024-04-16 12:49:35.294864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:36.336 [2024-04-16 12:49:35.294893] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420
00:21:36.336 [2024-04-16 12:49:35.294914] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set
00:21:36.336 [2024-04-16 12:49:35.295171] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor
00:21:36.336 [2024-04-16 12:49:35.295418] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:36.336 [2024-04-16 12:49:35.295444] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:36.336 [2024-04-16 12:49:35.295462] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:36.336 [2024-04-16 12:49:35.299053] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:36.336 [2024-04-16 12:49:35.308073] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:36.336 [2024-04-16 12:49:35.308696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:36.336 [2024-04-16 12:49:35.308913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:36.336 [2024-04-16 12:49:35.308943] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420
00:21:36.336 [2024-04-16 12:49:35.308975] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set
00:21:36.336 [2024-04-16 12:49:35.309225] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor
00:21:36.336 [2024-04-16 12:49:35.309472] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:36.336 [2024-04-16 12:49:35.309498] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:36.336 [2024-04-16 12:49:35.309516] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:36.336 [2024-04-16 12:49:35.313072] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:36.336 [2024-04-16 12:49:35.322080] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:36.336 [2024-04-16 12:49:35.322646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:36.336 [2024-04-16 12:49:35.322853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:36.336 [2024-04-16 12:49:35.322894] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420
00:21:36.336 [2024-04-16 12:49:35.322915] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set
00:21:36.336 [2024-04-16 12:49:35.323172] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor
00:21:36.336 [2024-04-16 12:49:35.323419] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:36.336 [2024-04-16 12:49:35.323445] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:36.336 [2024-04-16 12:49:35.323463] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:36.336 [2024-04-16 12:49:35.327022] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:36.336 [2024-04-16 12:49:35.336033] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:36.336 [2024-04-16 12:49:35.336617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:36.336 [2024-04-16 12:49:35.336837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:36.336 [2024-04-16 12:49:35.336877] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420
00:21:36.336 [2024-04-16 12:49:35.336896] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set
00:21:36.336 [2024-04-16 12:49:35.337123] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor
00:21:36.336 [2024-04-16 12:49:35.337338] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:36.336 [2024-04-16 12:49:35.337359] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:36.336 [2024-04-16 12:49:35.337375] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:36.336 [2024-04-16 12:49:35.340614] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:36.336 [2024-04-16 12:49:35.349734] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:36.336 [2024-04-16 12:49:35.350309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:36.336 [2024-04-16 12:49:35.350571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:36.336 [2024-04-16 12:49:35.350598] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420
00:21:36.336 [2024-04-16 12:49:35.350617] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set
00:21:36.336 [2024-04-16 12:49:35.350839] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor
00:21:36.336 [2024-04-16 12:49:35.351078] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:36.336 [2024-04-16 12:49:35.351100] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:36.336 [2024-04-16 12:49:35.351115] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:36.336 [2024-04-16 12:49:35.354346] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:36.336 [2024-04-16 12:49:35.363243] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:36.336 [2024-04-16 12:49:35.363739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:36.336 [2024-04-16 12:49:35.363920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:36.336 [2024-04-16 12:49:35.363945] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420
00:21:36.336 [2024-04-16 12:49:35.363963] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set
00:21:36.336 [2024-04-16 12:49:35.364180] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor
00:21:36.336 [2024-04-16 12:49:35.364392] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:36.336 [2024-04-16 12:49:35.364413] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:36.336 [2024-04-16 12:49:35.364428] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:36.336 [2024-04-16 12:49:35.367535] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:36.336 [2024-04-16 12:49:35.376851] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:36.336 [2024-04-16 12:49:35.377274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:36.336 [2024-04-16 12:49:35.377473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:36.336 [2024-04-16 12:49:35.377497] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420
00:21:36.336 [2024-04-16 12:49:35.377512] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set
00:21:36.336 [2024-04-16 12:49:35.377751] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor
00:21:36.336 [2024-04-16 12:49:35.377982] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:36.337 [2024-04-16 12:49:35.378003] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:36.337 [2024-04-16 12:49:35.378016] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:36.337 [2024-04-16 12:49:35.381156] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:36.337 [2024-04-16 12:49:35.390268] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:36.337 [2024-04-16 12:49:35.390687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:36.337 [2024-04-16 12:49:35.390874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:36.337 [2024-04-16 12:49:35.390899] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420
00:21:36.337 [2024-04-16 12:49:35.390914] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set
00:21:36.337 [2024-04-16 12:49:35.391121] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor
00:21:36.337 [2024-04-16 12:49:35.391331] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:36.337 [2024-04-16 12:49:35.391351] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:36.337 [2024-04-16 12:49:35.391364] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:36.337 [2024-04-16 12:49:35.394467] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:36.596 [2024-04-16 12:49:35.403939] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:36.596 [2024-04-16 12:49:35.404361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:36.596 [2024-04-16 12:49:35.404549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:36.596 [2024-04-16 12:49:35.404581] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420
00:21:36.596 [2024-04-16 12:49:35.404597] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set
00:21:36.596 [2024-04-16 12:49:35.404804] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor
00:21:36.596 [2024-04-16 12:49:35.405025] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:36.596 [2024-04-16 12:49:35.405045] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:36.596 [2024-04-16 12:49:35.405059] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:36.596 [2024-04-16 12:49:35.408375] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:36.596 [2024-04-16 12:49:35.417418] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:36.596 [2024-04-16 12:49:35.417845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:36.596 [2024-04-16 12:49:35.418095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:36.596 [2024-04-16 12:49:35.418119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420
00:21:36.596 [2024-04-16 12:49:35.418134] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set
00:21:36.597 [2024-04-16 12:49:35.418340] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor
00:21:36.597 [2024-04-16 12:49:35.418575] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:36.597 [2024-04-16 12:49:35.418596] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:36.597 [2024-04-16 12:49:35.418610] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:36.597 [2024-04-16 12:49:35.421793] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:36.597 [2024-04-16 12:49:35.430932] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:36.597 [2024-04-16 12:49:35.431366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:36.597 [2024-04-16 12:49:35.431502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:36.597 [2024-04-16 12:49:35.431542] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420
00:21:36.597 [2024-04-16 12:49:35.431558] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set
00:21:36.597 [2024-04-16 12:49:35.431795] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor
00:21:36.597 [2024-04-16 12:49:35.432023] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:36.597 [2024-04-16 12:49:35.432043] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:36.597 [2024-04-16 12:49:35.432056] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:36.597 [2024-04-16 12:49:35.435237] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:36.597 [2024-04-16 12:49:35.444423] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:36.597 [2024-04-16 12:49:35.444918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:36.597 [2024-04-16 12:49:35.445093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:36.597 [2024-04-16 12:49:35.445117] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420
00:21:36.597 [2024-04-16 12:49:35.445132] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set
00:21:36.597 [2024-04-16 12:49:35.445338] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor
00:21:36.597 [2024-04-16 12:49:35.445548] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:36.597 [2024-04-16 12:49:35.445576] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:36.597 [2024-04-16 12:49:35.445591] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:36.597 [2024-04-16 12:49:35.448845] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:36.597 [2024-04-16 12:49:35.457980] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:36.597 [2024-04-16 12:49:35.458407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:36.597 [2024-04-16 12:49:35.458605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:36.597 [2024-04-16 12:49:35.458631] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420
00:21:36.597 [2024-04-16 12:49:35.458646] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set
00:21:36.597 [2024-04-16 12:49:35.458852] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor
00:21:36.597 [2024-04-16 12:49:35.459062] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:36.597 [2024-04-16 12:49:35.459082] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:36.597 [2024-04-16 12:49:35.459096] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:36.597 [2024-04-16 12:49:35.462272] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:36.597 [2024-04-16 12:49:35.471414] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:36.597 [2024-04-16 12:49:35.471870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:36.597 [2024-04-16 12:49:35.472044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:36.597 [2024-04-16 12:49:35.472068] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:36.597 [2024-04-16 12:49:35.472089] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:36.597 [2024-04-16 12:49:35.472314] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:36.597 [2024-04-16 12:49:35.472546] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:36.597 [2024-04-16 12:49:35.472579] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:36.597 [2024-04-16 12:49:35.472608] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:36.597 [2024-04-16 12:49:35.475794] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:36.597 [2024-04-16 12:49:35.484935] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:36.597 [2024-04-16 12:49:35.485422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:36.597 [2024-04-16 12:49:35.485622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:36.597 [2024-04-16 12:49:35.485647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:36.597 [2024-04-16 12:49:35.485662] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:36.597 [2024-04-16 12:49:35.485868] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:36.597 [2024-04-16 12:49:35.486078] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:36.597 [2024-04-16 12:49:35.486098] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:36.597 [2024-04-16 12:49:35.486111] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:36.597 [2024-04-16 12:49:35.489286] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:36.597 [2024-04-16 12:49:35.498365] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:36.597 [2024-04-16 12:49:35.498786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:36.597 [2024-04-16 12:49:35.499006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:36.597 [2024-04-16 12:49:35.499030] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:36.597 [2024-04-16 12:49:35.499045] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:36.597 [2024-04-16 12:49:35.499251] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:36.597 [2024-04-16 12:49:35.499461] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:36.597 [2024-04-16 12:49:35.499481] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:36.597 [2024-04-16 12:49:35.499494] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:36.597 [2024-04-16 12:49:35.502657] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:36.597 [2024-04-16 12:49:35.511949] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:36.597 [2024-04-16 12:49:35.512374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:36.597 [2024-04-16 12:49:35.512560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:36.597 [2024-04-16 12:49:35.512591] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:36.597 [2024-04-16 12:49:35.512607] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:36.597 [2024-04-16 12:49:35.512825] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:36.597 [2024-04-16 12:49:35.513053] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:36.597 [2024-04-16 12:49:35.513073] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:36.597 [2024-04-16 12:49:35.513087] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:36.597 [2024-04-16 12:49:35.516224] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:36.597 [2024-04-16 12:49:35.525399] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:36.597 [2024-04-16 12:49:35.525792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:36.597 [2024-04-16 12:49:35.525916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:36.597 [2024-04-16 12:49:35.525941] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:36.597 [2024-04-16 12:49:35.525956] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:36.597 [2024-04-16 12:49:35.526162] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:36.597 [2024-04-16 12:49:35.526372] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:36.597 [2024-04-16 12:49:35.526393] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:36.597 [2024-04-16 12:49:35.526406] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:36.597 [2024-04-16 12:49:35.529465] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:36.597 [2024-04-16 12:49:35.538974] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:36.597 [2024-04-16 12:49:35.539461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:36.597 [2024-04-16 12:49:35.539619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:36.597 [2024-04-16 12:49:35.539645] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:36.597 [2024-04-16 12:49:35.539661] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:36.597 [2024-04-16 12:49:35.539888] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:36.598 [2024-04-16 12:49:35.540122] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:36.598 [2024-04-16 12:49:35.540143] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:36.598 [2024-04-16 12:49:35.540157] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:36.598 [2024-04-16 12:49:35.543357] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:36.598 [2024-04-16 12:49:35.552489] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:36.598 [2024-04-16 12:49:35.552940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:36.598 [2024-04-16 12:49:35.553187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:36.598 [2024-04-16 12:49:35.553212] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:36.598 [2024-04-16 12:49:35.553228] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:36.598 [2024-04-16 12:49:35.553441] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:36.598 [2024-04-16 12:49:35.553678] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:36.598 [2024-04-16 12:49:35.553706] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:36.598 [2024-04-16 12:49:35.553719] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:36.598 [2024-04-16 12:49:35.556940] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:36.598 [2024-04-16 12:49:35.566045] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:36.598 [2024-04-16 12:49:35.566533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:36.598 [2024-04-16 12:49:35.566689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:36.598 [2024-04-16 12:49:35.566714] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:36.598 [2024-04-16 12:49:35.566730] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:36.598 [2024-04-16 12:49:35.566943] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:36.598 [2024-04-16 12:49:35.567161] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:36.598 [2024-04-16 12:49:35.567182] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:36.598 [2024-04-16 12:49:35.567195] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:36.598 [2024-04-16 12:49:35.570411] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:36.598 [2024-04-16 12:49:35.579658] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:36.598 [2024-04-16 12:49:35.580070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:36.598 [2024-04-16 12:49:35.580293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:36.598 [2024-04-16 12:49:35.580318] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:36.598 [2024-04-16 12:49:35.580333] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:36.598 [2024-04-16 12:49:35.580546] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:36.598 [2024-04-16 12:49:35.580773] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:36.598 [2024-04-16 12:49:35.580795] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:36.598 [2024-04-16 12:49:35.580808] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:36.598 [2024-04-16 12:49:35.584039] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:36.598 [2024-04-16 12:49:35.593184] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:36.598 [2024-04-16 12:49:35.593666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:36.598 [2024-04-16 12:49:35.593847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:36.598 [2024-04-16 12:49:35.593872] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:36.598 [2024-04-16 12:49:35.593887] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:36.598 [2024-04-16 12:49:35.594093] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:36.598 [2024-04-16 12:49:35.594303] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:36.598 [2024-04-16 12:49:35.594329] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:36.598 [2024-04-16 12:49:35.594343] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:36.598 [2024-04-16 12:49:35.597532] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:36.598 [2024-04-16 12:49:35.606641] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:36.598 [2024-04-16 12:49:35.607030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:36.598 [2024-04-16 12:49:35.607204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:36.598 [2024-04-16 12:49:35.607229] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:36.598 [2024-04-16 12:49:35.607251] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:36.598 [2024-04-16 12:49:35.607457] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:36.598 [2024-04-16 12:49:35.607697] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:36.598 [2024-04-16 12:49:35.607720] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:36.598 [2024-04-16 12:49:35.607733] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:36.598 [2024-04-16 12:49:35.610892] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:36.598 [2024-04-16 12:49:35.620162] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:36.598 [2024-04-16 12:49:35.620655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:36.598 [2024-04-16 12:49:35.620825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:36.598 [2024-04-16 12:49:35.620850] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:36.598 [2024-04-16 12:49:35.620865] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:36.598 [2024-04-16 12:49:35.621079] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:36.598 [2024-04-16 12:49:35.621296] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:36.598 [2024-04-16 12:49:35.621317] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:36.598 [2024-04-16 12:49:35.621331] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:36.598 [2024-04-16 12:49:35.624534] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:36.598 [2024-04-16 12:49:35.633698] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:36.598 [2024-04-16 12:49:35.634110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:36.598 [2024-04-16 12:49:35.634360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:36.598 [2024-04-16 12:49:35.634385] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:36.598 [2024-04-16 12:49:35.634401] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:36.598 [2024-04-16 12:49:35.634624] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:36.598 [2024-04-16 12:49:35.634841] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:36.598 [2024-04-16 12:49:35.634862] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:36.598 [2024-04-16 12:49:35.634881] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:36.598 [2024-04-16 12:49:35.638126] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:36.598 [2024-04-16 12:49:35.647218] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:36.598 [2024-04-16 12:49:35.647636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:36.598 [2024-04-16 12:49:35.647789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:36.598 [2024-04-16 12:49:35.647814] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:36.598 [2024-04-16 12:49:35.647830] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:36.598 [2024-04-16 12:49:35.648052] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:36.598 [2024-04-16 12:49:35.648263] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:36.598 [2024-04-16 12:49:35.648283] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:36.598 [2024-04-16 12:49:35.648296] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:36.598 [2024-04-16 12:49:35.651328] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:36.598 [2024-04-16 12:49:35.660719] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:36.598 [2024-04-16 12:49:35.661204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:36.598 [2024-04-16 12:49:35.661437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:36.598 [2024-04-16 12:49:35.661462] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:36.598 [2024-04-16 12:49:35.661476] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:36.598 [2024-04-16 12:49:35.661715] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:36.598 [2024-04-16 12:49:35.661933] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:36.599 [2024-04-16 12:49:35.661954] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:36.599 [2024-04-16 12:49:35.661968] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:36.858 [2024-04-16 12:49:35.665271] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:36.858 [2024-04-16 12:49:35.674282] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:36.858 [2024-04-16 12:49:35.674760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:36.858 [2024-04-16 12:49:35.674927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:36.858 [2024-04-16 12:49:35.674952] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:36.858 [2024-04-16 12:49:35.674967] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:36.858 [2024-04-16 12:49:35.675173] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:36.858 [2024-04-16 12:49:35.675383] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:36.858 [2024-04-16 12:49:35.675403] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:36.858 [2024-04-16 12:49:35.675416] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:36.858 [2024-04-16 12:49:35.678557] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:36.858 [2024-04-16 12:49:35.687872] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:36.858 [2024-04-16 12:49:35.688318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:36.858 [2024-04-16 12:49:35.688496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:36.858 [2024-04-16 12:49:35.688521] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:36.858 [2024-04-16 12:49:35.688536] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:36.858 [2024-04-16 12:49:35.688773] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:36.858 [2024-04-16 12:49:35.689001] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:36.858 [2024-04-16 12:49:35.689022] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:36.858 [2024-04-16 12:49:35.689035] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:36.858 [2024-04-16 12:49:35.692172] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:36.858 [2024-04-16 12:49:35.701333] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:36.858 [2024-04-16 12:49:35.701785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:36.858 [2024-04-16 12:49:35.701940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:36.858 [2024-04-16 12:49:35.701964] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:36.858 [2024-04-16 12:49:35.701979] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:36.858 [2024-04-16 12:49:35.702191] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:36.858 [2024-04-16 12:49:35.702401] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:36.858 [2024-04-16 12:49:35.702421] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:36.858 [2024-04-16 12:49:35.702434] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:36.858 [2024-04-16 12:49:35.705533] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:36.858 [2024-04-16 12:49:35.714926] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:36.858 [2024-04-16 12:49:35.715401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:36.858 [2024-04-16 12:49:35.715625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:36.859 [2024-04-16 12:49:35.715651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:36.859 [2024-04-16 12:49:35.715666] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:36.859 [2024-04-16 12:49:35.715872] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:36.859 [2024-04-16 12:49:35.716082] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:36.859 [2024-04-16 12:49:35.716102] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:36.859 [2024-04-16 12:49:35.716115] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:36.859 [2024-04-16 12:49:35.719295] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:36.859 [2024-04-16 12:49:35.728427] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:36.859 [2024-04-16 12:49:35.728892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:36.859 [2024-04-16 12:49:35.729047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:36.859 [2024-04-16 12:49:35.729079] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:36.859 [2024-04-16 12:49:35.729094] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:36.859 [2024-04-16 12:49:35.729301] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:36.859 [2024-04-16 12:49:35.729510] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:36.859 [2024-04-16 12:49:35.729531] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:36.859 [2024-04-16 12:49:35.729544] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:36.859 [2024-04-16 12:49:35.732715] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:36.859 [2024-04-16 12:49:35.742066] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:36.859 [2024-04-16 12:49:35.742489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:36.859 [2024-04-16 12:49:35.742696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:36.859 [2024-04-16 12:49:35.742723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:36.859 [2024-04-16 12:49:35.742738] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:36.859 [2024-04-16 12:49:35.742965] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:36.859 [2024-04-16 12:49:35.743175] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:36.859 [2024-04-16 12:49:35.743195] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:36.859 [2024-04-16 12:49:35.743208] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:36.859 [2024-04-16 12:49:35.746348] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:36.859 [2024-04-16 12:49:35.755483] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:36.859 [2024-04-16 12:49:35.756005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:36.859 [2024-04-16 12:49:35.756179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:36.859 [2024-04-16 12:49:35.756204] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:36.859 [2024-04-16 12:49:35.756219] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:36.859 [2024-04-16 12:49:35.756425] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:36.859 [2024-04-16 12:49:35.756645] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:36.859 [2024-04-16 12:49:35.756666] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:36.859 [2024-04-16 12:49:35.756679] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:36.859 [2024-04-16 12:49:35.759797] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:36.859 [2024-04-16 12:49:35.769096] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:36.859 [2024-04-16 12:49:35.769520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:36.859 [2024-04-16 12:49:35.769727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:36.859 [2024-04-16 12:49:35.769753] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:36.859 [2024-04-16 12:49:35.769769] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:36.859 [2024-04-16 12:49:35.769996] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:36.859 [2024-04-16 12:49:35.770207] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:36.859 [2024-04-16 12:49:35.770227] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:36.859 [2024-04-16 12:49:35.770240] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:36.859 [2024-04-16 12:49:35.773474] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:36.859 [2024-04-16 12:49:35.782657] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:36.859 [2024-04-16 12:49:35.783150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:36.859 [2024-04-16 12:49:35.783295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:36.859 [2024-04-16 12:49:35.783319] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:36.859 [2024-04-16 12:49:35.783334] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:36.859 [2024-04-16 12:49:35.783540] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:36.859 [2024-04-16 12:49:35.783780] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:36.859 [2024-04-16 12:49:35.783802] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:36.859 [2024-04-16 12:49:35.783816] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:36.859 [2024-04-16 12:49:35.787011] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:36.859 [2024-04-16 12:49:35.796145] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:36.859 [2024-04-16 12:49:35.796614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:36.859 [2024-04-16 12:49:35.796776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:36.859 [2024-04-16 12:49:35.796800] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:36.859 [2024-04-16 12:49:35.796815] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:36.859 [2024-04-16 12:49:35.797041] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:36.859 [2024-04-16 12:49:35.797273] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:36.859 [2024-04-16 12:49:35.797294] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:36.859 [2024-04-16 12:49:35.797307] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:36.859 [2024-04-16 12:49:35.800491] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:36.859 [2024-04-16 12:49:35.809613] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:36.859 [2024-04-16 12:49:35.810037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:36.859 [2024-04-16 12:49:35.810192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:36.859 [2024-04-16 12:49:35.810217] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:36.859 [2024-04-16 12:49:35.810239] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:36.859 [2024-04-16 12:49:35.810453] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:36.859 [2024-04-16 12:49:35.810688] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:36.859 [2024-04-16 12:49:35.810710] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:36.859 [2024-04-16 12:49:35.810724] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:36.859 [2024-04-16 12:49:35.813942] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:36.859 [2024-04-16 12:49:35.823254] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:36.859 [2024-04-16 12:49:35.823740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:36.859 [2024-04-16 12:49:35.823885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:36.859 [2024-04-16 12:49:35.823909] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:36.859 [2024-04-16 12:49:35.823924] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:36.859 [2024-04-16 12:49:35.824131] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:36.859 [2024-04-16 12:49:35.824341] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:36.859 [2024-04-16 12:49:35.824361] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:36.859 [2024-04-16 12:49:35.824374] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:36.859 [2024-04-16 12:49:35.827515] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:36.859 [2024-04-16 12:49:35.836778] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:36.859 [2024-04-16 12:49:35.837216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:36.859 [2024-04-16 12:49:35.837476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:36.859 [2024-04-16 12:49:35.837500] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:36.859 [2024-04-16 12:49:35.837515] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:36.859 [2024-04-16 12:49:35.837761] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:36.859 [2024-04-16 12:49:35.837992] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:36.860 [2024-04-16 12:49:35.838013] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:36.860 [2024-04-16 12:49:35.838026] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:36.860 [2024-04-16 12:49:35.841196] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:36.860 [2024-04-16 12:49:35.850333] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:36.860 [2024-04-16 12:49:35.850774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:36.860 [2024-04-16 12:49:35.850982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:36.860 [2024-04-16 12:49:35.851006] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:36.860 [2024-04-16 12:49:35.851022] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:36.860 [2024-04-16 12:49:35.851241] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:36.860 [2024-04-16 12:49:35.851451] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:36.860 [2024-04-16 12:49:35.851472] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:36.860 [2024-04-16 12:49:35.851484] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:36.860 [2024-04-16 12:49:35.854651] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:36.860 [2024-04-16 12:49:35.863753] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:36.860 [2024-04-16 12:49:35.864110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:36.860 [2024-04-16 12:49:35.864296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:36.860 [2024-04-16 12:49:35.864320] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:36.860 [2024-04-16 12:49:35.864335] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:36.860 [2024-04-16 12:49:35.864541] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:36.860 [2024-04-16 12:49:35.864783] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:36.860 [2024-04-16 12:49:35.864804] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:36.860 [2024-04-16 12:49:35.864817] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:36.860 [2024-04-16 12:49:35.867975] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:36.860 [2024-04-16 12:49:35.877209] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:36.860 [2024-04-16 12:49:35.877625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:36.860 [2024-04-16 12:49:35.877831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:36.860 [2024-04-16 12:49:35.877855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:36.860 [2024-04-16 12:49:35.877870] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:36.860 [2024-04-16 12:49:35.878077] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:36.860 [2024-04-16 12:49:35.878287] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:36.860 [2024-04-16 12:49:35.878308] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:36.860 [2024-04-16 12:49:35.878321] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:36.860 [2024-04-16 12:49:35.881460] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:36.860 [2024-04-16 12:49:35.890781] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:36.860 [2024-04-16 12:49:35.891190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:36.860 [2024-04-16 12:49:35.891360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:36.860 [2024-04-16 12:49:35.891384] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:36.860 [2024-04-16 12:49:35.891399] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:36.860 [2024-04-16 12:49:35.891615] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:36.860 [2024-04-16 12:49:35.891831] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:36.860 [2024-04-16 12:49:35.891852] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:36.860 [2024-04-16 12:49:35.891865] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:36.860 [2024-04-16 12:49:35.895043] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:36.860 [2024-04-16 12:49:35.904212] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:36.860 [2024-04-16 12:49:35.904696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:36.860 [2024-04-16 12:49:35.904842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:36.860 [2024-04-16 12:49:35.904882] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:36.860 [2024-04-16 12:49:35.904898] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:36.860 [2024-04-16 12:49:35.905104] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:36.860 [2024-04-16 12:49:35.905325] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:36.860 [2024-04-16 12:49:35.905345] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:36.860 [2024-04-16 12:49:35.905358] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:36.860 [2024-04-16 12:49:35.908463] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:36.860 [2024-04-16 12:49:35.917621] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:36.860 [2024-04-16 12:49:35.918032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:36.860 [2024-04-16 12:49:35.918191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:36.860 [2024-04-16 12:49:35.918216] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:36.860 [2024-04-16 12:49:35.918231] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:36.860 [2024-04-16 12:49:35.918437] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:36.860 [2024-04-16 12:49:35.918677] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:36.860 [2024-04-16 12:49:35.918699] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:36.860 [2024-04-16 12:49:35.918712] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:36.860 [2024-04-16 12:49:35.921883] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:37.121 [2024-04-16 12:49:35.931121] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:37.121 [2024-04-16 12:49:35.931514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:37.121 [2024-04-16 12:49:35.931704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:37.121 [2024-04-16 12:49:35.931730] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:37.121 [2024-04-16 12:49:35.931745] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:37.121 [2024-04-16 12:49:35.931960] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:37.121 [2024-04-16 12:49:35.932177] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:37.121 [2024-04-16 12:49:35.932206] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:37.121 [2024-04-16 12:49:35.932221] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:37.121 [2024-04-16 12:49:35.935436] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:37.121 [2024-04-16 12:49:35.944674] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:37.121 [2024-04-16 12:49:35.945123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:37.121 [2024-04-16 12:49:35.945278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:37.121 [2024-04-16 12:49:35.945302] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:37.121 [2024-04-16 12:49:35.945318] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:37.121 [2024-04-16 12:49:35.945524] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:37.121 [2024-04-16 12:49:35.945765] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:37.121 [2024-04-16 12:49:35.945787] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:37.121 [2024-04-16 12:49:35.945800] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:37.121 [2024-04-16 12:49:35.949034] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:37.121 [2024-04-16 12:49:35.958160] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:37.121 [2024-04-16 12:49:35.958581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:37.121 [2024-04-16 12:49:35.958778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:37.121 [2024-04-16 12:49:35.958803] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:37.121 [2024-04-16 12:49:35.958819] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:37.121 [2024-04-16 12:49:35.959041] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:37.121 [2024-04-16 12:49:35.959261] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:37.121 [2024-04-16 12:49:35.959281] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:37.121 [2024-04-16 12:49:35.959294] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:37.121 [2024-04-16 12:49:35.962399] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:37.121 [2024-04-16 12:49:35.971535] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:37.121 [2024-04-16 12:49:35.971969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:37.121 [2024-04-16 12:49:35.972121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:37.121 [2024-04-16 12:49:35.972146] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:37.121 [2024-04-16 12:49:35.972161] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:37.121 [2024-04-16 12:49:35.972367] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:37.121 [2024-04-16 12:49:35.972604] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:37.121 [2024-04-16 12:49:35.972626] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:37.121 [2024-04-16 12:49:35.972645] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:37.121 [2024-04-16 12:49:35.975852] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:37.121 [2024-04-16 12:49:35.984985] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:37.121 [2024-04-16 12:49:35.985466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:37.121 [2024-04-16 12:49:35.985642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:37.121 [2024-04-16 12:49:35.985669] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:37.121 [2024-04-16 12:49:35.985684] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:37.121 [2024-04-16 12:49:35.985916] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:37.121 [2024-04-16 12:49:35.986127] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:37.121 [2024-04-16 12:49:35.986147] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:37.121 [2024-04-16 12:49:35.986160] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:37.121 [2024-04-16 12:49:35.989336] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:37.121 [2024-04-16 12:49:35.998460] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:37.121 [2024-04-16 12:49:35.998892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:37.121 [2024-04-16 12:49:35.999070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:37.121 [2024-04-16 12:49:35.999095] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:37.121 [2024-04-16 12:49:35.999111] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:37.121 [2024-04-16 12:49:35.999324] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:37.121 [2024-04-16 12:49:35.999541] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:37.121 [2024-04-16 12:49:35.999570] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:37.121 [2024-04-16 12:49:35.999585] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:37.121 [2024-04-16 12:49:36.002806] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:37.121 [2024-04-16 12:49:36.012126] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:37.121 [2024-04-16 12:49:36.012620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:37.121 [2024-04-16 12:49:36.012798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:37.121 [2024-04-16 12:49:36.012823] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:37.121 [2024-04-16 12:49:36.012839] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:37.121 [2024-04-16 12:49:36.013053] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:37.121 [2024-04-16 12:49:36.013270] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:37.121 [2024-04-16 12:49:36.013291] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:37.121 [2024-04-16 12:49:36.013304] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:37.121 12:49:36 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:37.121 12:49:36 -- common/autotest_common.sh@850 -- # return 0 00:21:37.121 12:49:36 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:37.121 12:49:36 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:37.121 12:49:36 -- common/autotest_common.sh@10 -- # set +x 00:21:37.121 [2024-04-16 12:49:36.016628] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:37.121 [2024-04-16 12:49:36.025654] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:37.121 [2024-04-16 12:49:36.026103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:37.121 [2024-04-16 12:49:36.026289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:37.121 [2024-04-16 12:49:36.026314] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:37.121 [2024-04-16 12:49:36.026329] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:37.121 [2024-04-16 12:49:36.026535] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:37.121 [2024-04-16 12:49:36.026776] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:37.121 [2024-04-16 12:49:36.026798] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:37.121 [2024-04-16 12:49:36.026812] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:37.122 [2024-04-16 12:49:36.029971] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:37.122 12:49:36 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:37.122 12:49:36 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:37.122 12:49:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:37.122 12:49:36 -- common/autotest_common.sh@10 -- # set +x 00:21:37.122 [2024-04-16 12:49:36.034894] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:37.122 [2024-04-16 12:49:36.039157] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:37.122 [2024-04-16 12:49:36.039661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:37.122 [2024-04-16 12:49:36.039801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:37.122 [2024-04-16 12:49:36.039827] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:37.122 [2024-04-16 12:49:36.039842] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:37.122 [2024-04-16 12:49:36.040084] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:37.122 [2024-04-16 12:49:36.040301] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:37.122 [2024-04-16 12:49:36.040322] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:37.122 [2024-04-16 12:49:36.040336] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:37.122 12:49:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:37.122 12:49:36 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:37.122 12:49:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:37.122 12:49:36 -- common/autotest_common.sh@10 -- # set +x 00:21:37.122 [2024-04-16 12:49:36.043616] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:37.122 [2024-04-16 12:49:36.052712] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:37.122 [2024-04-16 12:49:36.053145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:37.122 [2024-04-16 12:49:36.053332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:37.122 [2024-04-16 12:49:36.053361] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:37.122 [2024-04-16 12:49:36.053376] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:37.122 [2024-04-16 12:49:36.053603] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:37.122 [2024-04-16 12:49:36.053821] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:37.122 [2024-04-16 12:49:36.053842] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:37.122 [2024-04-16 12:49:36.053871] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
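The rpc_cmd calls interleaved above are the harness wrapper around SPDK's JSON-RPC client; the same two calls can be issued directly against a running target. A sketch assuming an spdk checkout and the default /var/tmp/spdk.sock RPC socket:

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192   # -u 8192: 8 KiB I/O unit size; -o is a TCP-specific toggle the harness always passes
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0      # 64 MiB RAM-backed bdev with 512-byte blocks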
00:21:37.122 [2024-04-16 12:49:36.057066] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:37.122 [2024-04-16 12:49:36.066133] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:37.122 [2024-04-16 12:49:36.066658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:37.122 [2024-04-16 12:49:36.066860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:37.122 [2024-04-16 12:49:36.066886] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:37.122 [2024-04-16 12:49:36.066905] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:37.122 [2024-04-16 12:49:36.067126] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:37.122 [2024-04-16 12:49:36.067347] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:37.122 [2024-04-16 12:49:36.067369] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:37.122 [2024-04-16 12:49:36.067384] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:37.122 [2024-04-16 12:49:36.070634] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:37.122 Malloc0 00:21:37.122 12:49:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:37.122 12:49:36 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:37.122 12:49:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:37.122 12:49:36 -- common/autotest_common.sh@10 -- # set +x 00:21:37.122 [2024-04-16 12:49:36.079841] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:37.122 [2024-04-16 12:49:36.080316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:37.122 [2024-04-16 12:49:36.080524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:37.122 [2024-04-16 12:49:36.080583] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:37.122 [2024-04-16 12:49:36.080602] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:37.122 [2024-04-16 12:49:36.080828] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:37.122 [2024-04-16 12:49:36.081057] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:37.122 [2024-04-16 12:49:36.081078] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:37.122 [2024-04-16 12:49:36.081092] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:37.122 [2024-04-16 12:49:36.084315] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:37.122 12:49:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:37.122 12:49:36 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:37.122 12:49:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:37.122 12:49:36 -- common/autotest_common.sh@10 -- # set +x 00:21:37.122 12:49:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:37.122 12:49:36 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:37.122 12:49:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:37.122 12:49:36 -- common/autotest_common.sh@10 -- # set +x 00:21:37.122 [2024-04-16 12:49:36.093349] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:37.122 [2024-04-16 12:49:36.093825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:37.122 [2024-04-16 12:49:36.094036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:37.122 [2024-04-16 12:49:36.094062] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x930f00 with addr=10.0.0.2, port=4420 00:21:37.122 [2024-04-16 12:49:36.094078] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x930f00 is same with the state(5) to be set 00:21:37.122 [2024-04-16 12:49:36.094297] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x930f00 (9): Bad file descriptor 00:21:37.122 [2024-04-16 12:49:36.094525] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:37.122 [2024-04-16 12:49:36.094545] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:37.122 [2024-04-16 12:49:36.094583] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:37.122 [2024-04-16 12:49:36.096539] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:37.122 [2024-04-16 12:49:36.097805] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:37.122 12:49:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:37.122 12:49:36 -- host/bdevperf.sh@38 -- # wait 1265527 00:21:37.122 [2024-04-16 12:49:36.106833] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:37.122 [2024-04-16 12:49:36.183923] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
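With the subsystem, namespace and listener created above, the target side is complete: Malloc0 is exposed through nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420, and the waited-on bdevperf process (pid 1265527) finally sees a reset succeed once the listener is up. Outside bdevperf, a stock kernel initiator could exercise the same listener; a sketch using plain nvme-cli:

  modprobe nvme-tcp
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  nvme list    # the Malloc0 namespace should enumerate as a 64 MiB device
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1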
00:21:47.097
00:21:47.097 Latency(us)
00:21:47.097 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:47.097 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:21:47.097 Verification LBA range: start 0x0 length 0x4000
00:21:47.097 Nvme1n1 : 15.01 6236.18 24.36 10240.58 0.00 7743.16 825.27 25049.32
00:21:47.097 ===================================================================================================================
00:21:47.097 Total : 6236.18 24.36 10240.58 0.00 7743.16 825.27 25049.32
00:21:47.097 12:49:44 -- host/bdevperf.sh@39 -- # sync
00:21:47.097 12:49:44 -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:21:47.097 12:49:44 -- common/autotest_common.sh@549 -- # xtrace_disable
00:21:47.097 12:49:44 -- common/autotest_common.sh@10 -- # set +x
00:21:47.097 12:49:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:21:47.097 12:49:44 -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
00:21:47.097 12:49:44 -- host/bdevperf.sh@44 -- # nvmftestfini
00:21:47.097 12:49:44 -- nvmf/common.sh@477 -- # nvmfcleanup
00:21:47.097 12:49:44 -- nvmf/common.sh@117 -- # sync
00:21:47.097 12:49:44 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:21:47.097 12:49:44 -- nvmf/common.sh@120 -- # set +e
00:21:47.097 12:49:44 -- nvmf/common.sh@121 -- # for i in {1..20}
00:21:47.097 12:49:44 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:21:47.097 rmmod nvme_tcp
00:21:47.097 rmmod nvme_fabrics
00:21:47.097 rmmod nvme_keyring
00:21:47.097 12:49:44 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:21:47.097 12:49:44 -- nvmf/common.sh@124 -- # set -e
00:21:47.097 12:49:44 -- nvmf/common.sh@125 -- # return 0
00:21:47.097 12:49:44 -- nvmf/common.sh@478 -- # '[' -n 1266194 ']'
00:21:47.097 12:49:44 -- nvmf/common.sh@479 -- # killprocess 1266194
00:21:47.097 12:49:44 -- common/autotest_common.sh@936 -- # '[' -z 1266194 ']'
00:21:47.097 12:49:44 -- common/autotest_common.sh@940 -- # kill -0 1266194
00:21:47.097 12:49:44 -- common/autotest_common.sh@941 -- # uname
00:21:47.097 12:49:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:21:47.097 12:49:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1266194
00:21:47.097 12:49:44 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:21:47.097 12:49:44 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:21:47.097 12:49:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1266194'
00:21:47.097 killing process with pid 1266194
00:21:47.097 12:49:44 -- common/autotest_common.sh@955 -- # kill 1266194
00:21:47.097 12:49:44 -- common/autotest_common.sh@960 -- # wait 1266194
00:21:47.097 12:49:45 -- nvmf/common.sh@481 -- # '[' '' == iso ']'
00:21:47.097 12:49:45 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]]
00:21:47.097 12:49:45 -- nvmf/common.sh@485 -- # nvmf_tcp_fini
00:21:47.097 12:49:45 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:21:47.097 12:49:45 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:21:47.097 12:49:45 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:21:47.097 12:49:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:21:47.097 12:49:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:21:48.506 12:49:47 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:21:48.506
00:21:48.506 real 0m23.917s
00:21:48.506 user 1m3.132s
00:21:48.506 sys 0m4.863s
00:21:48.507 12:49:47 --
common/autotest_common.sh@1112 -- # xtrace_disable 00:21:48.507 12:49:47 -- common/autotest_common.sh@10 -- # set +x 00:21:48.507 ************************************ 00:21:48.507 END TEST nvmf_bdevperf 00:21:48.507 ************************************ 00:21:48.507 12:49:47 -- nvmf/nvmf.sh@120 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:21:48.507 12:49:47 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:48.507 12:49:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:48.507 12:49:47 -- common/autotest_common.sh@10 -- # set +x 00:21:48.507 ************************************ 00:21:48.507 START TEST nvmf_target_disconnect 00:21:48.507 ************************************ 00:21:48.507 12:49:47 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:21:48.507 * Looking for test storage... 00:21:48.507 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:48.507 12:49:47 -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:48.507 12:49:47 -- nvmf/common.sh@7 -- # uname -s 00:21:48.507 12:49:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:48.507 12:49:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:48.507 12:49:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:48.507 12:49:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:48.507 12:49:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:48.507 12:49:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:48.507 12:49:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:48.507 12:49:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:48.507 12:49:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:48.507 12:49:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:48.507 12:49:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:21:48.507 12:49:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:21:48.507 12:49:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:48.507 12:49:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:48.507 12:49:47 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:48.507 12:49:47 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:48.507 12:49:47 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:48.507 12:49:47 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:48.507 12:49:47 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:48.507 12:49:47 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:48.507 12:49:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:48.507 12:49:47 -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:48.507 12:49:47 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:48.507 12:49:47 -- paths/export.sh@5 -- # export PATH 00:21:48.507 12:49:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:48.507 12:49:47 -- nvmf/common.sh@47 -- # : 0 00:21:48.507 12:49:47 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:48.507 12:49:47 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:48.507 12:49:47 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:48.507 12:49:47 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:48.507 12:49:47 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:48.507 12:49:47 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:48.507 12:49:47 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:48.507 12:49:47 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:48.507 12:49:47 -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:21:48.507 12:49:47 -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:21:48.507 12:49:47 -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:21:48.507 12:49:47 -- host/target_disconnect.sh@77 -- # nvmftestinit 00:21:48.507 12:49:47 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:48.507 12:49:47 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:48.507 12:49:47 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:48.507 12:49:47 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:48.507 12:49:47 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:48.507 12:49:47 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:48.507 12:49:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:48.507 12:49:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:48.507 12:49:47 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:21:48.507 12:49:47 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:21:48.507 12:49:47 -- 
nvmf/common.sh@285 -- # xtrace_disable 00:21:48.507 12:49:47 -- common/autotest_common.sh@10 -- # set +x 00:21:51.036 12:49:49 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:51.036 12:49:49 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:51.036 12:49:49 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:51.036 12:49:49 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:51.036 12:49:49 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:51.036 12:49:49 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:51.036 12:49:49 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:51.036 12:49:49 -- nvmf/common.sh@295 -- # net_devs=() 00:21:51.036 12:49:49 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:51.036 12:49:49 -- nvmf/common.sh@296 -- # e810=() 00:21:51.036 12:49:49 -- nvmf/common.sh@296 -- # local -ga e810 00:21:51.036 12:49:49 -- nvmf/common.sh@297 -- # x722=() 00:21:51.036 12:49:49 -- nvmf/common.sh@297 -- # local -ga x722 00:21:51.036 12:49:49 -- nvmf/common.sh@298 -- # mlx=() 00:21:51.036 12:49:49 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:51.036 12:49:49 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:51.036 12:49:49 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:51.036 12:49:49 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:51.036 12:49:49 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:51.036 12:49:49 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:51.036 12:49:49 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:51.036 12:49:49 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:51.036 12:49:49 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:51.036 12:49:49 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:51.036 12:49:49 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:51.036 12:49:49 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:51.036 12:49:49 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:51.036 12:49:49 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:51.036 12:49:49 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:51.036 12:49:49 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:51.036 12:49:49 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:51.036 12:49:49 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:51.036 12:49:49 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:51.036 12:49:49 -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:21:51.036 Found 0000:82:00.0 (0x8086 - 0x159b) 00:21:51.036 12:49:49 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:51.036 12:49:49 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:51.036 12:49:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:51.036 12:49:49 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:51.036 12:49:49 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:51.036 12:49:49 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:51.036 12:49:49 -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:21:51.036 Found 0000:82:00.1 (0x8086 - 0x159b) 00:21:51.036 12:49:49 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:51.036 12:49:49 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:51.036 12:49:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:51.036 12:49:49 
-- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:51.036 12:49:49 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:51.036 12:49:49 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:51.036 12:49:49 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:51.036 12:49:49 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:51.036 12:49:49 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:51.036 12:49:49 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:51.036 12:49:49 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:51.036 12:49:49 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:51.036 12:49:49 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:21:51.036 Found net devices under 0000:82:00.0: cvl_0_0 00:21:51.036 12:49:49 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:51.036 12:49:49 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:51.036 12:49:49 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:51.036 12:49:49 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:51.036 12:49:49 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:51.036 12:49:49 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:21:51.036 Found net devices under 0000:82:00.1: cvl_0_1 00:21:51.036 12:49:49 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:51.036 12:49:49 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:21:51.036 12:49:49 -- nvmf/common.sh@403 -- # is_hw=yes 00:21:51.036 12:49:49 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:21:51.036 12:49:49 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:21:51.036 12:49:49 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:21:51.036 12:49:49 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:51.036 12:49:49 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:51.036 12:49:49 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:51.036 12:49:49 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:51.036 12:49:49 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:51.036 12:49:49 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:51.036 12:49:49 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:51.036 12:49:49 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:51.036 12:49:49 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:51.036 12:49:49 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:51.036 12:49:49 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:51.036 12:49:49 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:51.036 12:49:49 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:51.036 12:49:50 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:51.036 12:49:50 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:51.036 12:49:50 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:51.036 12:49:50 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:51.036 12:49:50 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:51.036 12:49:50 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:51.036 12:49:50 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:51.036 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:51.036 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.133 ms 00:21:51.036 00:21:51.036 --- 10.0.0.2 ping statistics --- 00:21:51.036 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:51.036 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:21:51.036 12:49:50 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:51.036 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:51.036 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:21:51.036 00:21:51.036 --- 10.0.0.1 ping statistics --- 00:21:51.036 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:51.036 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:21:51.036 12:49:50 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:51.036 12:49:50 -- nvmf/common.sh@411 -- # return 0 00:21:51.036 12:49:50 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:51.036 12:49:50 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:51.036 12:49:50 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:51.036 12:49:50 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:51.036 12:49:50 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:51.036 12:49:50 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:51.036 12:49:50 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:51.036 12:49:50 -- host/target_disconnect.sh@78 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:21:51.036 12:49:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:21:51.036 12:49:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:51.036 12:49:50 -- common/autotest_common.sh@10 -- # set +x 00:21:51.295 ************************************ 00:21:51.295 START TEST nvmf_target_disconnect_tc1 00:21:51.295 ************************************ 00:21:51.295 12:49:50 -- common/autotest_common.sh@1111 -- # nvmf_target_disconnect_tc1 00:21:51.295 12:49:50 -- host/target_disconnect.sh@32 -- # set +e 00:21:51.295 12:49:50 -- host/target_disconnect.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:51.295 EAL: No free 2048 kB hugepages reported on node 1 00:21:51.295 [2024-04-16 12:49:50.289644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:51.295 [2024-04-16 12:49:50.289834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:51.295 [2024-04-16 12:49:50.289864] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x162dff0 with addr=10.0.0.2, port=4420 00:21:51.295 [2024-04-16 12:49:50.289908] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:21:51.295 [2024-04-16 12:49:50.289932] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:21:51.295 [2024-04-16 12:49:50.289952] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:21:51.295 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:21:51.295 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:21:51.295 Initializing NVMe Controllers 00:21:51.295 12:49:50 -- host/target_disconnect.sh@33 -- # trap - ERR 00:21:51.295 12:49:50 -- host/target_disconnect.sh@33 -- # print_backtrace 00:21:51.295 12:49:50 -- common/autotest_common.sh@1139 -- # [[ hxBET =~ e ]] 00:21:51.295 12:49:50 -- common/autotest_common.sh@1139 -- # return 0 00:21:51.296 
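tc1 exercises the expected-failure path: the reconnect example is pointed at 10.0.0.2:4420 before any target is listening, so spdk_nvme_probe() fails on the refused connection, and the script clears its ERR trap and returns success. The -r argument is SPDK's textual transport ID, the same key:value grammar used throughout these tests; reassembled from the wrapped invocation above:

  build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'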
12:49:50 -- host/target_disconnect.sh@37 -- # '[' 1 '!=' 1 ']' 00:21:51.296 12:49:50 -- host/target_disconnect.sh@41 -- # set -e 00:21:51.296 00:21:51.296 real 0m0.105s 00:21:51.296 user 0m0.038s 00:21:51.296 sys 0m0.066s 00:21:51.296 12:49:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:51.296 12:49:50 -- common/autotest_common.sh@10 -- # set +x 00:21:51.296 ************************************ 00:21:51.296 END TEST nvmf_target_disconnect_tc1 00:21:51.296 ************************************ 00:21:51.296 12:49:50 -- host/target_disconnect.sh@79 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:21:51.296 12:49:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:21:51.296 12:49:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:51.296 12:49:50 -- common/autotest_common.sh@10 -- # set +x 00:21:51.554 ************************************ 00:21:51.554 START TEST nvmf_target_disconnect_tc2 00:21:51.554 ************************************ 00:21:51.554 12:49:50 -- common/autotest_common.sh@1111 -- # nvmf_target_disconnect_tc2 00:21:51.554 12:49:50 -- host/target_disconnect.sh@45 -- # disconnect_init 10.0.0.2 00:21:51.554 12:49:50 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:21:51.554 12:49:50 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:51.554 12:49:50 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:51.554 12:49:50 -- common/autotest_common.sh@10 -- # set +x 00:21:51.554 12:49:50 -- nvmf/common.sh@470 -- # nvmfpid=1269778 00:21:51.555 12:49:50 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:21:51.555 12:49:50 -- nvmf/common.sh@471 -- # waitforlisten 1269778 00:21:51.555 12:49:50 -- common/autotest_common.sh@817 -- # '[' -z 1269778 ']' 00:21:51.555 12:49:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:51.555 12:49:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:51.555 12:49:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:51.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:51.555 12:49:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:51.555 12:49:50 -- common/autotest_common.sh@10 -- # set +x 00:21:51.555 [2024-04-16 12:49:50.471035] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 00:21:51.555 [2024-04-16 12:49:50.471138] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:51.555 EAL: No free 2048 kB hugepages reported on node 1 00:21:51.555 [2024-04-16 12:49:50.547726] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:51.813 [2024-04-16 12:49:50.657438] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:51.813 [2024-04-16 12:49:50.657497] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:51.813 [2024-04-16 12:49:50.657526] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:51.813 [2024-04-16 12:49:50.657537] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:21:51.813 [2024-04-16 12:49:50.657547] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:51.813 [2024-04-16 12:49:50.657937] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:21:51.813 [2024-04-16 12:49:50.657997] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:21:51.813 [2024-04-16 12:49:50.658063] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:21:51.813 [2024-04-16 12:49:50.658066] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:21:52.746 12:49:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:52.746 12:49:51 -- common/autotest_common.sh@850 -- # return 0 00:21:52.746 12:49:51 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:52.746 12:49:51 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:52.746 12:49:51 -- common/autotest_common.sh@10 -- # set +x 00:21:52.746 12:49:51 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:52.746 12:49:51 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:52.746 12:49:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:52.746 12:49:51 -- common/autotest_common.sh@10 -- # set +x 00:21:52.746 Malloc0 00:21:52.746 12:49:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:52.746 12:49:51 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:21:52.746 12:49:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:52.746 12:49:51 -- common/autotest_common.sh@10 -- # set +x 00:21:52.746 [2024-04-16 12:49:51.520023] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:52.746 12:49:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:52.746 12:49:51 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:52.746 12:49:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:52.746 12:49:51 -- common/autotest_common.sh@10 -- # set +x 00:21:52.746 12:49:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:52.746 12:49:51 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:52.746 12:49:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:52.746 12:49:51 -- common/autotest_common.sh@10 -- # set +x 00:21:52.746 12:49:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:52.746 12:49:51 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:52.746 12:49:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:52.746 12:49:51 -- common/autotest_common.sh@10 -- # set +x 00:21:52.746 [2024-04-16 12:49:51.548285] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:52.746 12:49:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:52.746 12:49:51 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:52.746 12:49:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:52.746 12:49:51 -- common/autotest_common.sh@10 -- # set +x 00:21:52.746 12:49:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:52.746 12:49:51 -- host/target_disconnect.sh@50 -- # reconnectpid=1269928 00:21:52.746 12:49:51 -- host/target_disconnect.sh@52 -- # sleep 2 00:21:52.746 12:49:51 -- host/target_disconnect.sh@48 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:52.746 EAL: No free 2048 kB hugepages reported on node 1 00:21:54.654 12:49:53 -- host/target_disconnect.sh@53 -- # kill -9 1269778 00:21:54.654 12:49:53 -- host/target_disconnect.sh@55 -- # sleep 2 00:21:54.654 Read completed with error (sct=0, sc=8) 00:21:54.654 starting I/O failed 00:21:54.654 Read completed with error (sct=0, sc=8) 00:21:54.654 starting I/O failed 00:21:54.654 Read completed with error (sct=0, sc=8) 00:21:54.654 starting I/O failed 00:21:54.654 Read completed with error (sct=0, sc=8) 00:21:54.654 starting I/O failed 00:21:54.654 Read completed with error (sct=0, sc=8) 00:21:54.654 starting I/O failed 00:21:54.654 Read completed with error (sct=0, sc=8) 00:21:54.654 starting I/O failed 00:21:54.654 Read completed with error (sct=0, sc=8) 00:21:54.654 starting I/O failed 00:21:54.654 Read completed with error (sct=0, sc=8) 00:21:54.654 starting I/O failed 00:21:54.654 Read completed with error (sct=0, sc=8) 00:21:54.654 starting I/O failed 00:21:54.654 Read completed with error (sct=0, sc=8) 00:21:54.654 starting I/O failed 00:21:54.654 Write completed with error (sct=0, sc=8) 00:21:54.654 starting I/O failed 00:21:54.654 Read completed with error (sct=0, sc=8) 00:21:54.654 starting I/O failed 00:21:54.654 Write completed with error (sct=0, sc=8) 00:21:54.654 starting I/O failed 00:21:54.654 Write completed with error (sct=0, sc=8) 00:21:54.654 starting I/O failed 00:21:54.654 Write completed with error (sct=0, sc=8) 00:21:54.654 starting I/O failed 00:21:54.654 Write completed with error (sct=0, sc=8) 00:21:54.654 starting I/O failed 00:21:54.654 Read completed with error (sct=0, sc=8) 00:21:54.654 starting I/O failed 00:21:54.654 Write completed with error (sct=0, sc=8) 00:21:54.654 starting I/O failed 00:21:54.654 Write completed with error (sct=0, sc=8) 00:21:54.654 starting I/O failed 00:21:54.654 Read completed with error (sct=0, sc=8) 00:21:54.654 starting I/O failed 00:21:54.654 Read completed with error (sct=0, sc=8) 00:21:54.654 starting I/O failed 00:21:54.654 Write completed with error (sct=0, sc=8) 00:21:54.654 starting I/O failed 00:21:54.654 Read completed with error (sct=0, sc=8) 00:21:54.654 starting I/O failed 00:21:54.654 Read completed with error (sct=0, sc=8) 00:21:54.654 starting I/O failed 00:21:54.654 Read completed with error (sct=0, sc=8) 00:21:54.654 starting I/O failed 00:21:54.654 Read completed with error (sct=0, sc=8) 00:21:54.654 starting I/O failed 00:21:54.654 Read completed with error (sct=0, sc=8) 00:21:54.654 starting I/O failed 00:21:54.654 Write completed with error (sct=0, sc=8) 00:21:54.654 starting I/O failed 00:21:54.654 Read completed with error (sct=0, sc=8) 00:21:54.654 starting I/O failed 00:21:54.654 Read completed with error (sct=0, sc=8) 00:21:54.654 starting I/O failed 00:21:54.654 Read completed with error (sct=0, sc=8) 00:21:54.654 starting I/O failed 00:21:54.654 Write completed with error (sct=0, sc=8) 00:21:54.654 starting I/O failed 00:21:54.654 Read completed with error (sct=0, sc=8) 00:21:54.654 starting I/O failed 00:21:54.654 [2024-04-16 12:49:53.573369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:54.654 Read completed with error (sct=0, sc=8) 00:21:54.654 starting I/O failed 00:21:54.654 Read completed with error (sct=0, sc=8) 00:21:54.654 
starting I/O failed 00:21:54.654 Read completed with error (sct=0, sc=8) 00:21:54.654 starting I/O failed 00:21:54.654 Read completed with error (sct=0, sc=8) 00:21:54.654 starting I/O failed 00:21:54.654 Read completed with error (sct=0, sc=8) 00:21:54.654 starting I/O failed 00:21:54.654 Read completed with error (sct=0, sc=8) 00:21:54.654 starting I/O failed 00:21:54.654 Read completed with error (sct=0, sc=8) 00:21:54.654 starting I/O failed 00:21:54.654 Read completed with error (sct=0, sc=8) 00:21:54.654 starting I/O failed 00:21:54.654 Write completed with error (sct=0, sc=8) 00:21:54.654 starting I/O failed 00:21:54.654 Write completed with error (sct=0, sc=8) 00:21:54.654 starting I/O failed 00:21:54.654 Write completed with error (sct=0, sc=8) 00:21:54.654 starting I/O failed 00:21:54.654 Read completed with error (sct=0, sc=8) 00:21:54.654 starting I/O failed 00:21:54.654 Read completed with error (sct=0, sc=8) 00:21:54.654 starting I/O failed 00:21:54.654 Read completed with error (sct=0, sc=8) 00:21:54.654 starting I/O failed 00:21:54.654 Read completed with error (sct=0, sc=8) 00:21:54.654 starting I/O failed 00:21:54.654 Read completed with error (sct=0, sc=8) 00:21:54.654 starting I/O failed 00:21:54.654 Read completed with error (sct=0, sc=8) 00:21:54.654 starting I/O failed 00:21:54.654 Read completed with error (sct=0, sc=8) 00:21:54.654 starting I/O failed 00:21:54.654 Read completed with error (sct=0, sc=8) 00:21:54.654 starting I/O failed 00:21:54.654 Read completed with error (sct=0, sc=8) 00:21:54.654 starting I/O failed 00:21:54.654 Read completed with error (sct=0, sc=8) 00:21:54.654 starting I/O failed 00:21:54.654 Read completed with error (sct=0, sc=8) 00:21:54.654 starting I/O failed 00:21:54.654 Write completed with error (sct=0, sc=8) 00:21:54.654 starting I/O failed 00:21:54.654 Read completed with error (sct=0, sc=8) 00:21:54.654 starting I/O failed 00:21:54.654 Write completed with error (sct=0, sc=8) 00:21:54.654 starting I/O failed 00:21:54.654 Read completed with error (sct=0, sc=8) 00:21:54.654 starting I/O failed 00:21:54.654 Write completed with error (sct=0, sc=8) 00:21:54.654 starting I/O failed 00:21:54.654 Read completed with error (sct=0, sc=8) 00:21:54.654 starting I/O failed 00:21:54.654 Read completed with error (sct=0, sc=8) 00:21:54.654 starting I/O failed 00:21:54.654 Read completed with error (sct=0, sc=8) 00:21:54.654 starting I/O failed 00:21:54.654 Read completed with error (sct=0, sc=8) 00:21:54.654 starting I/O failed 00:21:54.654 [2024-04-16 12:49:53.573720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:54.654 Read completed with error (sct=0, sc=8) 00:21:54.654 starting I/O failed 00:21:54.654 Read completed with error (sct=0, sc=8) 00:21:54.654 starting I/O failed 00:21:54.654 Read completed with error (sct=0, sc=8) 00:21:54.654 starting I/O failed 00:21:54.654 Read completed with error (sct=0, sc=8) 00:21:54.654 starting I/O failed 00:21:54.654 Write completed with error (sct=0, sc=8) 00:21:54.654 starting I/O failed 00:21:54.654 Write completed with error (sct=0, sc=8) 00:21:54.654 starting I/O failed 00:21:54.654 Write completed with error (sct=0, sc=8) 00:21:54.654 starting I/O failed 00:21:54.654 Read completed with error (sct=0, sc=8) 00:21:54.654 starting I/O failed 00:21:54.654 Write completed with error (sct=0, sc=8) 00:21:54.654 starting I/O failed 00:21:54.654 Read completed with error (sct=0, sc=8) 00:21:54.654 starting I/O failed 
00:21:54.654 Write completed with error (sct=0, sc=8) 00:21:54.654 starting I/O failed 00:21:54.654 Write completed with error (sct=0, sc=8) 00:21:54.654 starting I/O failed 00:21:54.654 Write completed with error (sct=0, sc=8) 00:21:54.654 starting I/O failed 00:21:54.654 Read completed with error (sct=0, sc=8) 00:21:54.654 starting I/O failed 00:21:54.654 Write completed with error (sct=0, sc=8) 00:21:54.654 starting I/O failed 00:21:54.654 Write completed with error (sct=0, sc=8) 00:21:54.654 starting I/O failed 00:21:54.654 Write completed with error (sct=0, sc=8) 00:21:54.654 starting I/O failed 00:21:54.654 Write completed with error (sct=0, sc=8) 00:21:54.654 starting I/O failed 00:21:54.654 Write completed with error (sct=0, sc=8) 00:21:54.654 starting I/O failed 00:21:54.654 Read completed with error (sct=0, sc=8) 00:21:54.654 starting I/O failed 00:21:54.654 Write completed with error (sct=0, sc=8) 00:21:54.654 starting I/O failed 00:21:54.654 Read completed with error (sct=0, sc=8) 00:21:54.654 starting I/O failed 00:21:54.654 Read completed with error (sct=0, sc=8) 00:21:54.654 starting I/O failed 00:21:54.654 Read completed with error (sct=0, sc=8) 00:21:54.654 starting I/O failed 00:21:54.654 Write completed with error (sct=0, sc=8) 00:21:54.654 starting I/O failed 00:21:54.654 Read completed with error (sct=0, sc=8) 00:21:54.655 starting I/O failed 00:21:54.655 Read completed with error (sct=0, sc=8) 00:21:54.655 starting I/O failed 00:21:54.655 Write completed with error (sct=0, sc=8) 00:21:54.655 starting I/O failed 00:21:54.655 Write completed with error (sct=0, sc=8) 00:21:54.655 starting I/O failed 00:21:54.655 Read completed with error (sct=0, sc=8) 00:21:54.655 starting I/O failed 00:21:54.655 Write completed with error (sct=0, sc=8) 00:21:54.655 starting I/O failed 00:21:54.655 Write completed with error (sct=0, sc=8) 00:21:54.655 starting I/O failed 00:21:54.655 [2024-04-16 12:49:53.574055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:21:54.655 Read completed with error (sct=0, sc=8) 00:21:54.655 starting I/O failed 00:21:54.655 Read completed with error (sct=0, sc=8) 00:21:54.655 starting I/O failed 00:21:54.655 Read completed with error (sct=0, sc=8) 00:21:54.655 starting I/O failed 00:21:54.655 Read completed with error (sct=0, sc=8) 00:21:54.655 starting I/O failed 00:21:54.655 Read completed with error (sct=0, sc=8) 00:21:54.655 starting I/O failed 00:21:54.655 Read completed with error (sct=0, sc=8) 00:21:54.655 starting I/O failed 00:21:54.655 Read completed with error (sct=0, sc=8) 00:21:54.655 starting I/O failed 00:21:54.655 Read completed with error (sct=0, sc=8) 00:21:54.655 starting I/O failed 00:21:54.655 Read completed with error (sct=0, sc=8) 00:21:54.655 starting I/O failed 00:21:54.655 Read completed with error (sct=0, sc=8) 00:21:54.655 starting I/O failed 00:21:54.655 Write completed with error (sct=0, sc=8) 00:21:54.655 starting I/O failed 00:21:54.655 Read completed with error (sct=0, sc=8) 00:21:54.655 starting I/O failed 00:21:54.655 Read completed with error (sct=0, sc=8) 00:21:54.655 starting I/O failed 00:21:54.655 Write completed with error (sct=0, sc=8) 00:21:54.655 starting I/O failed 00:21:54.655 Read completed with error (sct=0, sc=8) 00:21:54.655 starting I/O failed 00:21:54.655 Read completed with error (sct=0, sc=8) 00:21:54.655 starting I/O failed 00:21:54.655 Read completed with error (sct=0, sc=8) 00:21:54.655 starting I/O failed 00:21:54.655 
Read completed with error (sct=0, sc=8) 00:21:54.655 starting I/O failed 00:21:54.655 Write completed with error (sct=0, sc=8) 00:21:54.655 starting I/O failed 00:21:54.655 Read completed with error (sct=0, sc=8) 00:21:54.655 starting I/O failed 00:21:54.655 Write completed with error (sct=0, sc=8) 00:21:54.655 starting I/O failed 00:21:54.655 Read completed with error (sct=0, sc=8) 00:21:54.655 starting I/O failed 00:21:54.655 Write completed with error (sct=0, sc=8) 00:21:54.655 starting I/O failed 00:21:54.655 Read completed with error (sct=0, sc=8) 00:21:54.655 starting I/O failed 00:21:54.655 Read completed with error (sct=0, sc=8) 00:21:54.655 starting I/O failed 00:21:54.655 Write completed with error (sct=0, sc=8) 00:21:54.655 starting I/O failed 00:21:54.655 Write completed with error (sct=0, sc=8) 00:21:54.655 starting I/O failed 00:21:54.655 Write completed with error (sct=0, sc=8) 00:21:54.655 starting I/O failed 00:21:54.655 Read completed with error (sct=0, sc=8) 00:21:54.655 starting I/O failed 00:21:54.655 Read completed with error (sct=0, sc=8) 00:21:54.655 starting I/O failed 00:21:54.655 Read completed with error (sct=0, sc=8) 00:21:54.655 starting I/O failed 00:21:54.655 Read completed with error (sct=0, sc=8) 00:21:54.655 starting I/O failed 00:21:54.655 [2024-04-16 12:49:53.574368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:21:54.655 [2024-04-16 12:49:53.574589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.655 [2024-04-16 12:49:53.574770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.655 [2024-04-16 12:49:53.574798] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.655 qpair failed and we were unable to recover it. 00:21:54.655 [2024-04-16 12:49:53.574959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.655 [2024-04-16 12:49:53.575165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.655 [2024-04-16 12:49:53.575216] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.655 qpair failed and we were unable to recover it. 00:21:54.655 [2024-04-16 12:49:53.575405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.655 [2024-04-16 12:49:53.575612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.655 [2024-04-16 12:49:53.575637] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.655 qpair failed and we were unable to recover it. 00:21:54.655 [2024-04-16 12:49:53.575797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.655 [2024-04-16 12:49:53.575948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.655 [2024-04-16 12:49:53.576003] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.655 qpair failed and we were unable to recover it. 
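In the completion flood above, (sct=0, sc=8) should decode as NVMe generic status 0x08, Command Aborted due to SQ Deletion: every I/O still in flight when kill -9 1269778 destroys the target completes with that status on qpair ids 1 through 4, after which each queue reports CQ transport error -6 (ENXIO, No such device or address). When reading captures this size it helps to collapse the flood first; a sketch assuming the console output was saved to build.log:

  grep -o 'completed with error (sct=[0-9]*, sc=[0-9]*)' build.log | sort | uniq -c   # tally distinct completion statuses
  grep -c 'qpair failed and we were unable to recover it' build.log                   # count unrecovered qpair attempts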
00:21:54.655 [2024-04-16 12:49:53.576235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.655 [2024-04-16 12:49:53.576440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.655 [2024-04-16 12:49:53.576491] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.655 qpair failed and we were unable to recover it. 00:21:54.655 [2024-04-16 12:49:53.576667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.655 [2024-04-16 12:49:53.576806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.655 [2024-04-16 12:49:53.576831] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.655 qpair failed and we were unable to recover it. 00:21:54.655 [2024-04-16 12:49:53.576985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.655 [2024-04-16 12:49:53.577128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.655 [2024-04-16 12:49:53.577156] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.655 qpair failed and we were unable to recover it. 00:21:54.655 [2024-04-16 12:49:53.577372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.655 [2024-04-16 12:49:53.577574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.655 [2024-04-16 12:49:53.577599] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.655 qpair failed and we were unable to recover it. 00:21:54.655 [2024-04-16 12:49:53.577791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.655 [2024-04-16 12:49:53.577989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.655 [2024-04-16 12:49:53.578039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.655 qpair failed and we were unable to recover it. 00:21:54.655 [2024-04-16 12:49:53.578245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.655 [2024-04-16 12:49:53.578456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.655 [2024-04-16 12:49:53.578478] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.655 qpair failed and we were unable to recover it. 00:21:54.655 [2024-04-16 12:49:53.578661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.655 [2024-04-16 12:49:53.578843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.655 [2024-04-16 12:49:53.578867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.655 qpair failed and we were unable to recover it. 
00:21:54.655 [2024-04-16 12:49:53.579062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.655 [2024-04-16 12:49:53.579219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.655 [2024-04-16 12:49:53.579246] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.655 qpair failed and we were unable to recover it. 00:21:54.655 [2024-04-16 12:49:53.579388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.655 [2024-04-16 12:49:53.579535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.655 [2024-04-16 12:49:53.579580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.655 qpair failed and we were unable to recover it. 00:21:54.655 [2024-04-16 12:49:53.579734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.655 [2024-04-16 12:49:53.579866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.655 [2024-04-16 12:49:53.579889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.655 qpair failed and we were unable to recover it. 00:21:54.655 [2024-04-16 12:49:53.580055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.655 [2024-04-16 12:49:53.580237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.655 [2024-04-16 12:49:53.580281] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.655 qpair failed and we were unable to recover it. 00:21:54.655 [2024-04-16 12:49:53.580465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.655 [2024-04-16 12:49:53.580631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.655 [2024-04-16 12:49:53.580656] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.655 qpair failed and we were unable to recover it. 00:21:54.655 [2024-04-16 12:49:53.580815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.655 [2024-04-16 12:49:53.580984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.655 [2024-04-16 12:49:53.581006] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.655 qpair failed and we were unable to recover it. 00:21:54.655 [2024-04-16 12:49:53.581202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.655 [2024-04-16 12:49:53.581375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.655 [2024-04-16 12:49:53.581397] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.655 qpair failed and we were unable to recover it. 
00:21:54.655 [2024-04-16 12:49:53.581601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.655 [2024-04-16 12:49:53.581760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.656 [2024-04-16 12:49:53.581785] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.656 qpair failed and we were unable to recover it. 00:21:54.656 [2024-04-16 12:49:53.581934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.656 [2024-04-16 12:49:53.582073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.656 [2024-04-16 12:49:53.582115] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.656 qpair failed and we were unable to recover it. 00:21:54.656 [2024-04-16 12:49:53.582312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.656 [2024-04-16 12:49:53.582515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.656 [2024-04-16 12:49:53.582538] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.656 qpair failed and we were unable to recover it. 00:21:54.656 [2024-04-16 12:49:53.582748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.656 [2024-04-16 12:49:53.582881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.656 [2024-04-16 12:49:53.582923] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.656 qpair failed and we were unable to recover it. 00:21:54.656 [2024-04-16 12:49:53.583114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.656 [2024-04-16 12:49:53.583254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.656 [2024-04-16 12:49:53.583282] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.656 qpair failed and we were unable to recover it. 00:21:54.656 [2024-04-16 12:49:53.583473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.656 [2024-04-16 12:49:53.583655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.656 [2024-04-16 12:49:53.583681] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.656 qpair failed and we were unable to recover it. 00:21:54.656 [2024-04-16 12:49:53.583826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.656 [2024-04-16 12:49:53.584034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.656 [2024-04-16 12:49:53.584104] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.656 qpair failed and we were unable to recover it. 
00:21:54.656 [2024-04-16 12:49:53.584319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.656 [2024-04-16 12:49:53.584498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.656 [2024-04-16 12:49:53.584520] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.656 qpair failed and we were unable to recover it. 00:21:54.656 [2024-04-16 12:49:53.584678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.656 [2024-04-16 12:49:53.584864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.656 [2024-04-16 12:49:53.584887] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.656 qpair failed and we were unable to recover it. 00:21:54.656 [2024-04-16 12:49:53.585086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.656 [2024-04-16 12:49:53.585281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.656 [2024-04-16 12:49:53.585333] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.656 qpair failed and we were unable to recover it. 00:21:54.656 [2024-04-16 12:49:53.585505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.656 [2024-04-16 12:49:53.585665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.656 [2024-04-16 12:49:53.585690] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.656 qpair failed and we were unable to recover it. 00:21:54.656 [2024-04-16 12:49:53.585861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.656 [2024-04-16 12:49:53.586013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.656 [2024-04-16 12:49:53.586055] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.656 qpair failed and we were unable to recover it. 00:21:54.656 [2024-04-16 12:49:53.586255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.656 [2024-04-16 12:49:53.586460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.656 [2024-04-16 12:49:53.586483] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.656 qpair failed and we were unable to recover it. 00:21:54.656 [2024-04-16 12:49:53.586642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.656 [2024-04-16 12:49:53.586798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.656 [2024-04-16 12:49:53.586822] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.656 qpair failed and we were unable to recover it. 
00:21:54.656 [2024-04-16 12:49:53.587008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.656 [2024-04-16 12:49:53.587202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.656 [2024-04-16 12:49:53.587257] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.656 qpair failed and we were unable to recover it. 00:21:54.656 [2024-04-16 12:49:53.587413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.656 [2024-04-16 12:49:53.587588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.656 [2024-04-16 12:49:53.587613] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.656 qpair failed and we were unable to recover it. 00:21:54.656 [2024-04-16 12:49:53.587802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.656 [2024-04-16 12:49:53.588007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.656 [2024-04-16 12:49:53.588064] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.656 qpair failed and we were unable to recover it. 00:21:54.656 [2024-04-16 12:49:53.588260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.656 [2024-04-16 12:49:53.588438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.656 [2024-04-16 12:49:53.588461] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.656 qpair failed and we were unable to recover it. 00:21:54.656 [2024-04-16 12:49:53.588635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.656 [2024-04-16 12:49:53.588790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.656 [2024-04-16 12:49:53.588814] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.656 qpair failed and we were unable to recover it. 00:21:54.656 [2024-04-16 12:49:53.589019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.656 [2024-04-16 12:49:53.589170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.656 [2024-04-16 12:49:53.589192] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.656 qpair failed and we were unable to recover it. 00:21:54.656 [2024-04-16 12:49:53.589361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.656 [2024-04-16 12:49:53.589515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.656 [2024-04-16 12:49:53.589553] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.656 qpair failed and we were unable to recover it. 
00:21:54.656 [2024-04-16 12:49:53.589742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.656 [2024-04-16 12:49:53.589913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.656 [2024-04-16 12:49:53.589936] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.656 qpair failed and we were unable to recover it. 00:21:54.656 [2024-04-16 12:49:53.590141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.656 [2024-04-16 12:49:53.590322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.656 [2024-04-16 12:49:53.590350] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.656 qpair failed and we were unable to recover it. 00:21:54.656 [2024-04-16 12:49:53.590569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.656 [2024-04-16 12:49:53.590704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.656 [2024-04-16 12:49:53.590729] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.656 qpair failed and we were unable to recover it. 00:21:54.656 [2024-04-16 12:49:53.590926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.656 [2024-04-16 12:49:53.591153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.656 [2024-04-16 12:49:53.591204] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.656 qpair failed and we were unable to recover it. 00:21:54.656 [2024-04-16 12:49:53.591379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.656 [2024-04-16 12:49:53.591575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.656 [2024-04-16 12:49:53.591601] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.656 qpair failed and we were unable to recover it. 00:21:54.656 [2024-04-16 12:49:53.591768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.656 [2024-04-16 12:49:53.591942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.656 [2024-04-16 12:49:53.591997] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.656 qpair failed and we were unable to recover it. 00:21:54.656 [2024-04-16 12:49:53.592171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.656 [2024-04-16 12:49:53.592360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.656 [2024-04-16 12:49:53.592423] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.656 qpair failed and we were unable to recover it. 
00:21:54.657 [2024-04-16 12:49:53.592618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.657 [2024-04-16 12:49:53.592781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.657 [2024-04-16 12:49:53.592806] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.657 qpair failed and we were unable to recover it. 00:21:54.657 [2024-04-16 12:49:53.592963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.657 [2024-04-16 12:49:53.593200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.657 [2024-04-16 12:49:53.593249] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.657 qpair failed and we were unable to recover it. 00:21:54.657 [2024-04-16 12:49:53.593437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.657 [2024-04-16 12:49:53.593589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.657 [2024-04-16 12:49:53.593614] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.657 qpair failed and we were unable to recover it. 00:21:54.657 [2024-04-16 12:49:53.593781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.657 [2024-04-16 12:49:53.594002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.657 [2024-04-16 12:49:53.594048] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.657 qpair failed and we were unable to recover it. 00:21:54.657 [2024-04-16 12:49:53.594248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.657 [2024-04-16 12:49:53.594389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.657 [2024-04-16 12:49:53.594425] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.657 qpair failed and we were unable to recover it. 00:21:54.657 [2024-04-16 12:49:53.594625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.657 [2024-04-16 12:49:53.594762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.657 [2024-04-16 12:49:53.594802] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.657 qpair failed and we were unable to recover it. 00:21:54.657 [2024-04-16 12:49:53.594968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.657 [2024-04-16 12:49:53.595160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.657 [2024-04-16 12:49:53.595214] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.657 qpair failed and we were unable to recover it. 
00:21:54.657 [2024-04-16 12:49:53.595362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.657 [2024-04-16 12:49:53.595504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.657 [2024-04-16 12:49:53.595527] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.657 qpair failed and we were unable to recover it. 00:21:54.657 [2024-04-16 12:49:53.595728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.657 [2024-04-16 12:49:53.595861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.657 [2024-04-16 12:49:53.595947] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.657 qpair failed and we were unable to recover it. 00:21:54.657 [2024-04-16 12:49:53.596136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.657 [2024-04-16 12:49:53.596324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.657 [2024-04-16 12:49:53.596389] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.657 qpair failed and we were unable to recover it. 00:21:54.657 [2024-04-16 12:49:53.596585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.657 [2024-04-16 12:49:53.596748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.657 [2024-04-16 12:49:53.596790] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.657 qpair failed and we were unable to recover it. 00:21:54.657 [2024-04-16 12:49:53.596960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.657 [2024-04-16 12:49:53.597179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.657 [2024-04-16 12:49:53.597236] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.657 qpair failed and we were unable to recover it. 00:21:54.657 [2024-04-16 12:49:53.597432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.657 [2024-04-16 12:49:53.597617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.657 [2024-04-16 12:49:53.597676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.657 qpair failed and we were unable to recover it. 00:21:54.657 [2024-04-16 12:49:53.597879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.657 [2024-04-16 12:49:53.598078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.657 [2024-04-16 12:49:53.598131] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.657 qpair failed and we were unable to recover it. 
00:21:54.657 [2024-04-16 12:49:53.598318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.657 [2024-04-16 12:49:53.598512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.657 [2024-04-16 12:49:53.598534] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.657 qpair failed and we were unable to recover it. 00:21:54.657 [2024-04-16 12:49:53.598743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.657 [2024-04-16 12:49:53.598946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.657 [2024-04-16 12:49:53.599004] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.657 qpair failed and we were unable to recover it. 00:21:54.657 [2024-04-16 12:49:53.599192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.657 [2024-04-16 12:49:53.599376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.657 [2024-04-16 12:49:53.599398] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.657 qpair failed and we were unable to recover it. 00:21:54.657 [2024-04-16 12:49:53.599537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.657 [2024-04-16 12:49:53.599691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.657 [2024-04-16 12:49:53.599733] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.657 qpair failed and we were unable to recover it. 00:21:54.657 [2024-04-16 12:49:53.599890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.657 [2024-04-16 12:49:53.600084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.657 [2024-04-16 12:49:53.600137] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.657 qpair failed and we were unable to recover it. 00:21:54.657 [2024-04-16 12:49:53.600314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.657 [2024-04-16 12:49:53.600514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.657 [2024-04-16 12:49:53.600536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.657 qpair failed and we were unable to recover it. 00:21:54.657 [2024-04-16 12:49:53.600753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.657 [2024-04-16 12:49:53.600966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.657 [2024-04-16 12:49:53.601019] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.657 qpair failed and we were unable to recover it. 
00:21:54.657 [2024-04-16 12:49:53.601193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.657 [2024-04-16 12:49:53.601388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.657 [2024-04-16 12:49:53.601446] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.657 qpair failed and we were unable to recover it. 00:21:54.657 [2024-04-16 12:49:53.601635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.657 [2024-04-16 12:49:53.601849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.657 [2024-04-16 12:49:53.601898] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.657 qpair failed and we were unable to recover it. 00:21:54.657 [2024-04-16 12:49:53.602072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.657 [2024-04-16 12:49:53.602274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.657 [2024-04-16 12:49:53.602331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.657 qpair failed and we were unable to recover it. 00:21:54.657 [2024-04-16 12:49:53.602497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.657 [2024-04-16 12:49:53.602619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.657 [2024-04-16 12:49:53.602647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.657 qpair failed and we were unable to recover it. 00:21:54.657 [2024-04-16 12:49:53.602826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.657 [2024-04-16 12:49:53.603057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.657 [2024-04-16 12:49:53.603111] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.657 qpair failed and we were unable to recover it. 00:21:54.657 [2024-04-16 12:49:53.603284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.657 [2024-04-16 12:49:53.603476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.657 [2024-04-16 12:49:53.603498] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.657 qpair failed and we were unable to recover it. 00:21:54.657 [2024-04-16 12:49:53.603671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.657 [2024-04-16 12:49:53.603881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.657 [2024-04-16 12:49:53.603925] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.657 qpair failed and we were unable to recover it. 
00:21:54.658 [2024-04-16 12:49:53.604080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.658 [2024-04-16 12:49:53.604280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.658 [2024-04-16 12:49:53.604334] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.658 qpair failed and we were unable to recover it. 00:21:54.658 [2024-04-16 12:49:53.604522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.658 [2024-04-16 12:49:53.604706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.658 [2024-04-16 12:49:53.604748] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.658 qpair failed and we were unable to recover it. 00:21:54.658 [2024-04-16 12:49:53.604914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.658 [2024-04-16 12:49:53.605110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.658 [2024-04-16 12:49:53.605163] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.658 qpair failed and we were unable to recover it. 00:21:54.658 [2024-04-16 12:49:53.605331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.658 [2024-04-16 12:49:53.605479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.658 [2024-04-16 12:49:53.605501] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.658 qpair failed and we were unable to recover it. 00:21:54.658 [2024-04-16 12:49:53.605679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.658 [2024-04-16 12:49:53.605906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.658 [2024-04-16 12:49:53.605959] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.658 qpair failed and we were unable to recover it. 00:21:54.658 [2024-04-16 12:49:53.606143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.658 [2024-04-16 12:49:53.606293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.658 [2024-04-16 12:49:53.606334] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.658 qpair failed and we were unable to recover it. 00:21:54.658 [2024-04-16 12:49:53.606485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.658 [2024-04-16 12:49:53.606629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.658 [2024-04-16 12:49:53.606653] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.658 qpair failed and we were unable to recover it. 
00:21:54.658 [2024-04-16 12:49:53.606836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.658 [2024-04-16 12:49:53.607042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.658 [2024-04-16 12:49:53.607084] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.658 qpair failed and we were unable to recover it. 00:21:54.658 [2024-04-16 12:49:53.607224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.658 [2024-04-16 12:49:53.607398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.658 [2024-04-16 12:49:53.607421] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.658 qpair failed and we were unable to recover it. 00:21:54.658 [2024-04-16 12:49:53.607591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.658 [2024-04-16 12:49:53.607745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.658 [2024-04-16 12:49:53.607786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.658 qpair failed and we were unable to recover it. 00:21:54.658 [2024-04-16 12:49:53.607944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.658 [2024-04-16 12:49:53.608132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.658 [2024-04-16 12:49:53.608193] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.658 qpair failed and we were unable to recover it. 00:21:54.658 [2024-04-16 12:49:53.608384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.658 [2024-04-16 12:49:53.608578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.658 [2024-04-16 12:49:53.608601] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.658 qpair failed and we were unable to recover it. 00:21:54.658 [2024-04-16 12:49:53.608794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.658 [2024-04-16 12:49:53.609015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.658 [2024-04-16 12:49:53.609067] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.658 qpair failed and we were unable to recover it. 00:21:54.658 [2024-04-16 12:49:53.609258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.658 [2024-04-16 12:49:53.609434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.658 [2024-04-16 12:49:53.609456] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.658 qpair failed and we were unable to recover it. 
00:21:54.658 [2024-04-16 12:49:53.609612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.658 [2024-04-16 12:49:53.609818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.658 [2024-04-16 12:49:53.609846] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.658 qpair failed and we were unable to recover it. 00:21:54.658 [2024-04-16 12:49:53.610010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.658 [2024-04-16 12:49:53.610193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.658 [2024-04-16 12:49:53.610220] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.658 qpair failed and we were unable to recover it. 00:21:54.658 [2024-04-16 12:49:53.610393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.658 [2024-04-16 12:49:53.610533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.658 [2024-04-16 12:49:53.610555] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.658 qpair failed and we were unable to recover it. 00:21:54.658 [2024-04-16 12:49:53.610733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.658 [2024-04-16 12:49:53.610929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.658 [2024-04-16 12:49:53.610987] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.658 qpair failed and we were unable to recover it. 00:21:54.658 [2024-04-16 12:49:53.611182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.658 [2024-04-16 12:49:53.611400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.658 [2024-04-16 12:49:53.611453] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.658 qpair failed and we were unable to recover it. 00:21:54.658 [2024-04-16 12:49:53.611647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.658 [2024-04-16 12:49:53.611827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.658 [2024-04-16 12:49:53.611868] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.658 qpair failed and we were unable to recover it. 00:21:54.658 [2024-04-16 12:49:53.612027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.658 [2024-04-16 12:49:53.612221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.658 [2024-04-16 12:49:53.612275] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.658 qpair failed and we were unable to recover it. 
00:21:54.658 [2024-04-16 12:49:53.612480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.658 [2024-04-16 12:49:53.612633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.658 [2024-04-16 12:49:53.612675] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.658 qpair failed and we were unable to recover it. 00:21:54.658 [2024-04-16 12:49:53.612883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.658 [2024-04-16 12:49:53.613101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.658 [2024-04-16 12:49:53.613148] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.658 qpair failed and we were unable to recover it. 00:21:54.658 [2024-04-16 12:49:53.613326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.659 [2024-04-16 12:49:53.613511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.659 [2024-04-16 12:49:53.613532] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.659 qpair failed and we were unable to recover it. 00:21:54.659 [2024-04-16 12:49:53.613704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.659 [2024-04-16 12:49:53.613880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.659 [2024-04-16 12:49:53.613926] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.659 qpair failed and we were unable to recover it. 00:21:54.659 [2024-04-16 12:49:53.614116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.659 [2024-04-16 12:49:53.614332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.659 [2024-04-16 12:49:53.614382] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.659 qpair failed and we were unable to recover it. 00:21:54.659 [2024-04-16 12:49:53.614547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.659 [2024-04-16 12:49:53.614715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.659 [2024-04-16 12:49:53.614757] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.659 qpair failed and we were unable to recover it. 00:21:54.659 [2024-04-16 12:49:53.614935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.659 [2024-04-16 12:49:53.615091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.659 [2024-04-16 12:49:53.615118] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.659 qpair failed and we were unable to recover it. 
00:21:54.659 [2024-04-16 12:49:53.615303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.659 [2024-04-16 12:49:53.615481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.659 [2024-04-16 12:49:53.615517] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.659 qpair failed and we were unable to recover it. 00:21:54.659 [2024-04-16 12:49:53.615714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.659 [2024-04-16 12:49:53.615947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.659 [2024-04-16 12:49:53.616005] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.659 qpair failed and we were unable to recover it. 00:21:54.659 [2024-04-16 12:49:53.616193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.659 [2024-04-16 12:49:53.616386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.659 [2024-04-16 12:49:53.616411] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.659 qpair failed and we were unable to recover it. 00:21:54.659 [2024-04-16 12:49:53.616611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.659 [2024-04-16 12:49:53.616790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.659 [2024-04-16 12:49:53.616831] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.659 qpair failed and we were unable to recover it. 00:21:54.659 [2024-04-16 12:49:53.617020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.659 [2024-04-16 12:49:53.617236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.659 [2024-04-16 12:49:53.617286] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.659 qpair failed and we were unable to recover it. 00:21:54.659 [2024-04-16 12:49:53.617487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.659 [2024-04-16 12:49:53.617662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.659 [2024-04-16 12:49:53.617690] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.659 qpair failed and we were unable to recover it. 00:21:54.659 [2024-04-16 12:49:53.617866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.659 [2024-04-16 12:49:53.618042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.659 [2024-04-16 12:49:53.618084] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.659 qpair failed and we were unable to recover it. 
00:21:54.659 [2024-04-16 12:49:53.618242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.659 [2024-04-16 12:49:53.618436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.659 [2024-04-16 12:49:53.618458] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.659 qpair failed and we were unable to recover it. 00:21:54.659 [2024-04-16 12:49:53.618663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.659 [2024-04-16 12:49:53.618848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.659 [2024-04-16 12:49:53.618876] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.659 qpair failed and we were unable to recover it. 00:21:54.659 [2024-04-16 12:49:53.619052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.659 [2024-04-16 12:49:53.619207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.659 [2024-04-16 12:49:53.619253] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.659 qpair failed and we were unable to recover it. 00:21:54.659 [2024-04-16 12:49:53.619420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.659 [2024-04-16 12:49:53.619543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.659 [2024-04-16 12:49:53.619589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.659 qpair failed and we were unable to recover it. 00:21:54.659 [2024-04-16 12:49:53.619777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.659 [2024-04-16 12:49:53.619922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.659 [2024-04-16 12:49:53.619949] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.659 qpair failed and we were unable to recover it. 00:21:54.659 [2024-04-16 12:49:53.620099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.659 [2024-04-16 12:49:53.620268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.659 [2024-04-16 12:49:53.620295] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.659 qpair failed and we were unable to recover it. 00:21:54.659 [2024-04-16 12:49:53.620410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.659 [2024-04-16 12:49:53.620586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.659 [2024-04-16 12:49:53.620610] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.659 qpair failed and we were unable to recover it. 
00:21:54.659 [2024-04-16 12:49:53.620766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.659 [2024-04-16 12:49:53.620983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.659 [2024-04-16 12:49:53.621035] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.659 qpair failed and we were unable to recover it. 00:21:54.659 [2024-04-16 12:49:53.621213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.659 [2024-04-16 12:49:53.621409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.659 [2024-04-16 12:49:53.621431] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.659 qpair failed and we were unable to recover it. 00:21:54.659 [2024-04-16 12:49:53.621623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.659 [2024-04-16 12:49:53.621805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.659 [2024-04-16 12:49:53.621832] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.659 qpair failed and we were unable to recover it. 00:21:54.659 [2024-04-16 12:49:53.622024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.659 [2024-04-16 12:49:53.622198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.659 [2024-04-16 12:49:53.622239] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.659 qpair failed and we were unable to recover it. 00:21:54.659 [2024-04-16 12:49:53.622389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.659 [2024-04-16 12:49:53.622553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.659 [2024-04-16 12:49:53.622582] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.659 qpair failed and we were unable to recover it. 00:21:54.659 [2024-04-16 12:49:53.622764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.659 [2024-04-16 12:49:53.622981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.659 [2024-04-16 12:49:53.623031] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.659 qpair failed and we were unable to recover it. 00:21:54.659 [2024-04-16 12:49:53.623217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.659 [2024-04-16 12:49:53.623395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.659 [2024-04-16 12:49:53.623417] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.659 qpair failed and we were unable to recover it. 
00:21:54.659 [2024-04-16 12:49:53.623588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:54.659 [2024-04-16 12:49:53.623762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:54.659 [2024-04-16 12:49:53.623804] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420
00:21:54.659 qpair failed and we were unable to recover it.
[... the same three-line sequence (two "connect() failed, errno = 111" messages from posix_sock_create, then the nvme_tcp_qpair_connect_sock error for tqpair=0x7f7bb0000b90, addr=10.0.0.2, port=4420) repeats for every reconnect attempt from 12:49:53.623588 through 12:49:53.684221; each attempt ends with "qpair failed and we were unable to recover it." ...]
00:21:54.665 [2024-04-16 12:49:53.684399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.665 [2024-04-16 12:49:53.684551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.665 [2024-04-16 12:49:53.684579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.665 qpair failed and we were unable to recover it. 00:21:54.665 [2024-04-16 12:49:53.684751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.665 [2024-04-16 12:49:53.684937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.665 [2024-04-16 12:49:53.684997] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.665 qpair failed and we were unable to recover it. 00:21:54.665 [2024-04-16 12:49:53.685170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.665 [2024-04-16 12:49:53.685371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.665 [2024-04-16 12:49:53.685433] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.665 qpair failed and we were unable to recover it. 00:21:54.665 [2024-04-16 12:49:53.685595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.665 [2024-04-16 12:49:53.685751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.665 [2024-04-16 12:49:53.685794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.665 qpair failed and we were unable to recover it. 00:21:54.665 [2024-04-16 12:49:53.685941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.665 [2024-04-16 12:49:53.686118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.665 [2024-04-16 12:49:53.686146] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.665 qpair failed and we were unable to recover it. 00:21:54.665 [2024-04-16 12:49:53.686333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.665 [2024-04-16 12:49:53.686485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.665 [2024-04-16 12:49:53.686508] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.665 qpair failed and we were unable to recover it. 00:21:54.665 [2024-04-16 12:49:53.686672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.665 [2024-04-16 12:49:53.686855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.665 [2024-04-16 12:49:53.686884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.665 qpair failed and we were unable to recover it. 
00:21:54.665 [2024-04-16 12:49:53.687085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.665 [2024-04-16 12:49:53.687281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.665 [2024-04-16 12:49:53.687337] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.665 qpair failed and we were unable to recover it. 00:21:54.665 [2024-04-16 12:49:53.687512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.665 [2024-04-16 12:49:53.687693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.665 [2024-04-16 12:49:53.687740] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.665 qpair failed and we were unable to recover it. 00:21:54.665 [2024-04-16 12:49:53.687922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.665 [2024-04-16 12:49:53.688128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.665 [2024-04-16 12:49:53.688180] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.665 qpair failed and we were unable to recover it. 00:21:54.665 [2024-04-16 12:49:53.688370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.665 [2024-04-16 12:49:53.688549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.665 [2024-04-16 12:49:53.688576] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.665 qpair failed and we were unable to recover it. 00:21:54.665 [2024-04-16 12:49:53.688754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.665 [2024-04-16 12:49:53.688908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.665 [2024-04-16 12:49:53.688949] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.665 qpair failed and we were unable to recover it. 00:21:54.665 [2024-04-16 12:49:53.689100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.665 [2024-04-16 12:49:53.689294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.665 [2024-04-16 12:49:53.689352] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.665 qpair failed and we were unable to recover it. 00:21:54.665 [2024-04-16 12:49:53.689540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.665 [2024-04-16 12:49:53.689732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.665 [2024-04-16 12:49:53.689755] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.665 qpair failed and we were unable to recover it. 
00:21:54.665 [2024-04-16 12:49:53.689925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.665 [2024-04-16 12:49:53.690086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.665 [2024-04-16 12:49:53.690123] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.665 qpair failed and we were unable to recover it. 00:21:54.665 [2024-04-16 12:49:53.690325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.665 [2024-04-16 12:49:53.690496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.665 [2024-04-16 12:49:53.690518] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.665 qpair failed and we were unable to recover it. 00:21:54.665 [2024-04-16 12:49:53.690693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.665 [2024-04-16 12:49:53.690876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.665 [2024-04-16 12:49:53.690904] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.665 qpair failed and we were unable to recover it. 00:21:54.666 [2024-04-16 12:49:53.691107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.666 [2024-04-16 12:49:53.691325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.666 [2024-04-16 12:49:53.691375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.666 qpair failed and we were unable to recover it. 00:21:54.666 [2024-04-16 12:49:53.691520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.666 [2024-04-16 12:49:53.691674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.666 [2024-04-16 12:49:53.691721] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.666 qpair failed and we were unable to recover it. 00:21:54.666 [2024-04-16 12:49:53.691925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.666 [2024-04-16 12:49:53.692119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.666 [2024-04-16 12:49:53.692173] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.666 qpair failed and we were unable to recover it. 00:21:54.666 [2024-04-16 12:49:53.692332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.666 [2024-04-16 12:49:53.692493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.666 [2024-04-16 12:49:53.692530] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.666 qpair failed and we were unable to recover it. 
00:21:54.666 [2024-04-16 12:49:53.692685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.666 [2024-04-16 12:49:53.692855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.666 [2024-04-16 12:49:53.692883] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.666 qpair failed and we were unable to recover it. 00:21:54.666 [2024-04-16 12:49:53.693137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.666 [2024-04-16 12:49:53.693322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.666 [2024-04-16 12:49:53.693383] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.666 qpair failed and we were unable to recover it. 00:21:54.666 [2024-04-16 12:49:53.693576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.666 [2024-04-16 12:49:53.693764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.666 [2024-04-16 12:49:53.693805] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.666 qpair failed and we were unable to recover it. 00:21:54.666 [2024-04-16 12:49:53.694026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.666 [2024-04-16 12:49:53.694233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.666 [2024-04-16 12:49:53.694283] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.666 qpair failed and we were unable to recover it. 00:21:54.666 [2024-04-16 12:49:53.694423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.666 [2024-04-16 12:49:53.694575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.666 [2024-04-16 12:49:53.694605] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.666 qpair failed and we were unable to recover it. 00:21:54.666 [2024-04-16 12:49:53.694820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.666 [2024-04-16 12:49:53.694997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.666 [2024-04-16 12:49:53.695064] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.666 qpair failed and we were unable to recover it. 00:21:54.666 [2024-04-16 12:49:53.695281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.666 [2024-04-16 12:49:53.695439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.666 [2024-04-16 12:49:53.695461] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.666 qpair failed and we were unable to recover it. 
00:21:54.666 [2024-04-16 12:49:53.695716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.666 [2024-04-16 12:49:53.695903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.666 [2024-04-16 12:49:53.695958] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.666 qpair failed and we were unable to recover it. 00:21:54.666 [2024-04-16 12:49:53.696124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.666 [2024-04-16 12:49:53.696300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.666 [2024-04-16 12:49:53.696336] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.666 qpair failed and we were unable to recover it. 00:21:54.666 [2024-04-16 12:49:53.696502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.666 [2024-04-16 12:49:53.696668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.666 [2024-04-16 12:49:53.696710] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.666 qpair failed and we were unable to recover it. 00:21:54.666 [2024-04-16 12:49:53.696878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.666 [2024-04-16 12:49:53.697051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.666 [2024-04-16 12:49:53.697099] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.666 qpair failed and we were unable to recover it. 00:21:54.666 [2024-04-16 12:49:53.697265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.666 [2024-04-16 12:49:53.697423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.666 [2024-04-16 12:49:53.697459] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.666 qpair failed and we were unable to recover it. 00:21:54.666 [2024-04-16 12:49:53.697626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.666 [2024-04-16 12:49:53.697819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.666 [2024-04-16 12:49:53.697850] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.666 qpair failed and we were unable to recover it. 00:21:54.666 [2024-04-16 12:49:53.698018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.666 [2024-04-16 12:49:53.698162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.666 [2024-04-16 12:49:53.698203] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.666 qpair failed and we were unable to recover it. 
00:21:54.666 [2024-04-16 12:49:53.698373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.666 [2024-04-16 12:49:53.698619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.666 [2024-04-16 12:49:53.698643] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.666 qpair failed and we were unable to recover it. 00:21:54.666 [2024-04-16 12:49:53.698827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.666 [2024-04-16 12:49:53.699043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.666 [2024-04-16 12:49:53.699094] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.666 qpair failed and we were unable to recover it. 00:21:54.666 [2024-04-16 12:49:53.699262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.666 [2024-04-16 12:49:53.699500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.666 [2024-04-16 12:49:53.699522] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.666 qpair failed and we were unable to recover it. 00:21:54.666 [2024-04-16 12:49:53.699703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.666 [2024-04-16 12:49:53.699977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.666 [2024-04-16 12:49:53.700027] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.666 qpair failed and we were unable to recover it. 00:21:54.666 [2024-04-16 12:49:53.700174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.666 [2024-04-16 12:49:53.700431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.666 [2024-04-16 12:49:53.700459] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.666 qpair failed and we were unable to recover it. 00:21:54.666 [2024-04-16 12:49:53.700660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.666 [2024-04-16 12:49:53.700827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.666 [2024-04-16 12:49:53.700855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.666 qpair failed and we were unable to recover it. 00:21:54.666 [2024-04-16 12:49:53.701030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.666 [2024-04-16 12:49:53.701221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.666 [2024-04-16 12:49:53.701257] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.666 qpair failed and we were unable to recover it. 
00:21:54.666 [2024-04-16 12:49:53.701438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.666 [2024-04-16 12:49:53.701581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.666 [2024-04-16 12:49:53.701605] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.666 qpair failed and we were unable to recover it. 00:21:54.666 [2024-04-16 12:49:53.701787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.666 [2024-04-16 12:49:53.701976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.666 [2024-04-16 12:49:53.702030] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.666 qpair failed and we were unable to recover it. 00:21:54.666 [2024-04-16 12:49:53.702247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.666 [2024-04-16 12:49:53.702411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.666 [2024-04-16 12:49:53.702433] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.666 qpair failed and we were unable to recover it. 00:21:54.666 [2024-04-16 12:49:53.702616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.666 [2024-04-16 12:49:53.702752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.666 [2024-04-16 12:49:53.702781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.667 qpair failed and we were unable to recover it. 00:21:54.667 [2024-04-16 12:49:53.702956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.667 [2024-04-16 12:49:53.703196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.667 [2024-04-16 12:49:53.703247] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.667 qpair failed and we were unable to recover it. 00:21:54.667 [2024-04-16 12:49:53.703434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.667 [2024-04-16 12:49:53.703541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.667 [2024-04-16 12:49:53.703592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.667 qpair failed and we were unable to recover it. 00:21:54.667 [2024-04-16 12:49:53.703774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.667 [2024-04-16 12:49:53.703975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.667 [2024-04-16 12:49:53.704025] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.667 qpair failed and we were unable to recover it. 
00:21:54.667 [2024-04-16 12:49:53.704364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.667 [2024-04-16 12:49:53.704572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.667 [2024-04-16 12:49:53.704596] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.667 qpair failed and we were unable to recover it. 00:21:54.667 [2024-04-16 12:49:53.704734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.667 [2024-04-16 12:49:53.704960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.667 [2024-04-16 12:49:53.705022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.667 qpair failed and we were unable to recover it. 00:21:54.667 [2024-04-16 12:49:53.705196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.667 [2024-04-16 12:49:53.705396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.667 [2024-04-16 12:49:53.705451] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.667 qpair failed and we were unable to recover it. 00:21:54.667 [2024-04-16 12:49:53.705617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.667 [2024-04-16 12:49:53.705825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.667 [2024-04-16 12:49:53.705882] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.667 qpair failed and we were unable to recover it. 00:21:54.667 [2024-04-16 12:49:53.706054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.667 [2024-04-16 12:49:53.706192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.667 [2024-04-16 12:49:53.706233] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.667 qpair failed and we were unable to recover it. 00:21:54.667 [2024-04-16 12:49:53.706381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.667 [2024-04-16 12:49:53.706622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.667 [2024-04-16 12:49:53.706645] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.667 qpair failed and we were unable to recover it. 00:21:54.667 [2024-04-16 12:49:53.706870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.667 [2024-04-16 12:49:53.707058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.667 [2024-04-16 12:49:53.707114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.667 qpair failed and we were unable to recover it. 
00:21:54.667 [2024-04-16 12:49:53.707310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.667 [2024-04-16 12:49:53.707480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.667 [2024-04-16 12:49:53.707502] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.667 qpair failed and we were unable to recover it. 00:21:54.667 [2024-04-16 12:49:53.707659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.667 [2024-04-16 12:49:53.707842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.667 [2024-04-16 12:49:53.707883] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.667 qpair failed and we were unable to recover it. 00:21:54.667 [2024-04-16 12:49:53.708070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.667 [2024-04-16 12:49:53.708283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.667 [2024-04-16 12:49:53.708334] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.667 qpair failed and we were unable to recover it. 00:21:54.667 [2024-04-16 12:49:53.708519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.667 [2024-04-16 12:49:53.708701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.667 [2024-04-16 12:49:53.708744] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.667 qpair failed and we were unable to recover it. 00:21:54.667 [2024-04-16 12:49:53.708878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.667 [2024-04-16 12:49:53.709110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.667 [2024-04-16 12:49:53.709133] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.667 qpair failed and we were unable to recover it. 00:21:54.667 [2024-04-16 12:49:53.709402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.667 [2024-04-16 12:49:53.709572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.667 [2024-04-16 12:49:53.709607] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.667 qpair failed and we were unable to recover it. 00:21:54.667 [2024-04-16 12:49:53.709791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.667 [2024-04-16 12:49:53.709932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.667 [2024-04-16 12:49:53.709975] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.667 qpair failed and we were unable to recover it. 
00:21:54.667 [2024-04-16 12:49:53.710136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.667 [2024-04-16 12:49:53.710319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.667 [2024-04-16 12:49:53.710343] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.667 qpair failed and we were unable to recover it. 00:21:54.667 [2024-04-16 12:49:53.710513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.667 [2024-04-16 12:49:53.710658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.667 [2024-04-16 12:49:53.710688] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.667 qpair failed and we were unable to recover it. 00:21:54.667 [2024-04-16 12:49:53.710895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.667 [2024-04-16 12:49:53.711089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.667 [2024-04-16 12:49:53.711141] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.667 qpair failed and we were unable to recover it. 00:21:54.667 [2024-04-16 12:49:53.711312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.667 [2024-04-16 12:49:53.711475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.667 [2024-04-16 12:49:53.711512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.667 qpair failed and we were unable to recover it. 00:21:54.667 [2024-04-16 12:49:53.711682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.667 [2024-04-16 12:49:53.711844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.667 [2024-04-16 12:49:53.711883] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.667 qpair failed and we were unable to recover it. 00:21:54.667 [2024-04-16 12:49:53.712053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.667 [2024-04-16 12:49:53.712279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.667 [2024-04-16 12:49:53.712323] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.667 qpair failed and we were unable to recover it. 00:21:54.667 [2024-04-16 12:49:53.712495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.667 [2024-04-16 12:49:53.712654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.667 [2024-04-16 12:49:53.712679] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.667 qpair failed and we were unable to recover it. 
00:21:54.667 [2024-04-16 12:49:53.712831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.667 [2024-04-16 12:49:53.712997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.667 [2024-04-16 12:49:53.713048] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.667 qpair failed and we were unable to recover it. 00:21:54.667 [2024-04-16 12:49:53.713270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.667 [2024-04-16 12:49:53.713467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.667 [2024-04-16 12:49:53.713492] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.667 qpair failed and we were unable to recover it. 00:21:54.667 [2024-04-16 12:49:53.713641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.667 [2024-04-16 12:49:53.713812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.667 [2024-04-16 12:49:53.713836] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.667 qpair failed and we were unable to recover it. 00:21:54.667 [2024-04-16 12:49:53.713967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.667 [2024-04-16 12:49:53.714163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.667 [2024-04-16 12:49:53.714204] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.667 qpair failed and we were unable to recover it. 00:21:54.667 [2024-04-16 12:49:53.714394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.667 [2024-04-16 12:49:53.714568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.668 [2024-04-16 12:49:53.714592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.668 qpair failed and we were unable to recover it. 00:21:54.668 [2024-04-16 12:49:53.714757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.668 [2024-04-16 12:49:53.714974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.668 [2024-04-16 12:49:53.715024] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.668 qpair failed and we were unable to recover it. 00:21:54.945 [2024-04-16 12:49:53.715280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.945 [2024-04-16 12:49:53.715462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.945 [2024-04-16 12:49:53.715484] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.945 qpair failed and we were unable to recover it. 
00:21:54.945 [2024-04-16 12:49:53.715732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.945 [2024-04-16 12:49:53.715925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.945 [2024-04-16 12:49:53.715983] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.945 qpair failed and we were unable to recover it. 00:21:54.945 [2024-04-16 12:49:53.716123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.945 [2024-04-16 12:49:53.716346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.946 [2024-04-16 12:49:53.716371] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.946 qpair failed and we were unable to recover it. 00:21:54.946 [2024-04-16 12:49:53.716601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.946 [2024-04-16 12:49:53.716801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.946 [2024-04-16 12:49:53.716825] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.946 qpair failed and we were unable to recover it. 00:21:54.946 [2024-04-16 12:49:53.717014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.946 [2024-04-16 12:49:53.717149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.946 [2024-04-16 12:49:53.717191] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.946 qpair failed and we were unable to recover it. 00:21:54.946 [2024-04-16 12:49:53.717377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.946 [2024-04-16 12:49:53.717538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.946 [2024-04-16 12:49:53.717580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.946 qpair failed and we were unable to recover it. 00:21:54.946 [2024-04-16 12:49:53.717834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.946 [2024-04-16 12:49:53.718004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.946 [2024-04-16 12:49:53.718058] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.946 qpair failed and we were unable to recover it. 00:21:54.946 [2024-04-16 12:49:53.718284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.946 [2024-04-16 12:49:53.718429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.946 [2024-04-16 12:49:53.718452] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.946 qpair failed and we were unable to recover it. 
00:21:54.946 [2024-04-16 12:49:53.718622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.946 [2024-04-16 12:49:53.718807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.946 [2024-04-16 12:49:53.718849] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.946 qpair failed and we were unable to recover it. 00:21:54.946 [2024-04-16 12:49:53.719006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.946 [2024-04-16 12:49:53.719282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.946 [2024-04-16 12:49:53.719329] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.946 qpair failed and we were unable to recover it. 00:21:54.946 [2024-04-16 12:49:53.719521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.946 [2024-04-16 12:49:53.719685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.946 [2024-04-16 12:49:53.719711] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.946 qpair failed and we were unable to recover it. 00:21:54.946 [2024-04-16 12:49:53.719921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.946 [2024-04-16 12:49:53.720073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.946 [2024-04-16 12:49:53.720132] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.946 qpair failed and we were unable to recover it. 00:21:54.946 [2024-04-16 12:49:53.720320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.946 [2024-04-16 12:49:53.720522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.946 [2024-04-16 12:49:53.720544] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.946 qpair failed and we were unable to recover it. 00:21:54.946 [2024-04-16 12:49:53.720751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.946 [2024-04-16 12:49:53.720919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.946 [2024-04-16 12:49:53.720948] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.946 qpair failed and we were unable to recover it. 00:21:54.946 [2024-04-16 12:49:53.721120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.946 [2024-04-16 12:49:53.721276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.946 [2024-04-16 12:49:53.721317] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.946 qpair failed and we were unable to recover it. 
00:21:54.946 [2024-04-16 12:49:53.721438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.946 [2024-04-16 12:49:53.721606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.946 [2024-04-16 12:49:53.721631] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.946 qpair failed and we were unable to recover it. 00:21:54.946 [2024-04-16 12:49:53.721854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.946 [2024-04-16 12:49:53.722058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.946 [2024-04-16 12:49:53.722116] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.946 qpair failed and we were unable to recover it. 00:21:54.946 [2024-04-16 12:49:53.722297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.946 [2024-04-16 12:49:53.722463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.946 [2024-04-16 12:49:53.722499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.946 qpair failed and we were unable to recover it. 00:21:54.946 [2024-04-16 12:49:53.722686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.946 [2024-04-16 12:49:53.722887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.946 [2024-04-16 12:49:53.722927] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.946 qpair failed and we were unable to recover it. 00:21:54.946 [2024-04-16 12:49:53.723065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.946 [2024-04-16 12:49:53.723237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.946 [2024-04-16 12:49:53.723279] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.946 qpair failed and we were unable to recover it. 00:21:54.946 [2024-04-16 12:49:53.723446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.946 [2024-04-16 12:49:53.723661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.946 [2024-04-16 12:49:53.723702] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.946 qpair failed and we were unable to recover it. 00:21:54.946 [2024-04-16 12:49:53.723995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.946 [2024-04-16 12:49:53.724194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.946 [2024-04-16 12:49:53.724245] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.946 qpair failed and we were unable to recover it. 
00:21:54.946 [2024-04-16 12:49:53.724387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:54.946 [2024-04-16 12:49:53.724600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:54.946 [2024-04-16 12:49:53.724635] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420
00:21:54.946 qpair failed and we were unable to recover it.
[The same three-message failure sequence repeats continuously from 12:49:53.724 through 12:49:53.788 (log timestamps 00:21:54.946-00:21:54.953), always for tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420, and every attempt ends with "qpair failed and we were unable to recover it."]
00:21:54.953 [2024-04-16 12:49:53.788391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.953 [2024-04-16 12:49:53.788523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.953 [2024-04-16 12:49:53.788546] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.953 qpair failed and we were unable to recover it. 00:21:54.953 [2024-04-16 12:49:53.788730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.953 [2024-04-16 12:49:53.788974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.953 [2024-04-16 12:49:53.789023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.953 qpair failed and we were unable to recover it. 00:21:54.953 [2024-04-16 12:49:53.789349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.953 [2024-04-16 12:49:53.789514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.953 [2024-04-16 12:49:53.789537] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.953 qpair failed and we were unable to recover it. 00:21:54.953 [2024-04-16 12:49:53.789803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.953 [2024-04-16 12:49:53.789982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.953 [2024-04-16 12:49:53.790034] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.953 qpair failed and we were unable to recover it. 00:21:54.953 [2024-04-16 12:49:53.790234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.953 [2024-04-16 12:49:53.790380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.953 [2024-04-16 12:49:53.790416] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.953 qpair failed and we were unable to recover it. 00:21:54.953 [2024-04-16 12:49:53.790615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.953 [2024-04-16 12:49:53.790789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.953 [2024-04-16 12:49:53.790832] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.953 qpair failed and we were unable to recover it. 00:21:54.953 [2024-04-16 12:49:53.791079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.953 [2024-04-16 12:49:53.791236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.953 [2024-04-16 12:49:53.791286] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.953 qpair failed and we were unable to recover it. 
00:21:54.953 [2024-04-16 12:49:53.791443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.953 [2024-04-16 12:49:53.791589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.953 [2024-04-16 12:49:53.791628] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.953 qpair failed and we were unable to recover it. 00:21:54.953 [2024-04-16 12:49:53.791799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.953 [2024-04-16 12:49:53.791947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.953 [2024-04-16 12:49:53.791984] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.953 qpair failed and we were unable to recover it. 00:21:54.953 [2024-04-16 12:49:53.792142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.953 [2024-04-16 12:49:53.792361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.953 [2024-04-16 12:49:53.792383] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.953 qpair failed and we were unable to recover it. 00:21:54.953 [2024-04-16 12:49:53.792616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.953 [2024-04-16 12:49:53.792805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.953 [2024-04-16 12:49:53.792847] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.953 qpair failed and we were unable to recover it. 00:21:54.953 [2024-04-16 12:49:53.792981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.953 [2024-04-16 12:49:53.793239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.953 [2024-04-16 12:49:53.793290] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.953 qpair failed and we were unable to recover it. 00:21:54.953 [2024-04-16 12:49:53.793453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.953 [2024-04-16 12:49:53.793586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.953 [2024-04-16 12:49:53.793610] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.953 qpair failed and we were unable to recover it. 00:21:54.953 [2024-04-16 12:49:53.793807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.953 [2024-04-16 12:49:53.793997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.953 [2024-04-16 12:49:53.794057] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.953 qpair failed and we were unable to recover it. 
00:21:54.953 [2024-04-16 12:49:53.794230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.953 [2024-04-16 12:49:53.794405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.953 [2024-04-16 12:49:53.794453] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.953 qpair failed and we were unable to recover it. 00:21:54.953 [2024-04-16 12:49:53.794646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.953 [2024-04-16 12:49:53.794818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.953 [2024-04-16 12:49:53.794860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.953 qpair failed and we were unable to recover it. 00:21:54.953 [2024-04-16 12:49:53.795038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.953 [2024-04-16 12:49:53.795241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.953 [2024-04-16 12:49:53.795292] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.953 qpair failed and we were unable to recover it. 00:21:54.953 [2024-04-16 12:49:53.795466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.953 [2024-04-16 12:49:53.795701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.953 [2024-04-16 12:49:53.795744] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.953 qpair failed and we were unable to recover it. 00:21:54.953 [2024-04-16 12:49:53.796001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.953 [2024-04-16 12:49:53.796180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.953 [2024-04-16 12:49:53.796231] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.953 qpair failed and we were unable to recover it. 00:21:54.953 [2024-04-16 12:49:53.796414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.953 [2024-04-16 12:49:53.796556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.953 [2024-04-16 12:49:53.796597] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.953 qpair failed and we were unable to recover it. 00:21:54.953 [2024-04-16 12:49:53.796789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.953 [2024-04-16 12:49:53.796986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.953 [2024-04-16 12:49:53.797038] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.953 qpair failed and we were unable to recover it. 
00:21:54.954 [2024-04-16 12:49:53.797208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.954 [2024-04-16 12:49:53.797332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.954 [2024-04-16 12:49:53.797355] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.954 qpair failed and we were unable to recover it. 00:21:54.954 [2024-04-16 12:49:53.797538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.954 [2024-04-16 12:49:53.797708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.954 [2024-04-16 12:49:53.797752] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.954 qpair failed and we were unable to recover it. 00:21:54.954 [2024-04-16 12:49:53.797878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.954 [2024-04-16 12:49:53.798050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.954 [2024-04-16 12:49:53.798079] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.954 qpair failed and we were unable to recover it. 00:21:54.954 [2024-04-16 12:49:53.798347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.954 [2024-04-16 12:49:53.798545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.954 [2024-04-16 12:49:53.798586] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.954 qpair failed and we were unable to recover it. 00:21:54.954 [2024-04-16 12:49:53.798786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.954 [2024-04-16 12:49:53.798966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.954 [2024-04-16 12:49:53.799029] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.954 qpair failed and we were unable to recover it. 00:21:54.954 [2024-04-16 12:49:53.799214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.954 [2024-04-16 12:49:53.799408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.954 [2024-04-16 12:49:53.799465] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.954 qpair failed and we were unable to recover it. 00:21:54.954 [2024-04-16 12:49:53.799666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.954 [2024-04-16 12:49:53.799884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.954 [2024-04-16 12:49:53.799949] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.954 qpair failed and we were unable to recover it. 
00:21:54.954 [2024-04-16 12:49:53.800120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.954 [2024-04-16 12:49:53.800342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.954 [2024-04-16 12:49:53.800393] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.954 qpair failed and we were unable to recover it. 00:21:54.954 [2024-04-16 12:49:53.800601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.954 [2024-04-16 12:49:53.800757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.954 [2024-04-16 12:49:53.800780] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.954 qpair failed and we were unable to recover it. 00:21:54.954 [2024-04-16 12:49:53.800959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.954 [2024-04-16 12:49:53.801147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.954 [2024-04-16 12:49:53.801204] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.954 qpair failed and we were unable to recover it. 00:21:54.954 [2024-04-16 12:49:53.801360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.954 [2024-04-16 12:49:53.801491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.954 [2024-04-16 12:49:53.801514] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.954 qpair failed and we were unable to recover it. 00:21:54.954 [2024-04-16 12:49:53.801666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.954 [2024-04-16 12:49:53.801840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.954 [2024-04-16 12:49:53.801869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.954 qpair failed and we were unable to recover it. 00:21:54.954 [2024-04-16 12:49:53.802077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.954 [2024-04-16 12:49:53.802281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.954 [2024-04-16 12:49:53.802334] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.954 qpair failed and we were unable to recover it. 00:21:54.954 [2024-04-16 12:49:53.802484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.954 [2024-04-16 12:49:53.802695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.954 [2024-04-16 12:49:53.802738] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.954 qpair failed and we were unable to recover it. 
00:21:54.954 [2024-04-16 12:49:53.802900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.954 [2024-04-16 12:49:53.803078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.954 [2024-04-16 12:49:53.803120] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.954 qpair failed and we were unable to recover it. 00:21:54.954 [2024-04-16 12:49:53.803299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.954 [2024-04-16 12:49:53.803436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.954 [2024-04-16 12:49:53.803459] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.954 qpair failed and we were unable to recover it. 00:21:54.954 [2024-04-16 12:49:53.803628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.954 [2024-04-16 12:49:53.803755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.954 [2024-04-16 12:49:53.803797] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.954 qpair failed and we were unable to recover it. 00:21:54.954 [2024-04-16 12:49:53.803989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.954 [2024-04-16 12:49:53.804118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.954 [2024-04-16 12:49:53.804141] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.954 qpair failed and we were unable to recover it. 00:21:54.954 [2024-04-16 12:49:53.804394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.954 [2024-04-16 12:49:53.804535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.954 [2024-04-16 12:49:53.804557] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.954 qpair failed and we were unable to recover it. 00:21:54.954 [2024-04-16 12:49:53.804763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.954 [2024-04-16 12:49:53.804931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.954 [2024-04-16 12:49:53.804959] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.954 qpair failed and we were unable to recover it. 00:21:54.954 [2024-04-16 12:49:53.805190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.954 [2024-04-16 12:49:53.805365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.954 [2024-04-16 12:49:53.805387] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.954 qpair failed and we were unable to recover it. 
00:21:54.954 [2024-04-16 12:49:53.805616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.954 [2024-04-16 12:49:53.805753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.954 [2024-04-16 12:49:53.805795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.954 qpair failed and we were unable to recover it. 00:21:54.954 [2024-04-16 12:49:53.805999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.954 [2024-04-16 12:49:53.806190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.954 [2024-04-16 12:49:53.806244] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.954 qpair failed and we were unable to recover it. 00:21:54.954 [2024-04-16 12:49:53.806398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.954 [2024-04-16 12:49:53.806553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.954 [2024-04-16 12:49:53.806595] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.954 qpair failed and we were unable to recover it. 00:21:54.954 [2024-04-16 12:49:53.806826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.954 [2024-04-16 12:49:53.806996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.954 [2024-04-16 12:49:53.807049] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.954 qpair failed and we were unable to recover it. 00:21:54.954 [2024-04-16 12:49:53.807261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.954 [2024-04-16 12:49:53.807420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.954 [2024-04-16 12:49:53.807442] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.954 qpair failed and we were unable to recover it. 00:21:54.954 [2024-04-16 12:49:53.807623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.954 [2024-04-16 12:49:53.807764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.954 [2024-04-16 12:49:53.807793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.954 qpair failed and we were unable to recover it. 00:21:54.955 [2024-04-16 12:49:53.807971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.955 [2024-04-16 12:49:53.808140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.955 [2024-04-16 12:49:53.808182] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.955 qpair failed and we were unable to recover it. 
00:21:54.955 [2024-04-16 12:49:53.808374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.955 [2024-04-16 12:49:53.808551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.955 [2024-04-16 12:49:53.808587] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.955 qpair failed and we were unable to recover it. 00:21:54.955 [2024-04-16 12:49:53.808761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.955 [2024-04-16 12:49:53.808926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.955 [2024-04-16 12:49:53.808968] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.955 qpair failed and we were unable to recover it. 00:21:54.955 [2024-04-16 12:49:53.809107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.955 [2024-04-16 12:49:53.809340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.955 [2024-04-16 12:49:53.809385] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.955 qpair failed and we were unable to recover it. 00:21:54.955 [2024-04-16 12:49:53.809659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.955 [2024-04-16 12:49:53.809814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.955 [2024-04-16 12:49:53.809856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.955 qpair failed and we were unable to recover it. 00:21:54.955 [2024-04-16 12:49:53.810059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.955 [2024-04-16 12:49:53.810248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.955 [2024-04-16 12:49:53.810300] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.955 qpair failed and we were unable to recover it. 00:21:54.955 [2024-04-16 12:49:53.810455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.955 [2024-04-16 12:49:53.810597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.955 [2024-04-16 12:49:53.810620] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.955 qpair failed and we were unable to recover it. 00:21:54.955 [2024-04-16 12:49:53.810823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.955 [2024-04-16 12:49:53.811068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.955 [2024-04-16 12:49:53.811118] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.955 qpair failed and we were unable to recover it. 
00:21:54.955 [2024-04-16 12:49:53.811344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.955 [2024-04-16 12:49:53.811501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.955 [2024-04-16 12:49:53.811523] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.955 qpair failed and we were unable to recover it. 00:21:54.955 [2024-04-16 12:49:53.811717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.955 [2024-04-16 12:49:53.811936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.955 [2024-04-16 12:49:53.811987] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.955 qpair failed and we were unable to recover it. 00:21:54.955 [2024-04-16 12:49:53.812242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.955 [2024-04-16 12:49:53.812458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.955 [2024-04-16 12:49:53.812484] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.955 qpair failed and we were unable to recover it. 00:21:54.955 [2024-04-16 12:49:53.812642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.955 [2024-04-16 12:49:53.812773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.955 [2024-04-16 12:49:53.812815] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.955 qpair failed and we were unable to recover it. 00:21:54.955 [2024-04-16 12:49:53.812975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.955 [2024-04-16 12:49:53.813159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.955 [2024-04-16 12:49:53.813218] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.955 qpair failed and we were unable to recover it. 00:21:54.955 [2024-04-16 12:49:53.813365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.955 [2024-04-16 12:49:53.813582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.955 [2024-04-16 12:49:53.813607] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.955 qpair failed and we were unable to recover it. 00:21:54.955 [2024-04-16 12:49:53.813823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.955 [2024-04-16 12:49:53.814028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.955 [2024-04-16 12:49:53.814077] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.955 qpair failed and we were unable to recover it. 
00:21:54.955 [2024-04-16 12:49:53.814221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.955 [2024-04-16 12:49:53.814493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.955 [2024-04-16 12:49:53.814515] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.955 qpair failed and we were unable to recover it. 00:21:54.955 [2024-04-16 12:49:53.814754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.955 [2024-04-16 12:49:53.814927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.955 [2024-04-16 12:49:53.814966] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.955 qpair failed and we were unable to recover it. 00:21:54.955 [2024-04-16 12:49:53.815238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.955 [2024-04-16 12:49:53.815470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.955 [2024-04-16 12:49:53.815493] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.955 qpair failed and we were unable to recover it. 00:21:54.955 [2024-04-16 12:49:53.815680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.955 [2024-04-16 12:49:53.815809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.955 [2024-04-16 12:49:53.815852] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.955 qpair failed and we were unable to recover it. 00:21:54.955 [2024-04-16 12:49:53.816046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.955 [2024-04-16 12:49:53.816240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.955 [2024-04-16 12:49:53.816293] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.955 qpair failed and we were unable to recover it. 00:21:54.955 [2024-04-16 12:49:53.816450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.955 [2024-04-16 12:49:53.816631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.955 [2024-04-16 12:49:53.816665] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.955 qpair failed and we were unable to recover it. 00:21:54.955 [2024-04-16 12:49:53.816836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.955 [2024-04-16 12:49:53.817028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.955 [2024-04-16 12:49:53.817083] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.955 qpair failed and we were unable to recover it. 
00:21:54.955 [2024-04-16 12:49:53.817215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.955 [2024-04-16 12:49:53.817353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.955 [2024-04-16 12:49:53.817376] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.955 qpair failed and we were unable to recover it. 00:21:54.955 [2024-04-16 12:49:53.817532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.955 [2024-04-16 12:49:53.817702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.955 [2024-04-16 12:49:53.817739] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.955 qpair failed and we were unable to recover it. 00:21:54.955 [2024-04-16 12:49:53.817858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.955 [2024-04-16 12:49:53.818034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.955 [2024-04-16 12:49:53.818070] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.955 qpair failed and we were unable to recover it. 00:21:54.955 [2024-04-16 12:49:53.818198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.955 [2024-04-16 12:49:53.818373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.955 [2024-04-16 12:49:53.818396] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.955 qpair failed and we were unable to recover it. 00:21:54.955 [2024-04-16 12:49:53.818603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.955 [2024-04-16 12:49:53.818780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.955 [2024-04-16 12:49:53.818819] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.955 qpair failed and we were unable to recover it. 00:21:54.956 [2024-04-16 12:49:53.819050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.956 [2024-04-16 12:49:53.819197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.956 [2024-04-16 12:49:53.819247] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.956 qpair failed and we were unable to recover it. 00:21:54.956 [2024-04-16 12:49:53.819447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.956 [2024-04-16 12:49:53.819625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.956 [2024-04-16 12:49:53.819662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.956 qpair failed and we were unable to recover it. 
00:21:54.956 [2024-04-16 12:49:53.819796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.956 [2024-04-16 12:49:53.819961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.956 [2024-04-16 12:49:53.820005] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.956 qpair failed and we were unable to recover it. 00:21:54.956 [2024-04-16 12:49:53.820151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.956 [2024-04-16 12:49:53.820304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.956 [2024-04-16 12:49:53.820345] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.956 qpair failed and we were unable to recover it. 00:21:54.956 [2024-04-16 12:49:53.820493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.956 [2024-04-16 12:49:53.820641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.956 [2024-04-16 12:49:53.820671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.956 qpair failed and we were unable to recover it. 00:21:54.956 [2024-04-16 12:49:53.820859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.956 [2024-04-16 12:49:53.820989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.956 [2024-04-16 12:49:53.821032] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.956 qpair failed and we were unable to recover it. 00:21:54.956 [2024-04-16 12:49:53.821253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.956 [2024-04-16 12:49:53.821405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.956 [2024-04-16 12:49:53.821427] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.956 qpair failed and we were unable to recover it. 00:21:54.956 [2024-04-16 12:49:53.821628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.956 [2024-04-16 12:49:53.821867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.956 [2024-04-16 12:49:53.821909] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.956 qpair failed and we were unable to recover it. 00:21:54.956 [2024-04-16 12:49:53.822087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.956 [2024-04-16 12:49:53.822300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.956 [2024-04-16 12:49:53.822350] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.956 qpair failed and we were unable to recover it. 
00:21:54.956 [2024-04-16 12:49:53.822532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.956 [2024-04-16 12:49:53.822710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.956 [2024-04-16 12:49:53.822753] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.956 qpair failed and we were unable to recover it. 00:21:54.956 [2024-04-16 12:49:53.823076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.956 [2024-04-16 12:49:53.823279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.956 [2024-04-16 12:49:53.823323] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.956 qpair failed and we were unable to recover it. 00:21:54.956 [2024-04-16 12:49:53.823510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.956 [2024-04-16 12:49:53.823706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.956 [2024-04-16 12:49:53.823738] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.956 qpair failed and we were unable to recover it. 00:21:54.956 [2024-04-16 12:49:53.823910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.956 [2024-04-16 12:49:53.824161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.956 [2024-04-16 12:49:53.824213] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.956 qpair failed and we were unable to recover it. 00:21:54.956 [2024-04-16 12:49:53.824460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.956 [2024-04-16 12:49:53.824606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.956 [2024-04-16 12:49:53.824631] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.956 qpair failed and we were unable to recover it. 00:21:54.956 [2024-04-16 12:49:53.824801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.956 [2024-04-16 12:49:53.824948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.956 [2024-04-16 12:49:53.824993] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.956 qpair failed and we were unable to recover it. 00:21:54.956 [2024-04-16 12:49:53.825216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.956 [2024-04-16 12:49:53.825351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.956 [2024-04-16 12:49:53.825389] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.956 qpair failed and we were unable to recover it. 
00:21:54.956 [2024-04-16 12:49:53.825588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.956 [2024-04-16 12:49:53.825760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.956 [2024-04-16 12:49:53.825783] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.956 qpair failed and we were unable to recover it. 00:21:54.956 [2024-04-16 12:49:53.825946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.956 [2024-04-16 12:49:53.826222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.956 [2024-04-16 12:49:53.826273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.956 qpair failed and we were unable to recover it. 00:21:54.956 [2024-04-16 12:49:53.826409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.956 [2024-04-16 12:49:53.826548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.956 [2024-04-16 12:49:53.826578] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.956 qpair failed and we were unable to recover it. 00:21:54.956 [2024-04-16 12:49:53.826766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.956 [2024-04-16 12:49:53.826936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.956 [2024-04-16 12:49:53.826959] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.956 qpair failed and we were unable to recover it. 00:21:54.956 [2024-04-16 12:49:53.827101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.956 [2024-04-16 12:49:53.827322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.956 [2024-04-16 12:49:53.827386] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.956 qpair failed and we were unable to recover it. 00:21:54.956 [2024-04-16 12:49:53.827525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.956 [2024-04-16 12:49:53.827713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.956 [2024-04-16 12:49:53.827757] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.956 qpair failed and we were unable to recover it. 00:21:54.956 [2024-04-16 12:49:53.827930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.956 [2024-04-16 12:49:53.828095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.957 [2024-04-16 12:49:53.828136] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.957 qpair failed and we were unable to recover it. 
00:21:54.957 [2024-04-16 12:49:53.828274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:54.957 [2024-04-16 12:49:53.828441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:54.957 [2024-04-16 12:49:53.828465] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420
00:21:54.957 qpair failed and we were unable to recover it.
00:21:54.957 [2024-04-16 12:49:53.828658 .. 12:49:53.884828] (last message sequence repeated verbatim for every subsequent reconnect attempt in this interval: connect() failed, errno = 111; sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.)
00:21:54.962 [2024-04-16 12:49:53.884957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.962 [2024-04-16 12:49:53.885126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.962 [2024-04-16 12:49:53.885154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.962 qpair failed and we were unable to recover it. 00:21:54.962 [2024-04-16 12:49:53.885324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.962 [2024-04-16 12:49:53.886181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.962 [2024-04-16 12:49:53.886209] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.962 qpair failed and we were unable to recover it. 00:21:54.962 [2024-04-16 12:49:53.886382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.962 [2024-04-16 12:49:53.886561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.962 [2024-04-16 12:49:53.886590] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.962 qpair failed and we were unable to recover it. 00:21:54.962 [2024-04-16 12:49:53.886757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.962 [2024-04-16 12:49:53.886948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.962 [2024-04-16 12:49:53.887005] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.962 qpair failed and we were unable to recover it. 00:21:54.962 [2024-04-16 12:49:53.887152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.962 [2024-04-16 12:49:53.887292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.962 [2024-04-16 12:49:53.887335] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.962 qpair failed and we were unable to recover it. 00:21:54.962 [2024-04-16 12:49:53.887490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.962 [2024-04-16 12:49:53.887660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.962 [2024-04-16 12:49:53.887690] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.962 qpair failed and we were unable to recover it. 00:21:54.962 [2024-04-16 12:49:53.887861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.962 [2024-04-16 12:49:53.888017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.962 [2024-04-16 12:49:53.888066] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.962 qpair failed and we were unable to recover it. 
00:21:54.962 [2024-04-16 12:49:53.888225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.962 [2024-04-16 12:49:53.888375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.962 [2024-04-16 12:49:53.888414] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.962 qpair failed and we were unable to recover it. 00:21:54.962 [2024-04-16 12:49:53.888569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.962 [2024-04-16 12:49:53.888725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.962 [2024-04-16 12:49:53.888767] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.962 qpair failed and we were unable to recover it. 00:21:54.962 [2024-04-16 12:49:53.888888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.962 [2024-04-16 12:49:53.889050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.962 [2024-04-16 12:49:53.889074] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.962 qpair failed and we were unable to recover it. 00:21:54.962 [2024-04-16 12:49:53.889264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.962 [2024-04-16 12:49:53.889423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.962 [2024-04-16 12:49:53.889447] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.962 qpair failed and we were unable to recover it. 00:21:54.962 [2024-04-16 12:49:53.889594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.962 [2024-04-16 12:49:53.889739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.962 [2024-04-16 12:49:53.889781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.962 qpair failed and we were unable to recover it. 00:21:54.962 [2024-04-16 12:49:53.889942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.962 [2024-04-16 12:49:53.890111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.963 [2024-04-16 12:49:53.890139] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.963 qpair failed and we were unable to recover it. 00:21:54.963 [2024-04-16 12:49:53.890278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.963 [2024-04-16 12:49:53.890439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.963 [2024-04-16 12:49:53.890463] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.963 qpair failed and we were unable to recover it. 
00:21:54.963 [2024-04-16 12:49:53.890627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.963 [2024-04-16 12:49:53.890778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.963 [2024-04-16 12:49:53.890803] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.963 qpair failed and we were unable to recover it. 00:21:54.963 [2024-04-16 12:49:53.890943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.963 [2024-04-16 12:49:53.891076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.963 [2024-04-16 12:49:53.891100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.963 qpair failed and we were unable to recover it. 00:21:54.963 [2024-04-16 12:49:53.891285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.963 [2024-04-16 12:49:53.891435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.963 [2024-04-16 12:49:53.891458] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.963 qpair failed and we were unable to recover it. 00:21:54.963 [2024-04-16 12:49:53.891608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.963 [2024-04-16 12:49:53.891767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.963 [2024-04-16 12:49:53.891807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.963 qpair failed and we were unable to recover it. 00:21:54.963 [2024-04-16 12:49:53.892012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.963 [2024-04-16 12:49:53.892158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.963 [2024-04-16 12:49:53.892231] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.963 qpair failed and we were unable to recover it. 00:21:54.963 [2024-04-16 12:49:53.892385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.963 [2024-04-16 12:49:53.892551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.963 [2024-04-16 12:49:53.892586] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.963 qpair failed and we were unable to recover it. 00:21:54.963 [2024-04-16 12:49:53.892726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.963 [2024-04-16 12:49:53.892901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.963 [2024-04-16 12:49:53.892929] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.963 qpair failed and we were unable to recover it. 
00:21:54.963 [2024-04-16 12:49:53.893097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.963 [2024-04-16 12:49:53.893238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.963 [2024-04-16 12:49:53.893281] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.963 qpair failed and we were unable to recover it. 00:21:54.963 [2024-04-16 12:49:53.893426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.963 [2024-04-16 12:49:53.893604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.963 [2024-04-16 12:49:53.893653] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.963 qpair failed and we were unable to recover it. 00:21:54.963 [2024-04-16 12:49:53.893831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.963 [2024-04-16 12:49:53.893983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.963 [2024-04-16 12:49:53.894023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.963 qpair failed and we were unable to recover it. 00:21:54.963 [2024-04-16 12:49:53.894213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.963 [2024-04-16 12:49:53.894373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.963 [2024-04-16 12:49:53.894397] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.963 qpair failed and we were unable to recover it. 00:21:54.963 [2024-04-16 12:49:53.894599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.963 [2024-04-16 12:49:53.894723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.963 [2024-04-16 12:49:53.894766] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.963 qpair failed and we were unable to recover it. 00:21:54.963 [2024-04-16 12:49:53.894962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.963 [2024-04-16 12:49:53.895168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.963 [2024-04-16 12:49:53.895219] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.963 qpair failed and we were unable to recover it. 00:21:54.963 [2024-04-16 12:49:53.895362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.963 [2024-04-16 12:49:53.895512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.963 [2024-04-16 12:49:53.895536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.963 qpair failed and we were unable to recover it. 
00:21:54.963 [2024-04-16 12:49:53.895719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.963 [2024-04-16 12:49:53.895901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.963 [2024-04-16 12:49:53.895964] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.963 qpair failed and we were unable to recover it. 00:21:54.963 [2024-04-16 12:49:53.896120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.963 [2024-04-16 12:49:53.896335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.963 [2024-04-16 12:49:53.896400] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.963 qpair failed and we were unable to recover it. 00:21:54.963 [2024-04-16 12:49:53.896573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.963 [2024-04-16 12:49:53.896726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.963 [2024-04-16 12:49:53.896755] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.963 qpair failed and we were unable to recover it. 00:21:54.963 [2024-04-16 12:49:53.896918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.963 [2024-04-16 12:49:53.897071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.963 [2024-04-16 12:49:53.897111] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.963 qpair failed and we were unable to recover it. 00:21:54.963 [2024-04-16 12:49:53.897280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.963 [2024-04-16 12:49:53.897446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.963 [2024-04-16 12:49:53.897484] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.963 qpair failed and we were unable to recover it. 00:21:54.963 [2024-04-16 12:49:53.897664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.963 [2024-04-16 12:49:53.897900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.963 [2024-04-16 12:49:53.897962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.963 qpair failed and we were unable to recover it. 00:21:54.963 [2024-04-16 12:49:53.898166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.963 [2024-04-16 12:49:53.898360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.963 [2024-04-16 12:49:53.898383] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.963 qpair failed and we were unable to recover it. 
00:21:54.963 [2024-04-16 12:49:53.898557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.963 [2024-04-16 12:49:53.898683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.963 [2024-04-16 12:49:53.898707] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.963 qpair failed and we were unable to recover it. 00:21:54.963 [2024-04-16 12:49:53.898904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.963 [2024-04-16 12:49:53.899044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.963 [2024-04-16 12:49:53.899083] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.963 qpair failed and we were unable to recover it. 00:21:54.963 [2024-04-16 12:49:53.899220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.963 [2024-04-16 12:49:53.899384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.963 [2024-04-16 12:49:53.899408] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.963 qpair failed and we were unable to recover it. 00:21:54.963 [2024-04-16 12:49:53.899537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.963 [2024-04-16 12:49:53.899716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.963 [2024-04-16 12:49:53.899758] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.963 qpair failed and we were unable to recover it. 00:21:54.963 [2024-04-16 12:49:53.899904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.963 [2024-04-16 12:49:53.900077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.963 [2024-04-16 12:49:53.900101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.963 qpair failed and we were unable to recover it. 00:21:54.963 [2024-04-16 12:49:53.900231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.963 [2024-04-16 12:49:53.900410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.963 [2024-04-16 12:49:53.900448] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.963 qpair failed and we were unable to recover it. 00:21:54.963 [2024-04-16 12:49:53.900612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.963 [2024-04-16 12:49:53.900749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.964 [2024-04-16 12:49:53.900791] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.964 qpair failed and we were unable to recover it. 
00:21:54.964 [2024-04-16 12:49:53.900978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.964 [2024-04-16 12:49:53.901135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.964 [2024-04-16 12:49:53.901185] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.964 qpair failed and we were unable to recover it. 00:21:54.964 [2024-04-16 12:49:53.901325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.964 [2024-04-16 12:49:53.901466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.964 [2024-04-16 12:49:53.901490] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.964 qpair failed and we were unable to recover it. 00:21:54.964 [2024-04-16 12:49:53.901650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.964 [2024-04-16 12:49:53.901773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.964 [2024-04-16 12:49:53.901817] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.964 qpair failed and we were unable to recover it. 00:21:54.964 [2024-04-16 12:49:53.901972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.964 [2024-04-16 12:49:53.902135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.964 [2024-04-16 12:49:53.902179] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.964 qpair failed and we were unable to recover it. 00:21:54.964 [2024-04-16 12:49:53.902334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.964 [2024-04-16 12:49:53.902452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.964 [2024-04-16 12:49:53.902476] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.964 qpair failed and we were unable to recover it. 00:21:54.964 [2024-04-16 12:49:53.902652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.964 [2024-04-16 12:49:53.902821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.964 [2024-04-16 12:49:53.902847] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.964 qpair failed and we were unable to recover it. 00:21:54.964 [2024-04-16 12:49:53.902997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.964 [2024-04-16 12:49:53.903171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.964 [2024-04-16 12:49:53.903195] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.964 qpair failed and we were unable to recover it. 
00:21:54.964 [2024-04-16 12:49:53.903335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.964 [2024-04-16 12:49:53.903481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.964 [2024-04-16 12:49:53.903506] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.964 qpair failed and we were unable to recover it. 00:21:54.964 [2024-04-16 12:49:53.903665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.964 [2024-04-16 12:49:53.903824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.964 [2024-04-16 12:49:53.903853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.964 qpair failed and we were unable to recover it. 00:21:54.964 [2024-04-16 12:49:53.904019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.964 [2024-04-16 12:49:53.904237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.964 [2024-04-16 12:49:53.904296] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.964 qpair failed and we were unable to recover it. 00:21:54.964 [2024-04-16 12:49:53.904476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.964 [2024-04-16 12:49:53.904633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.964 [2024-04-16 12:49:53.904661] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.964 qpair failed and we were unable to recover it. 00:21:54.964 [2024-04-16 12:49:53.904827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.964 [2024-04-16 12:49:53.904989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.964 [2024-04-16 12:49:53.905013] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.964 qpair failed and we were unable to recover it. 00:21:54.964 [2024-04-16 12:49:53.905159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.964 [2024-04-16 12:49:53.905280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.964 [2024-04-16 12:49:53.905305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.964 qpair failed and we were unable to recover it. 00:21:54.964 [2024-04-16 12:49:53.905427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.964 [2024-04-16 12:49:53.905576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.964 [2024-04-16 12:49:53.905601] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.964 qpair failed and we were unable to recover it. 
00:21:54.964 [2024-04-16 12:49:53.905739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.964 [2024-04-16 12:49:53.905887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.964 [2024-04-16 12:49:53.905926] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.964 qpair failed and we were unable to recover it. 00:21:54.964 [2024-04-16 12:49:53.906048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.964 [2024-04-16 12:49:53.906188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.964 [2024-04-16 12:49:53.906212] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.964 qpair failed and we were unable to recover it. 00:21:54.964 [2024-04-16 12:49:53.906333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.964 [2024-04-16 12:49:53.906454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.964 [2024-04-16 12:49:53.906478] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.964 qpair failed and we were unable to recover it. 00:21:54.964 [2024-04-16 12:49:53.906615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.964 [2024-04-16 12:49:53.906761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.964 [2024-04-16 12:49:53.906786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.964 qpair failed and we were unable to recover it. 00:21:54.964 [2024-04-16 12:49:53.906948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.964 [2024-04-16 12:49:53.907077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.964 [2024-04-16 12:49:53.907118] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.964 qpair failed and we were unable to recover it. 00:21:54.964 [2024-04-16 12:49:53.907238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.964 [2024-04-16 12:49:53.907369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.964 [2024-04-16 12:49:53.907394] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.964 qpair failed and we were unable to recover it. 00:21:54.964 [2024-04-16 12:49:53.907585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.964 [2024-04-16 12:49:53.907706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.964 [2024-04-16 12:49:53.907748] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.964 qpair failed and we were unable to recover it. 
00:21:54.964 [2024-04-16 12:49:53.907888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.964 [2024-04-16 12:49:53.908044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.964 [2024-04-16 12:49:53.908085] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.964 qpair failed and we were unable to recover it. 00:21:54.964 [2024-04-16 12:49:53.908230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.964 [2024-04-16 12:49:53.908364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.964 [2024-04-16 12:49:53.908388] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.964 qpair failed and we were unable to recover it. 00:21:54.964 [2024-04-16 12:49:53.908525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.964 [2024-04-16 12:49:53.908724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.964 [2024-04-16 12:49:53.908753] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.964 qpair failed and we were unable to recover it. 00:21:54.964 [2024-04-16 12:49:53.908919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.964 [2024-04-16 12:49:53.909104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.964 [2024-04-16 12:49:53.909127] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.964 qpair failed and we were unable to recover it. 00:21:54.964 [2024-04-16 12:49:53.909263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.964 [2024-04-16 12:49:53.909408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.964 [2024-04-16 12:49:53.909433] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.964 qpair failed and we were unable to recover it. 00:21:54.964 [2024-04-16 12:49:53.909560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.964 [2024-04-16 12:49:53.909743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.964 [2024-04-16 12:49:53.909783] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.964 qpair failed and we were unable to recover it. 00:21:54.964 [2024-04-16 12:49:53.909912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.964 [2024-04-16 12:49:53.910081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.964 [2024-04-16 12:49:53.910106] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.965 qpair failed and we were unable to recover it. 
00:21:54.965 [2024-04-16 12:49:53.910230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.965 [2024-04-16 12:49:53.910424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.965 [2024-04-16 12:49:53.910448] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.965 qpair failed and we were unable to recover it. 00:21:54.965 [2024-04-16 12:49:53.910580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.965 [2024-04-16 12:49:53.910732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.965 [2024-04-16 12:49:53.910774] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.965 qpair failed and we were unable to recover it. 00:21:54.965 [2024-04-16 12:49:53.910945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.965 [2024-04-16 12:49:53.911165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.965 [2024-04-16 12:49:53.911224] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.965 qpair failed and we were unable to recover it. 00:21:54.965 [2024-04-16 12:49:53.911354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.965 [2024-04-16 12:49:53.911521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.965 [2024-04-16 12:49:53.911546] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.965 qpair failed and we were unable to recover it. 00:21:54.965 [2024-04-16 12:49:53.911724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.965 [2024-04-16 12:49:53.911932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.965 [2024-04-16 12:49:53.911992] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.965 qpair failed and we were unable to recover it. 00:21:54.965 [2024-04-16 12:49:53.912172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.965 [2024-04-16 12:49:53.912302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.965 [2024-04-16 12:49:53.912327] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.965 qpair failed and we were unable to recover it. 00:21:54.965 [2024-04-16 12:49:53.912494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.965 [2024-04-16 12:49:53.912672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.965 [2024-04-16 12:49:53.912702] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.965 qpair failed and we were unable to recover it. 
00:21:54.965 [2024-04-16 12:49:53.912862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.965 [2024-04-16 12:49:53.913033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.965 [2024-04-16 12:49:53.913078] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.965 qpair failed and we were unable to recover it. 00:21:54.965 [2024-04-16 12:49:53.913229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.965 [2024-04-16 12:49:53.913368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.965 [2024-04-16 12:49:53.913392] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.965 qpair failed and we were unable to recover it. 00:21:54.965 [2024-04-16 12:49:53.913586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.965 [2024-04-16 12:49:53.913718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.965 [2024-04-16 12:49:53.913760] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.965 qpair failed and we were unable to recover it. 00:21:54.965 [2024-04-16 12:49:53.913932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.965 [2024-04-16 12:49:53.914080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.965 [2024-04-16 12:49:53.914122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.965 qpair failed and we were unable to recover it. 00:21:54.965 [2024-04-16 12:49:53.914297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.965 [2024-04-16 12:49:53.914432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.965 [2024-04-16 12:49:53.914455] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.965 qpair failed and we were unable to recover it. 00:21:54.965 [2024-04-16 12:49:53.914594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.965 [2024-04-16 12:49:53.914722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.965 [2024-04-16 12:49:53.914764] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.965 qpair failed and we were unable to recover it. 00:21:54.965 [2024-04-16 12:49:53.914907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.965 [2024-04-16 12:49:53.915101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.965 [2024-04-16 12:49:53.915153] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.965 qpair failed and we were unable to recover it. 
00:21:54.965 [2024-04-16 12:49:53.915320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.965 [2024-04-16 12:49:53.915460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.965 [2024-04-16 12:49:53.915483] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.965 qpair failed and we were unable to recover it. 00:21:54.965 [2024-04-16 12:49:53.915638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.965 [2024-04-16 12:49:53.915781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.965 [2024-04-16 12:49:53.915810] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.965 qpair failed and we were unable to recover it. 00:21:54.965 [2024-04-16 12:49:53.915990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.965 [2024-04-16 12:49:53.916152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.965 [2024-04-16 12:49:53.916203] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.965 qpair failed and we were unable to recover it. 00:21:54.965 [2024-04-16 12:49:53.916358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.965 [2024-04-16 12:49:53.916525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.965 [2024-04-16 12:49:53.916569] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.965 qpair failed and we were unable to recover it. 00:21:54.965 [2024-04-16 12:49:53.916730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.965 [2024-04-16 12:49:53.916858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.965 [2024-04-16 12:49:53.916899] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.965 qpair failed and we were unable to recover it. 00:21:54.965 [2024-04-16 12:49:53.917011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.965 [2024-04-16 12:49:53.917157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.965 [2024-04-16 12:49:53.917180] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.965 qpair failed and we were unable to recover it. 00:21:54.965 [2024-04-16 12:49:53.917310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.965 [2024-04-16 12:49:53.917449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.965 [2024-04-16 12:49:53.917471] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.965 qpair failed and we were unable to recover it. 
00:21:54.965 [2024-04-16 12:49:53.917667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.965 [2024-04-16 12:49:53.917802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.965 [2024-04-16 12:49:53.917825] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.965 qpair failed and we were unable to recover it. 00:21:54.965 [2024-04-16 12:49:53.917996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.965 [2024-04-16 12:49:53.918142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.965 [2024-04-16 12:49:53.918166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.965 qpair failed and we were unable to recover it. 00:21:54.965 [2024-04-16 12:49:53.918279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.965 [2024-04-16 12:49:53.918419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.965 [2024-04-16 12:49:53.918443] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.965 qpair failed and we were unable to recover it. 00:21:54.966 [2024-04-16 12:49:53.918593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.966 [2024-04-16 12:49:53.918762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.966 [2024-04-16 12:49:53.918803] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.966 qpair failed and we were unable to recover it. 00:21:54.966 [2024-04-16 12:49:53.918958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.966 [2024-04-16 12:49:53.919143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.966 [2024-04-16 12:49:53.919204] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.966 qpair failed and we were unable to recover it. 00:21:54.966 [2024-04-16 12:49:53.919343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.966 [2024-04-16 12:49:53.919489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.966 [2024-04-16 12:49:53.919512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.966 qpair failed and we were unable to recover it. 00:21:54.966 [2024-04-16 12:49:53.919660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.966 [2024-04-16 12:49:53.919833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.966 [2024-04-16 12:49:53.919862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.966 qpair failed and we were unable to recover it. 
00:21:54.966 [2024-04-16 12:49:53.920041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:54.966 [2024-04-16 12:49:53.920235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:54.966 [2024-04-16 12:49:53.920287] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420
00:21:54.966 qpair failed and we were unable to recover it.
00:21:54.966 [2024-04-16 12:49:53.920444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:54.966 [2024-04-16 12:49:53.920612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:54.966 [2024-04-16 12:49:53.920642] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420
00:21:54.966 qpair failed and we were unable to recover it.
00:21:54.966 [... the same three-message cycle — two posix_sock_create connect() failures (errno = 111), one nvme_tcp_qpair_connect_sock sock connection error for tqpair=0x7f7bb0000b90 (addr=10.0.0.2, port=4420), and "qpair failed and we were unable to recover it." — repeats continuously from 12:49:53.920832 through 12:49:53.980300 ...]
00:21:54.971 [2024-04-16 12:49:53.980442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:54.971 [2024-04-16 12:49:53.980609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:54.971 [2024-04-16 12:49:53.980648] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420
00:21:54.971 qpair failed and we were unable to recover it.
00:21:54.971 [2024-04-16 12:49:53.980812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.971 [2024-04-16 12:49:53.980975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.971 [2024-04-16 12:49:53.981011] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.971 qpair failed and we were unable to recover it. 00:21:54.971 [2024-04-16 12:49:53.981145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.971 [2024-04-16 12:49:53.981306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.971 [2024-04-16 12:49:53.981329] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.971 qpair failed and we were unable to recover it. 00:21:54.971 [2024-04-16 12:49:53.981470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.971 [2024-04-16 12:49:53.981615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.971 [2024-04-16 12:49:53.981639] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.971 qpair failed and we were unable to recover it. 00:21:54.971 [2024-04-16 12:49:53.981803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.971 [2024-04-16 12:49:53.981966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.971 [2024-04-16 12:49:53.982002] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.971 qpair failed and we were unable to recover it. 00:21:54.971 [2024-04-16 12:49:53.982160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.971 [2024-04-16 12:49:53.982272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.971 [2024-04-16 12:49:53.982296] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.971 qpair failed and we were unable to recover it. 00:21:54.971 [2024-04-16 12:49:53.982431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.971 [2024-04-16 12:49:53.982573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.971 [2024-04-16 12:49:53.982597] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.971 qpair failed and we were unable to recover it. 00:21:54.971 [2024-04-16 12:49:53.982736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.971 [2024-04-16 12:49:53.982878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.971 [2024-04-16 12:49:53.982901] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.971 qpair failed and we were unable to recover it. 
00:21:54.971 [2024-04-16 12:49:53.983058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.971 [2024-04-16 12:49:53.983227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.971 [2024-04-16 12:49:53.983250] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.971 qpair failed and we were unable to recover it. 00:21:54.971 [2024-04-16 12:49:53.983379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.971 [2024-04-16 12:49:53.983516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.971 [2024-04-16 12:49:53.983539] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.971 qpair failed and we were unable to recover it. 00:21:54.971 [2024-04-16 12:49:53.983716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.971 [2024-04-16 12:49:53.983850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.971 [2024-04-16 12:49:53.983892] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.971 qpair failed and we were unable to recover it. 00:21:54.971 [2024-04-16 12:49:53.984060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.971 [2024-04-16 12:49:53.984246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.971 [2024-04-16 12:49:53.984305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.971 qpair failed and we were unable to recover it. 00:21:54.971 [2024-04-16 12:49:53.984437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.971 [2024-04-16 12:49:53.984574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.971 [2024-04-16 12:49:53.984598] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.971 qpair failed and we were unable to recover it. 00:21:54.971 [2024-04-16 12:49:53.984755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.971 [2024-04-16 12:49:53.984922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.971 [2024-04-16 12:49:53.984980] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.971 qpair failed and we were unable to recover it. 00:21:54.971 [2024-04-16 12:49:53.985164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.971 [2024-04-16 12:49:53.985359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.971 [2024-04-16 12:49:53.985381] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.971 qpair failed and we were unable to recover it. 
00:21:54.971 [2024-04-16 12:49:53.985519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.971 [2024-04-16 12:49:53.985697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.971 [2024-04-16 12:49:53.985740] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.971 qpair failed and we were unable to recover it. 00:21:54.971 [2024-04-16 12:49:53.985890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.971 [2024-04-16 12:49:53.986064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.971 [2024-04-16 12:49:53.986092] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.971 qpair failed and we were unable to recover it. 00:21:54.971 [2024-04-16 12:49:53.986267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.971 [2024-04-16 12:49:53.986440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.971 [2024-04-16 12:49:53.986477] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.971 qpair failed and we were unable to recover it. 00:21:54.971 [2024-04-16 12:49:53.986617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.971 [2024-04-16 12:49:53.986732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.971 [2024-04-16 12:49:53.986760] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.971 qpair failed and we were unable to recover it. 00:21:54.971 [2024-04-16 12:49:53.986914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.971 [2024-04-16 12:49:53.987031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.972 [2024-04-16 12:49:53.987055] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.972 qpair failed and we were unable to recover it. 00:21:54.972 [2024-04-16 12:49:53.987215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.972 [2024-04-16 12:49:53.987342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.972 [2024-04-16 12:49:53.987365] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.972 qpair failed and we were unable to recover it. 00:21:54.972 [2024-04-16 12:49:53.987572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.972 [2024-04-16 12:49:53.987720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.972 [2024-04-16 12:49:53.987744] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.972 qpair failed and we were unable to recover it. 
00:21:54.972 [2024-04-16 12:49:53.987886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.972 [2024-04-16 12:49:53.988031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.972 [2024-04-16 12:49:53.988068] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.972 qpair failed and we were unable to recover it. 00:21:54.972 [2024-04-16 12:49:53.988242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.972 [2024-04-16 12:49:53.988434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.972 [2024-04-16 12:49:53.988456] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.972 qpair failed and we were unable to recover it. 00:21:54.972 [2024-04-16 12:49:53.988645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.972 [2024-04-16 12:49:53.988787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.972 [2024-04-16 12:49:53.988828] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.972 qpair failed and we were unable to recover it. 00:21:54.972 [2024-04-16 12:49:53.988993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.972 [2024-04-16 12:49:53.989255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.972 [2024-04-16 12:49:53.989306] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.972 qpair failed and we were unable to recover it. 00:21:54.972 [2024-04-16 12:49:53.989464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.972 [2024-04-16 12:49:53.989632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.972 [2024-04-16 12:49:53.989657] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.972 qpair failed and we were unable to recover it. 00:21:54.972 [2024-04-16 12:49:53.989800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.972 [2024-04-16 12:49:53.989943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.972 [2024-04-16 12:49:53.989965] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.972 qpair failed and we were unable to recover it. 00:21:54.972 [2024-04-16 12:49:53.990155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.972 [2024-04-16 12:49:53.990297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.972 [2024-04-16 12:49:53.990324] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.972 qpair failed and we were unable to recover it. 
00:21:54.972 [2024-04-16 12:49:53.990441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.972 [2024-04-16 12:49:53.990582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.972 [2024-04-16 12:49:53.990620] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.972 qpair failed and we were unable to recover it. 00:21:54.972 [2024-04-16 12:49:53.990774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.972 [2024-04-16 12:49:53.990988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.972 [2024-04-16 12:49:53.991022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.972 qpair failed and we were unable to recover it. 00:21:54.972 [2024-04-16 12:49:53.991254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.972 [2024-04-16 12:49:53.991455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.972 [2024-04-16 12:49:53.991479] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.972 qpair failed and we were unable to recover it. 00:21:54.972 [2024-04-16 12:49:53.991657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.972 [2024-04-16 12:49:53.991814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.972 [2024-04-16 12:49:53.991841] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.972 qpair failed and we were unable to recover it. 00:21:54.972 [2024-04-16 12:49:53.992094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.972 [2024-04-16 12:49:53.992284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.972 [2024-04-16 12:49:53.992335] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.972 qpair failed and we were unable to recover it. 00:21:54.972 [2024-04-16 12:49:53.992542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.972 [2024-04-16 12:49:53.992750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.972 [2024-04-16 12:49:53.992787] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.972 qpair failed and we were unable to recover it. 00:21:54.972 [2024-04-16 12:49:53.992970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.972 [2024-04-16 12:49:53.993168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.972 [2024-04-16 12:49:53.993228] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.972 qpair failed and we were unable to recover it. 
00:21:54.972 [2024-04-16 12:49:53.993414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.972 [2024-04-16 12:49:53.993533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.972 [2024-04-16 12:49:53.993579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.972 qpair failed and we were unable to recover it. 00:21:54.972 [2024-04-16 12:49:53.993717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.972 [2024-04-16 12:49:53.993959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.972 [2024-04-16 12:49:53.994012] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.972 qpair failed and we were unable to recover it. 00:21:54.972 [2024-04-16 12:49:53.994185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.972 [2024-04-16 12:49:53.994332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.972 [2024-04-16 12:49:53.994372] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.972 qpair failed and we were unable to recover it. 00:21:54.972 [2024-04-16 12:49:53.994573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.972 [2024-04-16 12:49:53.994713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.972 [2024-04-16 12:49:53.994740] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.972 qpair failed and we were unable to recover it. 00:21:54.972 [2024-04-16 12:49:53.994883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.972 [2024-04-16 12:49:53.995050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.972 [2024-04-16 12:49:53.995100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.972 qpair failed and we were unable to recover it. 00:21:54.972 [2024-04-16 12:49:53.995291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.972 [2024-04-16 12:49:53.995484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.972 [2024-04-16 12:49:53.995511] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.972 qpair failed and we were unable to recover it. 00:21:54.972 [2024-04-16 12:49:53.995658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.972 [2024-04-16 12:49:53.995809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.972 [2024-04-16 12:49:53.995837] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.972 qpair failed and we were unable to recover it. 
00:21:54.972 [2024-04-16 12:49:53.995993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.972 [2024-04-16 12:49:53.996145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.972 [2024-04-16 12:49:53.996170] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:54.972 qpair failed and we were unable to recover it. 00:21:55.253 [2024-04-16 12:49:53.996294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.253 [2024-04-16 12:49:53.996459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.253 [2024-04-16 12:49:53.996485] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.253 qpair failed and we were unable to recover it. 00:21:55.253 [2024-04-16 12:49:53.996660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.253 [2024-04-16 12:49:53.996820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.253 [2024-04-16 12:49:53.996848] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.253 qpair failed and we were unable to recover it. 00:21:55.253 [2024-04-16 12:49:53.997028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.253 [2024-04-16 12:49:53.997219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.253 [2024-04-16 12:49:53.997276] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.253 qpair failed and we were unable to recover it. 00:21:55.253 [2024-04-16 12:49:53.997442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.253 [2024-04-16 12:49:53.997616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.253 [2024-04-16 12:49:53.997642] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.253 qpair failed and we were unable to recover it. 00:21:55.253 [2024-04-16 12:49:53.997769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.253 [2024-04-16 12:49:53.997935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.253 [2024-04-16 12:49:53.997981] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.253 qpair failed and we were unable to recover it. 00:21:55.253 [2024-04-16 12:49:53.998156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.253 [2024-04-16 12:49:53.998299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.253 [2024-04-16 12:49:53.998338] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.253 qpair failed and we were unable to recover it. 
00:21:55.253 [2024-04-16 12:49:53.998502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.253 [2024-04-16 12:49:53.998678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.253 [2024-04-16 12:49:53.998721] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.253 qpair failed and we were unable to recover it. 00:21:55.253 [2024-04-16 12:49:53.998870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.253 [2024-04-16 12:49:53.999065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.253 [2024-04-16 12:49:53.999121] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.253 qpair failed and we were unable to recover it. 00:21:55.253 [2024-04-16 12:49:53.999290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.253 [2024-04-16 12:49:53.999436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.253 [2024-04-16 12:49:53.999475] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.253 qpair failed and we were unable to recover it. 00:21:55.253 [2024-04-16 12:49:53.999639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.253 [2024-04-16 12:49:53.999788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.253 [2024-04-16 12:49:53.999836] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.253 qpair failed and we were unable to recover it. 00:21:55.253 [2024-04-16 12:49:53.999984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.253 [2024-04-16 12:49:54.000187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.253 [2024-04-16 12:49:54.000213] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.253 qpair failed and we were unable to recover it. 00:21:55.253 [2024-04-16 12:49:54.000399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.253 [2024-04-16 12:49:54.000523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.253 [2024-04-16 12:49:54.000548] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.253 qpair failed and we were unable to recover it. 00:21:55.253 [2024-04-16 12:49:54.000716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.253 [2024-04-16 12:49:54.000868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.253 [2024-04-16 12:49:54.000911] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.253 qpair failed and we were unable to recover it. 
00:21:55.253 [2024-04-16 12:49:54.001055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.253 [2024-04-16 12:49:54.001206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.253 [2024-04-16 12:49:54.001249] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.253 qpair failed and we were unable to recover it. 00:21:55.253 [2024-04-16 12:49:54.001443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.253 [2024-04-16 12:49:54.001588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.253 [2024-04-16 12:49:54.001615] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.253 qpair failed and we were unable to recover it. 00:21:55.253 [2024-04-16 12:49:54.001771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.253 [2024-04-16 12:49:54.001984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.253 [2024-04-16 12:49:54.002036] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.253 qpair failed and we were unable to recover it. 00:21:55.253 [2024-04-16 12:49:54.002234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.253 [2024-04-16 12:49:54.002405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.253 [2024-04-16 12:49:54.002431] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.253 qpair failed and we were unable to recover it. 00:21:55.253 [2024-04-16 12:49:54.002608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.253 [2024-04-16 12:49:54.002784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.253 [2024-04-16 12:49:54.002827] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.253 qpair failed and we were unable to recover it. 00:21:55.253 [2024-04-16 12:49:54.003007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.253 [2024-04-16 12:49:54.003155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.253 [2024-04-16 12:49:54.003242] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.253 qpair failed and we were unable to recover it. 00:21:55.253 [2024-04-16 12:49:54.003398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.253 [2024-04-16 12:49:54.003543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.253 [2024-04-16 12:49:54.003588] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.253 qpair failed and we were unable to recover it. 
00:21:55.253 [2024-04-16 12:49:54.003762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.254 [2024-04-16 12:49:54.003975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.254 [2024-04-16 12:49:54.004031] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.254 qpair failed and we were unable to recover it. 00:21:55.254 [2024-04-16 12:49:54.004221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.254 [2024-04-16 12:49:54.004411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.254 [2024-04-16 12:49:54.004435] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.254 qpair failed and we were unable to recover it. 00:21:55.254 [2024-04-16 12:49:54.004611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.254 [2024-04-16 12:49:54.004788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.254 [2024-04-16 12:49:54.004831] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.254 qpair failed and we were unable to recover it. 00:21:55.254 [2024-04-16 12:49:54.004981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.254 [2024-04-16 12:49:54.005191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.254 [2024-04-16 12:49:54.005245] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.254 qpair failed and we were unable to recover it. 00:21:55.254 [2024-04-16 12:49:54.005400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.254 [2024-04-16 12:49:54.005519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.254 [2024-04-16 12:49:54.005545] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.254 qpair failed and we were unable to recover it. 00:21:55.254 [2024-04-16 12:49:54.005703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.254 [2024-04-16 12:49:54.005864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.254 [2024-04-16 12:49:54.005888] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.254 qpair failed and we were unable to recover it. 00:21:55.254 [2024-04-16 12:49:54.006086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.254 [2024-04-16 12:49:54.006302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.254 [2024-04-16 12:49:54.006358] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.254 qpair failed and we were unable to recover it. 
00:21:55.254 [2024-04-16 12:49:54.006486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.254 [2024-04-16 12:49:54.006636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.254 [2024-04-16 12:49:54.006662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.254 qpair failed and we were unable to recover it. 00:21:55.254 [2024-04-16 12:49:54.006809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.254 [2024-04-16 12:49:54.006983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.254 [2024-04-16 12:49:54.007026] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.254 qpair failed and we were unable to recover it. 00:21:55.254 [2024-04-16 12:49:54.007176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.254 [2024-04-16 12:49:54.007343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.254 [2024-04-16 12:49:54.007367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.254 qpair failed and we were unable to recover it. 00:21:55.254 [2024-04-16 12:49:54.007507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.254 [2024-04-16 12:49:54.007656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.254 [2024-04-16 12:49:54.007685] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.254 qpair failed and we were unable to recover it. 00:21:55.254 [2024-04-16 12:49:54.007838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.254 [2024-04-16 12:49:54.008062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.254 [2024-04-16 12:49:54.008125] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.254 qpair failed and we were unable to recover it. 00:21:55.254 [2024-04-16 12:49:54.008284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.254 [2024-04-16 12:49:54.008408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.254 [2024-04-16 12:49:54.008432] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.254 qpair failed and we were unable to recover it. 00:21:55.254 [2024-04-16 12:49:54.008617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.254 [2024-04-16 12:49:54.008760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.254 [2024-04-16 12:49:54.008804] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.254 qpair failed and we were unable to recover it. 
00:21:55.254 [2024-04-16 12:49:54.008952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.254 [2024-04-16 12:49:54.009134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.254 [2024-04-16 12:49:54.009159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.254 qpair failed and we were unable to recover it. 00:21:55.254 [2024-04-16 12:49:54.009349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.254 [2024-04-16 12:49:54.009515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.254 [2024-04-16 12:49:54.009539] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.254 qpair failed and we were unable to recover it. 00:21:55.254 [2024-04-16 12:49:54.009727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.254 [2024-04-16 12:49:54.009911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.254 [2024-04-16 12:49:54.009939] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.254 qpair failed and we were unable to recover it. 00:21:55.254 [2024-04-16 12:49:54.010111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.254 [2024-04-16 12:49:54.010283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.254 [2024-04-16 12:49:54.010325] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.254 qpair failed and we were unable to recover it. 00:21:55.254 [2024-04-16 12:49:54.010474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.254 [2024-04-16 12:49:54.010623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.254 [2024-04-16 12:49:54.010653] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.254 qpair failed and we were unable to recover it. 00:21:55.254 [2024-04-16 12:49:54.010820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.254 [2024-04-16 12:49:54.010986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.254 [2024-04-16 12:49:54.011029] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.254 qpair failed and we were unable to recover it. 00:21:55.254 [2024-04-16 12:49:54.011179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.254 [2024-04-16 12:49:54.011377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.254 [2024-04-16 12:49:54.011400] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.254 qpair failed and we were unable to recover it. 
00:21:55.254 [2024-04-16 12:49:54.011525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.254 [2024-04-16 12:49:54.011675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.254 [2024-04-16 12:49:54.011717] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.254 qpair failed and we were unable to recover it. 00:21:55.254 [2024-04-16 12:49:54.011885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.254 [2024-04-16 12:49:54.012037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.254 [2024-04-16 12:49:54.012060] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.254 qpair failed and we were unable to recover it. 00:21:55.254 [2024-04-16 12:49:54.012220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.254 [2024-04-16 12:49:54.012381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.254 [2024-04-16 12:49:54.012417] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.254 qpair failed and we were unable to recover it. 00:21:55.254 [2024-04-16 12:49:54.012574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.254 [2024-04-16 12:49:54.012718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.254 [2024-04-16 12:49:54.012761] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.254 qpair failed and we were unable to recover it. 00:21:55.254 [2024-04-16 12:49:54.012899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.254 [2024-04-16 12:49:54.013103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.254 [2024-04-16 12:49:54.013158] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.254 qpair failed and we were unable to recover it. 00:21:55.254 [2024-04-16 12:49:54.013316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.255 [2024-04-16 12:49:54.013426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.255 [2024-04-16 12:49:54.013449] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.255 qpair failed and we were unable to recover it. 00:21:55.255 [2024-04-16 12:49:54.013624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.255 [2024-04-16 12:49:54.013752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.255 [2024-04-16 12:49:54.013777] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.255 qpair failed and we were unable to recover it. 
00:21:55.255 [2024-04-16 12:49:54.013958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.255 [2024-04-16 12:49:54.014097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.255 [2024-04-16 12:49:54.014121] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.255 qpair failed and we were unable to recover it. 00:21:55.255 [2024-04-16 12:49:54.014299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.255 [2024-04-16 12:49:54.014465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.255 [2024-04-16 12:49:54.014488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.255 qpair failed and we were unable to recover it. 00:21:55.255 [2024-04-16 12:49:54.014621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.255 [2024-04-16 12:49:54.014778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.255 [2024-04-16 12:49:54.014820] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.255 qpair failed and we were unable to recover it. 00:21:55.255 [2024-04-16 12:49:54.014968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.255 [2024-04-16 12:49:54.015134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.255 [2024-04-16 12:49:54.015177] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.255 qpair failed and we were unable to recover it. 00:21:55.255 [2024-04-16 12:49:54.015335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.255 [2024-04-16 12:49:54.015483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.255 [2024-04-16 12:49:54.015506] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.255 qpair failed and we were unable to recover it. 00:21:55.255 [2024-04-16 12:49:54.015692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.255 [2024-04-16 12:49:54.015872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.255 [2024-04-16 12:49:54.015900] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.255 qpair failed and we were unable to recover it. 00:21:55.255 [2024-04-16 12:49:54.016100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.255 [2024-04-16 12:49:54.016300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.255 [2024-04-16 12:49:54.016323] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.255 qpair failed and we were unable to recover it. 
00:21:55.255 [2024-04-16 12:49:54.016532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.255 [2024-04-16 12:49:54.016703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.255 [2024-04-16 12:49:54.016755] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.255 qpair failed and we were unable to recover it. 00:21:55.255 [2024-04-16 12:49:54.016904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.255 [2024-04-16 12:49:54.017099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.255 [2024-04-16 12:49:54.017151] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.255 qpair failed and we were unable to recover it. 00:21:55.255 [2024-04-16 12:49:54.017385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.255 [2024-04-16 12:49:54.017529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.255 [2024-04-16 12:49:54.017552] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.255 qpair failed and we were unable to recover it. 00:21:55.255 [2024-04-16 12:49:54.017705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.255 [2024-04-16 12:49:54.017868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.255 [2024-04-16 12:49:54.017922] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.255 qpair failed and we were unable to recover it. 00:21:55.255 [2024-04-16 12:49:54.018077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.255 [2024-04-16 12:49:54.018329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.255 [2024-04-16 12:49:54.018378] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.255 qpair failed and we were unable to recover it. 00:21:55.255 [2024-04-16 12:49:54.018529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.255 [2024-04-16 12:49:54.018743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.255 [2024-04-16 12:49:54.018785] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.255 qpair failed and we were unable to recover it. 00:21:55.255 [2024-04-16 12:49:54.018929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.255 [2024-04-16 12:49:54.019118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.255 [2024-04-16 12:49:54.019174] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.255 qpair failed and we were unable to recover it. 
00:21:55.261 [2024-04-16 12:49:54.075200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.261 [2024-04-16 12:49:54.075357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.261 [2024-04-16 12:49:54.075394] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.261 qpair failed and we were unable to recover it. 00:21:55.261 [2024-04-16 12:49:54.075575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.261 [2024-04-16 12:49:54.075732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.261 [2024-04-16 12:49:54.075774] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.261 qpair failed and we were unable to recover it. 00:21:55.261 [2024-04-16 12:49:54.075933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.261 [2024-04-16 12:49:54.076130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.261 [2024-04-16 12:49:54.076170] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.261 qpair failed and we were unable to recover it. 00:21:55.261 [2024-04-16 12:49:54.076339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.261 [2024-04-16 12:49:54.076526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.261 [2024-04-16 12:49:54.076549] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.261 qpair failed and we were unable to recover it. 00:21:55.261 [2024-04-16 12:49:54.076755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.261 [2024-04-16 12:49:54.076897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.261 [2024-04-16 12:49:54.076940] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.261 qpair failed and we were unable to recover it. 00:21:55.261 [2024-04-16 12:49:54.077100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.261 [2024-04-16 12:49:54.077292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.261 [2024-04-16 12:49:54.077333] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.261 qpair failed and we were unable to recover it. 00:21:55.261 [2024-04-16 12:49:54.077479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.261 [2024-04-16 12:49:54.077628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.261 [2024-04-16 12:49:54.077657] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.261 qpair failed and we were unable to recover it. 
00:21:55.261 [2024-04-16 12:49:54.077808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.261 [2024-04-16 12:49:54.077970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.261 [2024-04-16 12:49:54.078006] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.261 qpair failed and we were unable to recover it. 00:21:55.261 [2024-04-16 12:49:54.078162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.261 [2024-04-16 12:49:54.078306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.261 [2024-04-16 12:49:54.078331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.261 qpair failed and we were unable to recover it. 00:21:55.261 [2024-04-16 12:49:54.078456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.261 [2024-04-16 12:49:54.078623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.261 [2024-04-16 12:49:54.078648] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.261 qpair failed and we were unable to recover it. 00:21:55.261 [2024-04-16 12:49:54.078823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.261 [2024-04-16 12:49:54.078959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.261 [2024-04-16 12:49:54.078983] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.261 qpair failed and we were unable to recover it. 00:21:55.261 [2024-04-16 12:49:54.079151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.261 [2024-04-16 12:49:54.079298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.261 [2024-04-16 12:49:54.079321] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.261 qpair failed and we were unable to recover it. 00:21:55.261 [2024-04-16 12:49:54.079501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.261 [2024-04-16 12:49:54.079614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.261 [2024-04-16 12:49:54.079643] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.261 qpair failed and we were unable to recover it. 00:21:55.261 [2024-04-16 12:49:54.079807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.261 [2024-04-16 12:49:54.079982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.261 [2024-04-16 12:49:54.080011] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.261 qpair failed and we were unable to recover it. 
00:21:55.261 [2024-04-16 12:49:54.080266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.261 [2024-04-16 12:49:54.080443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.261 [2024-04-16 12:49:54.080465] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.261 qpair failed and we were unable to recover it. 00:21:55.261 [2024-04-16 12:49:54.080633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.261 [2024-04-16 12:49:54.080794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.261 [2024-04-16 12:49:54.080836] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.261 qpair failed and we were unable to recover it. 00:21:55.261 [2024-04-16 12:49:54.080982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.261 [2024-04-16 12:49:54.081094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.261 [2024-04-16 12:49:54.081135] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.261 qpair failed and we were unable to recover it. 00:21:55.261 [2024-04-16 12:49:54.081292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.261 [2024-04-16 12:49:54.081490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.261 [2024-04-16 12:49:54.081514] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.261 qpair failed and we were unable to recover it. 00:21:55.261 [2024-04-16 12:49:54.081711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.261 [2024-04-16 12:49:54.081875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.261 [2024-04-16 12:49:54.081903] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.261 qpair failed and we were unable to recover it. 00:21:55.261 [2024-04-16 12:49:54.082054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.261 [2024-04-16 12:49:54.082229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.261 [2024-04-16 12:49:54.082257] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.261 qpair failed and we were unable to recover it. 00:21:55.261 [2024-04-16 12:49:54.082408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.261 [2024-04-16 12:49:54.082554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.261 [2024-04-16 12:49:54.082604] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.261 qpair failed and we were unable to recover it. 
00:21:55.261 [2024-04-16 12:49:54.082735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.261 [2024-04-16 12:49:54.082932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.261 [2024-04-16 12:49:54.082974] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.261 qpair failed and we were unable to recover it. 00:21:55.261 [2024-04-16 12:49:54.083114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.261 [2024-04-16 12:49:54.083286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.262 [2024-04-16 12:49:54.083332] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.262 qpair failed and we were unable to recover it. 00:21:55.262 [2024-04-16 12:49:54.083484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.262 [2024-04-16 12:49:54.083614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.262 [2024-04-16 12:49:54.083644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.262 qpair failed and we were unable to recover it. 00:21:55.262 [2024-04-16 12:49:54.083836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.262 [2024-04-16 12:49:54.084025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.262 [2024-04-16 12:49:54.084079] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.262 qpair failed and we were unable to recover it. 00:21:55.262 [2024-04-16 12:49:54.084211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.262 [2024-04-16 12:49:54.084470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.262 [2024-04-16 12:49:54.084507] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.262 qpair failed and we were unable to recover it. 00:21:55.262 [2024-04-16 12:49:54.084674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.262 [2024-04-16 12:49:54.084844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.262 [2024-04-16 12:49:54.084873] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.262 qpair failed and we were unable to recover it. 00:21:55.262 [2024-04-16 12:49:54.085060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.262 [2024-04-16 12:49:54.085250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.262 [2024-04-16 12:49:54.085304] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.262 qpair failed and we were unable to recover it. 
00:21:55.262 [2024-04-16 12:49:54.085470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.262 [2024-04-16 12:49:54.085620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.262 [2024-04-16 12:49:54.085649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.262 qpair failed and we were unable to recover it. 00:21:55.262 [2024-04-16 12:49:54.085813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.262 [2024-04-16 12:49:54.085954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.262 [2024-04-16 12:49:54.085979] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.262 qpair failed and we were unable to recover it. 00:21:55.262 [2024-04-16 12:49:54.086174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.262 [2024-04-16 12:49:54.086376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.262 [2024-04-16 12:49:54.086399] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.262 qpair failed and we were unable to recover it. 00:21:55.262 [2024-04-16 12:49:54.086552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.262 [2024-04-16 12:49:54.086732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.262 [2024-04-16 12:49:54.086774] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.262 qpair failed and we were unable to recover it. 00:21:55.262 [2024-04-16 12:49:54.086928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.262 [2024-04-16 12:49:54.087129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.262 [2024-04-16 12:49:54.087181] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.262 qpair failed and we were unable to recover it. 00:21:55.262 [2024-04-16 12:49:54.087340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.262 [2024-04-16 12:49:54.087497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.262 [2024-04-16 12:49:54.087521] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.262 qpair failed and we were unable to recover it. 00:21:55.262 [2024-04-16 12:49:54.087696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.262 [2024-04-16 12:49:54.087864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.262 [2024-04-16 12:49:54.087892] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.262 qpair failed and we were unable to recover it. 
00:21:55.262 [2024-04-16 12:49:54.088064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.262 [2024-04-16 12:49:54.088243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.262 [2024-04-16 12:49:54.088272] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.262 qpair failed and we were unable to recover it. 00:21:55.262 [2024-04-16 12:49:54.088431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.262 [2024-04-16 12:49:54.088570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.262 [2024-04-16 12:49:54.088612] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.262 qpair failed and we were unable to recover it. 00:21:55.262 [2024-04-16 12:49:54.088751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.262 [2024-04-16 12:49:54.088928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.262 [2024-04-16 12:49:54.088956] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.262 qpair failed and we were unable to recover it. 00:21:55.262 [2024-04-16 12:49:54.089122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.262 [2024-04-16 12:49:54.089310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.262 [2024-04-16 12:49:54.089347] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.262 qpair failed and we were unable to recover it. 00:21:55.262 [2024-04-16 12:49:54.089500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.262 [2024-04-16 12:49:54.089671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.262 [2024-04-16 12:49:54.089714] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.262 qpair failed and we were unable to recover it. 00:21:55.262 [2024-04-16 12:49:54.089869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.262 [2024-04-16 12:49:54.090044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.262 [2024-04-16 12:49:54.090106] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.262 qpair failed and we were unable to recover it. 00:21:55.262 [2024-04-16 12:49:54.090237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.262 [2024-04-16 12:49:54.090407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.262 [2024-04-16 12:49:54.090444] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.262 qpair failed and we were unable to recover it. 
00:21:55.262 [2024-04-16 12:49:54.090605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.262 [2024-04-16 12:49:54.090749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.262 [2024-04-16 12:49:54.090791] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.262 qpair failed and we were unable to recover it. 00:21:55.262 [2024-04-16 12:49:54.090965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.262 [2024-04-16 12:49:54.091173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.262 [2024-04-16 12:49:54.091225] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.262 qpair failed and we were unable to recover it. 00:21:55.262 [2024-04-16 12:49:54.091360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.262 [2024-04-16 12:49:54.091542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.262 [2024-04-16 12:49:54.091571] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.262 qpair failed and we were unable to recover it. 00:21:55.262 [2024-04-16 12:49:54.091760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.262 [2024-04-16 12:49:54.091953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.262 [2024-04-16 12:49:54.092011] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.262 qpair failed and we were unable to recover it. 00:21:55.262 [2024-04-16 12:49:54.092182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.262 [2024-04-16 12:49:54.092355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.262 [2024-04-16 12:49:54.092378] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.262 qpair failed and we were unable to recover it. 00:21:55.262 [2024-04-16 12:49:54.092532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.262 [2024-04-16 12:49:54.092695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.262 [2024-04-16 12:49:54.092723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.262 qpair failed and we were unable to recover it. 00:21:55.262 [2024-04-16 12:49:54.092898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.262 [2024-04-16 12:49:54.093082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.262 [2024-04-16 12:49:54.093142] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.262 qpair failed and we were unable to recover it. 
00:21:55.262 [2024-04-16 12:49:54.093275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.263 [2024-04-16 12:49:54.093407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.263 [2024-04-16 12:49:54.093430] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.263 qpair failed and we were unable to recover it. 00:21:55.263 [2024-04-16 12:49:54.093575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.263 [2024-04-16 12:49:54.093720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.263 [2024-04-16 12:49:54.093743] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.263 qpair failed and we were unable to recover it. 00:21:55.263 [2024-04-16 12:49:54.093907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.263 [2024-04-16 12:49:54.094026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.263 [2024-04-16 12:49:54.094049] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.263 qpair failed and we were unable to recover it. 00:21:55.263 [2024-04-16 12:49:54.094190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.263 [2024-04-16 12:49:54.094337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.263 [2024-04-16 12:49:54.094360] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.263 qpair failed and we were unable to recover it. 00:21:55.263 [2024-04-16 12:49:54.094524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.263 [2024-04-16 12:49:54.094725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.263 [2024-04-16 12:49:54.094749] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.263 qpair failed and we were unable to recover it. 00:21:55.263 [2024-04-16 12:49:54.094926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.263 [2024-04-16 12:49:54.095117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.263 [2024-04-16 12:49:54.095158] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.263 qpair failed and we were unable to recover it. 00:21:55.263 [2024-04-16 12:49:54.095315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.263 [2024-04-16 12:49:54.095492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.263 [2024-04-16 12:49:54.095531] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.263 qpair failed and we were unable to recover it. 
00:21:55.263 [2024-04-16 12:49:54.095710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.263 [2024-04-16 12:49:54.095875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.263 [2024-04-16 12:49:54.095915] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.263 qpair failed and we were unable to recover it. 00:21:55.263 [2024-04-16 12:49:54.096052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.263 [2024-04-16 12:49:54.096195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.263 [2024-04-16 12:49:54.096237] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.263 qpair failed and we were unable to recover it. 00:21:55.263 [2024-04-16 12:49:54.096405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.263 [2024-04-16 12:49:54.096575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.263 [2024-04-16 12:49:54.096599] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.263 qpair failed and we were unable to recover it. 00:21:55.263 [2024-04-16 12:49:54.096742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.263 [2024-04-16 12:49:54.096890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.263 [2024-04-16 12:49:54.096932] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.263 qpair failed and we were unable to recover it. 00:21:55.263 [2024-04-16 12:49:54.097062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.263 [2024-04-16 12:49:54.097239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.263 [2024-04-16 12:49:54.097262] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.263 qpair failed and we were unable to recover it. 00:21:55.263 [2024-04-16 12:49:54.097392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.263 [2024-04-16 12:49:54.097518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.263 [2024-04-16 12:49:54.097541] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.263 qpair failed and we were unable to recover it. 00:21:55.263 [2024-04-16 12:49:54.097712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.263 [2024-04-16 12:49:54.097884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.263 [2024-04-16 12:49:54.097922] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.263 qpair failed and we were unable to recover it. 
00:21:55.263 [2024-04-16 12:49:54.098075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.263 [2024-04-16 12:49:54.098216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.263 [2024-04-16 12:49:54.098244] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.263 qpair failed and we were unable to recover it. 00:21:55.263 [2024-04-16 12:49:54.098412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.263 [2024-04-16 12:49:54.098616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.263 [2024-04-16 12:49:54.098640] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.263 qpair failed and we were unable to recover it. 00:21:55.263 [2024-04-16 12:49:54.098835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.263 [2024-04-16 12:49:54.098990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.263 [2024-04-16 12:49:54.099033] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.263 qpair failed and we were unable to recover it. 00:21:55.263 [2024-04-16 12:49:54.099204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.263 [2024-04-16 12:49:54.099387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.263 [2024-04-16 12:49:54.099409] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.263 qpair failed and we were unable to recover it. 00:21:55.263 [2024-04-16 12:49:54.099609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.263 [2024-04-16 12:49:54.099793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.263 [2024-04-16 12:49:54.099833] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.263 qpair failed and we were unable to recover it. 00:21:55.263 [2024-04-16 12:49:54.099993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.263 [2024-04-16 12:49:54.100205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.263 [2024-04-16 12:49:54.100250] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.263 qpair failed and we were unable to recover it. 00:21:55.263 [2024-04-16 12:49:54.100411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.263 [2024-04-16 12:49:54.100532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.263 [2024-04-16 12:49:54.100555] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.263 qpair failed and we were unable to recover it. 
00:21:55.263 [2024-04-16 12:49:54.100742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.263 [2024-04-16 12:49:54.100931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.263 [2024-04-16 12:49:54.100994] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.263 qpair failed and we were unable to recover it. 00:21:55.263 [2024-04-16 12:49:54.101170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.263 [2024-04-16 12:49:54.101358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.264 [2024-04-16 12:49:54.101381] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.264 qpair failed and we were unable to recover it. 00:21:55.264 [2024-04-16 12:49:54.101521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.264 [2024-04-16 12:49:54.101665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.264 [2024-04-16 12:49:54.101707] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.264 qpair failed and we were unable to recover it. 00:21:55.264 [2024-04-16 12:49:54.101893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.264 [2024-04-16 12:49:54.102091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.264 [2024-04-16 12:49:54.102150] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.264 qpair failed and we were unable to recover it. 00:21:55.264 [2024-04-16 12:49:54.102320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.264 [2024-04-16 12:49:54.102466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.264 [2024-04-16 12:49:54.102489] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.264 qpair failed and we were unable to recover it. 00:21:55.264 [2024-04-16 12:49:54.102669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.264 [2024-04-16 12:49:54.102846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.264 [2024-04-16 12:49:54.102886] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.264 qpair failed and we were unable to recover it. 00:21:55.264 [2024-04-16 12:49:54.103070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.264 [2024-04-16 12:49:54.103271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.264 [2024-04-16 12:49:54.103324] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.264 qpair failed and we were unable to recover it. 
00:21:55.264 [2024-04-16 12:49:54.103462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.264 [2024-04-16 12:49:54.103629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.264 [2024-04-16 12:49:54.103653] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.264 qpair failed and we were unable to recover it. 00:21:55.264 [2024-04-16 12:49:54.103822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.264 [2024-04-16 12:49:54.103984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.264 [2024-04-16 12:49:54.104008] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.264 qpair failed and we were unable to recover it. 00:21:55.264 [2024-04-16 12:49:54.104178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.264 [2024-04-16 12:49:54.104300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.264 [2024-04-16 12:49:54.104323] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.264 qpair failed and we were unable to recover it. 00:21:55.264 [2024-04-16 12:49:54.104476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.264 [2024-04-16 12:49:54.104613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.264 [2024-04-16 12:49:54.104637] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.264 qpair failed and we were unable to recover it. 00:21:55.264 [2024-04-16 12:49:54.104804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.264 [2024-04-16 12:49:54.104967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.264 [2024-04-16 12:49:54.105007] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.264 qpair failed and we were unable to recover it. 00:21:55.264 [2024-04-16 12:49:54.105190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.264 [2024-04-16 12:49:54.105334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.264 [2024-04-16 12:49:54.105371] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.264 qpair failed and we were unable to recover it. 00:21:55.264 [2024-04-16 12:49:54.105514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.264 [2024-04-16 12:49:54.105686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.264 [2024-04-16 12:49:54.105728] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.264 qpair failed and we were unable to recover it. 
00:21:55.264 [2024-04-16 12:49:54.105900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.264 [2024-04-16 12:49:54.106090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.264 [2024-04-16 12:49:54.106162] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.264 qpair failed and we were unable to recover it. 00:21:55.264 [2024-04-16 12:49:54.106305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.264 [2024-04-16 12:49:54.106447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.264 [2024-04-16 12:49:54.106471] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.264 qpair failed and we were unable to recover it. 00:21:55.264 [2024-04-16 12:49:54.106653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.264 [2024-04-16 12:49:54.106796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.264 [2024-04-16 12:49:54.106824] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.264 qpair failed and we were unable to recover it. 00:21:55.264 [2024-04-16 12:49:54.107014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.264 [2024-04-16 12:49:54.107185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.264 [2024-04-16 12:49:54.107257] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.264 qpair failed and we were unable to recover it. 00:21:55.264 [2024-04-16 12:49:54.107409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.264 [2024-04-16 12:49:54.107550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.264 [2024-04-16 12:49:54.107577] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.264 qpair failed and we were unable to recover it. 00:21:55.264 [2024-04-16 12:49:54.107707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.264 [2024-04-16 12:49:54.107881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.264 [2024-04-16 12:49:54.107906] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.264 qpair failed and we were unable to recover it. 00:21:55.264 [2024-04-16 12:49:54.108099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.264 [2024-04-16 12:49:54.108228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.264 [2024-04-16 12:49:54.108269] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.264 qpair failed and we were unable to recover it. 
00:21:55.264 [2024-04-16 12:49:54.108411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.264 [2024-04-16 12:49:54.108600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.264 [2024-04-16 12:49:54.108642] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.264 qpair failed and we were unable to recover it. 00:21:55.264 [2024-04-16 12:49:54.108792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.264 [2024-04-16 12:49:54.108969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.264 [2024-04-16 12:49:54.109032] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.264 qpair failed and we were unable to recover it. 00:21:55.264 [2024-04-16 12:49:54.109192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.264 [2024-04-16 12:49:54.109389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.264 [2024-04-16 12:49:54.109428] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.264 qpair failed and we were unable to recover it. 00:21:55.264 [2024-04-16 12:49:54.109569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.264 [2024-04-16 12:49:54.109712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.264 [2024-04-16 12:49:54.109758] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.264 qpair failed and we were unable to recover it. 00:21:55.264 [2024-04-16 12:49:54.110026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.264 [2024-04-16 12:49:54.110675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.264 [2024-04-16 12:49:54.110702] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.264 qpair failed and we were unable to recover it. 00:21:55.264 [2024-04-16 12:49:54.110878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.264 [2024-04-16 12:49:54.111065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.264 [2024-04-16 12:49:54.111109] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.264 qpair failed and we were unable to recover it. 00:21:55.264 [2024-04-16 12:49:54.111271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.264 [2024-04-16 12:49:54.111407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.264 [2024-04-16 12:49:54.111431] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.264 qpair failed and we were unable to recover it. 
00:21:55.264 [2024-04-16 12:49:54.111621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.264 [2024-04-16 12:49:54.111764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.264 [2024-04-16 12:49:54.111792] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.264 qpair failed and we were unable to recover it. 00:21:55.264 [2024-04-16 12:49:54.111945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.264 [2024-04-16 12:49:54.112114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.265 [2024-04-16 12:49:54.112155] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.265 qpair failed and we were unable to recover it. 00:21:55.265 [2024-04-16 12:49:54.112287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.265 [2024-04-16 12:49:54.112472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.265 [2024-04-16 12:49:54.112496] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.265 qpair failed and we were unable to recover it. 00:21:55.265 [2024-04-16 12:49:54.112759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.265 [2024-04-16 12:49:54.112951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.265 [2024-04-16 12:49:54.113013] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.265 qpair failed and we were unable to recover it. 00:21:55.265 [2024-04-16 12:49:54.113153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.265 [2024-04-16 12:49:54.113316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.265 [2024-04-16 12:49:54.113356] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.265 qpair failed and we were unable to recover it. 00:21:55.265 [2024-04-16 12:49:54.113516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.265 [2024-04-16 12:49:54.113660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.265 [2024-04-16 12:49:54.113689] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.265 qpair failed and we were unable to recover it. 00:21:55.265 [2024-04-16 12:49:54.113833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.265 [2024-04-16 12:49:54.114014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.265 [2024-04-16 12:49:54.114061] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.265 qpair failed and we were unable to recover it. 
00:21:55.265 [2024-04-16 12:49:54.114191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:55.265 [2024-04-16 12:49:54.114312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:55.265 [2024-04-16 12:49:54.114337] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420
00:21:55.265 qpair failed and we were unable to recover it.
[the same three-message pattern repeats continuously from 12:49:54.114191 through 12:49:54.171516: two posix_sock_create connect() failures with errno = 111, one nvme_tcp_qpair_connect_sock error for tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it."]
00:21:55.270 [2024-04-16 12:49:54.171730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.270 [2024-04-16 12:49:54.171893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.270 [2024-04-16 12:49:54.171935] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.270 qpair failed and we were unable to recover it. 00:21:55.270 [2024-04-16 12:49:54.172116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.270 [2024-04-16 12:49:54.172327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.270 [2024-04-16 12:49:54.172382] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.270 qpair failed and we were unable to recover it. 00:21:55.270 [2024-04-16 12:49:54.172549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.270 [2024-04-16 12:49:54.172776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.270 [2024-04-16 12:49:54.172818] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.270 qpair failed and we were unable to recover it. 00:21:55.270 [2024-04-16 12:49:54.172986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.270 [2024-04-16 12:49:54.173259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.270 [2024-04-16 12:49:54.173310] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.270 qpair failed and we were unable to recover it. 00:21:55.271 [2024-04-16 12:49:54.173484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.271 [2024-04-16 12:49:54.173659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.271 [2024-04-16 12:49:54.173683] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.271 qpair failed and we were unable to recover it. 00:21:55.271 [2024-04-16 12:49:54.173800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.271 [2024-04-16 12:49:54.173970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.271 [2024-04-16 12:49:54.174010] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.271 qpair failed and we were unable to recover it. 00:21:55.271 [2024-04-16 12:49:54.174217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.271 [2024-04-16 12:49:54.174348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.271 [2024-04-16 12:49:54.174387] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.271 qpair failed and we were unable to recover it. 
00:21:55.271 [2024-04-16 12:49:54.174613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.271 [2024-04-16 12:49:54.174770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.271 [2024-04-16 12:49:54.174811] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.271 qpair failed and we were unable to recover it. 00:21:55.271 [2024-04-16 12:49:54.174999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.271 [2024-04-16 12:49:54.175178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.271 [2024-04-16 12:49:54.175240] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.271 qpair failed and we were unable to recover it. 00:21:55.271 [2024-04-16 12:49:54.175436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.271 [2024-04-16 12:49:54.175634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.271 [2024-04-16 12:49:54.175676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.271 qpair failed and we were unable to recover it. 00:21:55.271 [2024-04-16 12:49:54.175834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.271 [2024-04-16 12:49:54.175968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.271 [2024-04-16 12:49:54.176009] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.271 qpair failed and we were unable to recover it. 00:21:55.271 [2024-04-16 12:49:54.176152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.271 [2024-04-16 12:49:54.176291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.271 [2024-04-16 12:49:54.176314] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.271 qpair failed and we were unable to recover it. 00:21:55.271 [2024-04-16 12:49:54.176485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.271 [2024-04-16 12:49:54.176649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.271 [2024-04-16 12:49:54.176673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.271 qpair failed and we were unable to recover it. 00:21:55.271 [2024-04-16 12:49:54.176831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.271 [2024-04-16 12:49:54.177015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.271 [2024-04-16 12:49:54.177043] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.271 qpair failed and we were unable to recover it. 
00:21:55.271 [2024-04-16 12:49:54.177226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.271 [2024-04-16 12:49:54.177434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.271 [2024-04-16 12:49:54.177471] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.271 qpair failed and we were unable to recover it. 00:21:55.271 [2024-04-16 12:49:54.177692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.271 [2024-04-16 12:49:54.177899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.271 [2024-04-16 12:49:54.177972] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.271 qpair failed and we were unable to recover it. 00:21:55.271 [2024-04-16 12:49:54.178111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.271 [2024-04-16 12:49:54.178282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.271 [2024-04-16 12:49:54.178322] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.271 qpair failed and we were unable to recover it. 00:21:55.271 [2024-04-16 12:49:54.178552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.271 [2024-04-16 12:49:54.178701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.271 [2024-04-16 12:49:54.178743] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.271 qpair failed and we were unable to recover it. 00:21:55.271 [2024-04-16 12:49:54.178920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.271 [2024-04-16 12:49:54.179126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.271 [2024-04-16 12:49:54.179181] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.271 qpair failed and we were unable to recover it. 00:21:55.271 [2024-04-16 12:49:54.179307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.271 [2024-04-16 12:49:54.179441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.271 [2024-04-16 12:49:54.179464] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.271 qpair failed and we were unable to recover it. 00:21:55.271 [2024-04-16 12:49:54.179642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.271 [2024-04-16 12:49:54.179819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.271 [2024-04-16 12:49:54.179856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.271 qpair failed and we were unable to recover it. 
00:21:55.271 [2024-04-16 12:49:54.180055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.271 [2024-04-16 12:49:54.180207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.271 [2024-04-16 12:49:54.180250] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.271 qpair failed and we were unable to recover it. 00:21:55.271 [2024-04-16 12:49:54.180414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.271 [2024-04-16 12:49:54.180555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.271 [2024-04-16 12:49:54.180597] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.271 qpair failed and we were unable to recover it. 00:21:55.271 [2024-04-16 12:49:54.180742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.271 [2024-04-16 12:49:54.180889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.271 [2024-04-16 12:49:54.180918] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.271 qpair failed and we were unable to recover it. 00:21:55.271 [2024-04-16 12:49:54.181099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.271 [2024-04-16 12:49:54.181254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.271 [2024-04-16 12:49:54.181297] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.271 qpair failed and we were unable to recover it. 00:21:55.271 [2024-04-16 12:49:54.181421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.271 [2024-04-16 12:49:54.181619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.271 [2024-04-16 12:49:54.181643] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.271 qpair failed and we were unable to recover it. 00:21:55.271 [2024-04-16 12:49:54.181818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.271 [2024-04-16 12:49:54.181994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.271 [2024-04-16 12:49:54.182023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.271 qpair failed and we were unable to recover it. 00:21:55.271 [2024-04-16 12:49:54.182261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.271 [2024-04-16 12:49:54.182428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.271 [2024-04-16 12:49:54.182451] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.271 qpair failed and we were unable to recover it. 
00:21:55.271 [2024-04-16 12:49:54.182642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.271 [2024-04-16 12:49:54.182779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.271 [2024-04-16 12:49:54.182821] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.271 qpair failed and we were unable to recover it. 00:21:55.271 [2024-04-16 12:49:54.183040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.271 [2024-04-16 12:49:54.183243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.271 [2024-04-16 12:49:54.183291] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.271 qpair failed and we were unable to recover it. 00:21:55.271 [2024-04-16 12:49:54.183490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.271 [2024-04-16 12:49:54.183615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.271 [2024-04-16 12:49:54.183644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.271 qpair failed and we were unable to recover it. 00:21:55.271 [2024-04-16 12:49:54.183810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.271 [2024-04-16 12:49:54.183982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.271 [2024-04-16 12:49:54.184025] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.271 qpair failed and we were unable to recover it. 00:21:55.271 [2024-04-16 12:49:54.184250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.272 [2024-04-16 12:49:54.184419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.272 [2024-04-16 12:49:54.184442] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.272 qpair failed and we were unable to recover it. 00:21:55.272 [2024-04-16 12:49:54.184617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.272 [2024-04-16 12:49:54.184823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.272 [2024-04-16 12:49:54.184877] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.272 qpair failed and we were unable to recover it. 00:21:55.272 [2024-04-16 12:49:54.185080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.272 [2024-04-16 12:49:54.185256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.272 [2024-04-16 12:49:54.185331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.272 qpair failed and we were unable to recover it. 
00:21:55.272 [2024-04-16 12:49:54.185453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.272 [2024-04-16 12:49:54.185614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.272 [2024-04-16 12:49:54.185643] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.272 qpair failed and we were unable to recover it. 00:21:55.272 [2024-04-16 12:49:54.185890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.272 [2024-04-16 12:49:54.186095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.272 [2024-04-16 12:49:54.186149] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.272 qpair failed and we were unable to recover it. 00:21:55.272 [2024-04-16 12:49:54.186373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.272 [2024-04-16 12:49:54.186516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.272 [2024-04-16 12:49:54.186538] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.272 qpair failed and we were unable to recover it. 00:21:55.272 [2024-04-16 12:49:54.186734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.272 [2024-04-16 12:49:54.186909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.272 [2024-04-16 12:49:54.186959] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.272 qpair failed and we were unable to recover it. 00:21:55.272 [2024-04-16 12:49:54.187107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.272 [2024-04-16 12:49:54.187245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.272 [2024-04-16 12:49:54.187287] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.272 qpair failed and we were unable to recover it. 00:21:55.272 [2024-04-16 12:49:54.187456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.272 [2024-04-16 12:49:54.187580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.272 [2024-04-16 12:49:54.187603] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.272 qpair failed and we were unable to recover it. 00:21:55.272 [2024-04-16 12:49:54.187730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.272 [2024-04-16 12:49:54.188004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.272 [2024-04-16 12:49:54.188052] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.272 qpair failed and we were unable to recover it. 
00:21:55.272 [2024-04-16 12:49:54.188302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.272 [2024-04-16 12:49:54.188450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.272 [2024-04-16 12:49:54.188478] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.272 qpair failed and we were unable to recover it. 00:21:55.272 [2024-04-16 12:49:54.188719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.272 [2024-04-16 12:49:54.188910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.272 [2024-04-16 12:49:54.188962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.272 qpair failed and we were unable to recover it. 00:21:55.272 [2024-04-16 12:49:54.189106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.272 [2024-04-16 12:49:54.189342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.272 [2024-04-16 12:49:54.189364] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.272 qpair failed and we were unable to recover it. 00:21:55.272 [2024-04-16 12:49:54.189508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.272 [2024-04-16 12:49:54.189730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.272 [2024-04-16 12:49:54.189773] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.272 qpair failed and we were unable to recover it. 00:21:55.272 [2024-04-16 12:49:54.189934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.272 [2024-04-16 12:49:54.190083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.272 [2024-04-16 12:49:54.190126] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.272 qpair failed and we were unable to recover it. 00:21:55.272 [2024-04-16 12:49:54.190363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.272 [2024-04-16 12:49:54.190612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.272 [2024-04-16 12:49:54.190635] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.272 qpair failed and we were unable to recover it. 00:21:55.272 [2024-04-16 12:49:54.190811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.272 [2024-04-16 12:49:54.191016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.272 [2024-04-16 12:49:54.191068] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.272 qpair failed and we were unable to recover it. 
00:21:55.272 [2024-04-16 12:49:54.191254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.272 [2024-04-16 12:49:54.191434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.272 [2024-04-16 12:49:54.191460] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.272 qpair failed and we were unable to recover it. 00:21:55.272 [2024-04-16 12:49:54.191664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.272 [2024-04-16 12:49:54.191809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.272 [2024-04-16 12:49:54.191852] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.272 qpair failed and we were unable to recover it. 00:21:55.272 [2024-04-16 12:49:54.192029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.272 [2024-04-16 12:49:54.192276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.272 [2024-04-16 12:49:54.192323] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.272 qpair failed and we were unable to recover it. 00:21:55.272 [2024-04-16 12:49:54.192653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.272 [2024-04-16 12:49:54.192861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.272 [2024-04-16 12:49:54.192933] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.272 qpair failed and we were unable to recover it. 00:21:55.272 [2024-04-16 12:49:54.193154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.272 [2024-04-16 12:49:54.193291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.272 [2024-04-16 12:49:54.193333] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.272 qpair failed and we were unable to recover it. 00:21:55.272 [2024-04-16 12:49:54.193492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.272 [2024-04-16 12:49:54.193686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.272 [2024-04-16 12:49:54.193715] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.272 qpair failed and we were unable to recover it. 00:21:55.272 [2024-04-16 12:49:54.194006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.272 [2024-04-16 12:49:54.194226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.272 [2024-04-16 12:49:54.194277] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.272 qpair failed and we were unable to recover it. 
00:21:55.273 [2024-04-16 12:49:54.194431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.273 [2024-04-16 12:49:54.194625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.273 [2024-04-16 12:49:54.194649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.273 qpair failed and we were unable to recover it. 00:21:55.273 [2024-04-16 12:49:54.194835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.273 [2024-04-16 12:49:54.195025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.273 [2024-04-16 12:49:54.195079] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.273 qpair failed and we were unable to recover it. 00:21:55.273 [2024-04-16 12:49:54.195270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.273 [2024-04-16 12:49:54.195448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.273 [2024-04-16 12:49:54.195470] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.273 qpair failed and we were unable to recover it. 00:21:55.273 [2024-04-16 12:49:54.195667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.273 [2024-04-16 12:49:54.195903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.273 [2024-04-16 12:49:54.195957] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.273 qpair failed and we were unable to recover it. 00:21:55.273 [2024-04-16 12:49:54.196334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.273 [2024-04-16 12:49:54.196561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.273 [2024-04-16 12:49:54.196594] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.273 qpair failed and we were unable to recover it. 00:21:55.273 [2024-04-16 12:49:54.196758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.273 [2024-04-16 12:49:54.196978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.273 [2024-04-16 12:49:54.197026] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.273 qpair failed and we were unable to recover it. 00:21:55.273 [2024-04-16 12:49:54.197232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.273 [2024-04-16 12:49:54.197416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.273 [2024-04-16 12:49:54.197439] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.273 qpair failed and we were unable to recover it. 
00:21:55.273 [2024-04-16 12:49:54.197631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.273 [2024-04-16 12:49:54.197786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.273 [2024-04-16 12:49:54.197828] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.273 qpair failed and we were unable to recover it. 00:21:55.273 [2024-04-16 12:49:54.197970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.273 [2024-04-16 12:49:54.198140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.273 [2024-04-16 12:49:54.198183] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.273 qpair failed and we were unable to recover it. 00:21:55.273 [2024-04-16 12:49:54.198430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.273 [2024-04-16 12:49:54.198617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.273 [2024-04-16 12:49:54.198641] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.273 qpair failed and we were unable to recover it. 00:21:55.273 [2024-04-16 12:49:54.198835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.273 [2024-04-16 12:49:54.199027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.273 [2024-04-16 12:49:54.199083] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.273 qpair failed and we were unable to recover it. 00:21:55.273 [2024-04-16 12:49:54.199347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.273 [2024-04-16 12:49:54.199557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.273 [2024-04-16 12:49:54.199586] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.273 qpair failed and we were unable to recover it. 00:21:55.273 [2024-04-16 12:49:54.199724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.273 [2024-04-16 12:49:54.199931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.273 [2024-04-16 12:49:54.199979] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.273 qpair failed and we were unable to recover it. 00:21:55.273 [2024-04-16 12:49:54.200174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.273 [2024-04-16 12:49:54.200374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.273 [2024-04-16 12:49:54.200426] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.273 qpair failed and we were unable to recover it. 
00:21:55.273 [2024-04-16 12:49:54.200631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.273 [2024-04-16 12:49:54.200820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.273 [2024-04-16 12:49:54.200873] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.273 qpair failed and we were unable to recover it. 00:21:55.273 [2024-04-16 12:49:54.201112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.273 [2024-04-16 12:49:54.201291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.273 [2024-04-16 12:49:54.201353] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.273 qpair failed and we were unable to recover it. 00:21:55.273 [2024-04-16 12:49:54.201654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.273 [2024-04-16 12:49:54.201871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.273 [2024-04-16 12:49:54.201936] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.273 qpair failed and we were unable to recover it. 00:21:55.273 [2024-04-16 12:49:54.202115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.273 [2024-04-16 12:49:54.202276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.273 [2024-04-16 12:49:54.202360] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.273 qpair failed and we were unable to recover it. 00:21:55.273 [2024-04-16 12:49:54.202510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.273 [2024-04-16 12:49:54.202745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.273 [2024-04-16 12:49:54.202788] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.273 qpair failed and we were unable to recover it. 00:21:55.273 [2024-04-16 12:49:54.202977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.273 [2024-04-16 12:49:54.203169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.273 [2024-04-16 12:49:54.203227] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.273 qpair failed and we were unable to recover it. 00:21:55.273 [2024-04-16 12:49:54.203400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.273 [2024-04-16 12:49:54.203607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.273 [2024-04-16 12:49:54.203630] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.273 qpair failed and we were unable to recover it. 
00:21:55.273 [2024-04-16 12:49:54.203791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.273 [2024-04-16 12:49:54.203949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.273 [2024-04-16 12:49:54.203989] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.273 qpair failed and we were unable to recover it. 00:21:55.273 [2024-04-16 12:49:54.204204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.273 [2024-04-16 12:49:54.204413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.273 [2024-04-16 12:49:54.204435] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.273 qpair failed and we were unable to recover it. 00:21:55.273 [2024-04-16 12:49:54.204558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.273 [2024-04-16 12:49:54.204802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.273 [2024-04-16 12:49:54.204844] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.273 qpair failed and we were unable to recover it. 00:21:55.273 [2024-04-16 12:49:54.205021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.273 [2024-04-16 12:49:54.205286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.273 [2024-04-16 12:49:54.205337] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.273 qpair failed and we were unable to recover it. 00:21:55.273 [2024-04-16 12:49:54.205540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.273 [2024-04-16 12:49:54.205733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.273 [2024-04-16 12:49:54.205756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.273 qpair failed and we were unable to recover it. 00:21:55.273 [2024-04-16 12:49:54.205898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.273 [2024-04-16 12:49:54.206034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.273 [2024-04-16 12:49:54.206076] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.273 qpair failed and we were unable to recover it. 00:21:55.273 [2024-04-16 12:49:54.206283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.273 [2024-04-16 12:49:54.206454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.274 [2024-04-16 12:49:54.206475] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.274 qpair failed and we were unable to recover it. 
00:21:55.274 [2024-04-16 12:49:54.206692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.274 [2024-04-16 12:49:54.206883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.274 [2024-04-16 12:49:54.206939] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.274 qpair failed and we were unable to recover it. 00:21:55.274 [2024-04-16 12:49:54.207128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.274 [2024-04-16 12:49:54.207261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.274 [2024-04-16 12:49:54.207289] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.274 qpair failed and we were unable to recover it. 00:21:55.274 [2024-04-16 12:49:54.207521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.274 [2024-04-16 12:49:54.207682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.274 [2024-04-16 12:49:54.207725] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.274 qpair failed and we were unable to recover it. 00:21:55.274 [2024-04-16 12:49:54.207888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.274 [2024-04-16 12:49:54.208135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.274 [2024-04-16 12:49:54.208184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.274 qpair failed and we were unable to recover it. 00:21:55.274 [2024-04-16 12:49:54.208314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.274 [2024-04-16 12:49:54.208448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.274 [2024-04-16 12:49:54.208471] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.274 qpair failed and we were unable to recover it. 00:21:55.274 [2024-04-16 12:49:54.208589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.274 [2024-04-16 12:49:54.208757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.274 [2024-04-16 12:49:54.208799] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.274 qpair failed and we were unable to recover it. 00:21:55.274 [2024-04-16 12:49:54.209006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.274 [2024-04-16 12:49:54.209180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.274 [2024-04-16 12:49:54.209244] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7bb0000b90 with addr=10.0.0.2, port=4420 00:21:55.274 qpair failed and we were unable to recover it. 
00:21:55.274 [2024-04-16 12:49:54.211700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.274 [2024-04-16 12:49:54.211885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.274 [2024-04-16 12:49:54.211917] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.274 qpair failed and we were unable to recover it.
[... the same failure pattern repeats for the new tqpair=0x1ff5050 through 12:49:54.222; duplicate records omitted ...]
00:21:55.275 [2024-04-16 12:49:54.222655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.275 [2024-04-16 12:49:54.222805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.275 [2024-04-16 12:49:54.222829] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.275 qpair failed and we were unable to recover it.
00:21:55.275 [2024-04-16 12:49:54.223005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.275 [2024-04-16 12:49:54.223267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.275 [2024-04-16 12:49:54.223316] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.275 qpair failed and we were unable to recover it. 00:21:55.275 [2024-04-16 12:49:54.223549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.275 [2024-04-16 12:49:54.223724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.275 [2024-04-16 12:49:54.223759] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.275 qpair failed and we were unable to recover it. 00:21:55.275 [2024-04-16 12:49:54.223989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.275 [2024-04-16 12:49:54.224216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.275 [2024-04-16 12:49:54.224268] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.275 qpair failed and we were unable to recover it. 00:21:55.275 [2024-04-16 12:49:54.224404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.275 [2024-04-16 12:49:54.224623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.275 [2024-04-16 12:49:54.224662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.275 qpair failed and we were unable to recover it. 00:21:55.275 [2024-04-16 12:49:54.224936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.275 [2024-04-16 12:49:54.225136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.275 [2024-04-16 12:49:54.225185] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.275 qpair failed and we were unable to recover it. 00:21:55.275 [2024-04-16 12:49:54.225383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.275 [2024-04-16 12:49:54.225581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.275 [2024-04-16 12:49:54.225610] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.275 qpair failed and we were unable to recover it. 00:21:55.275 [2024-04-16 12:49:54.225787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.275 [2024-04-16 12:49:54.226007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.275 [2024-04-16 12:49:54.226055] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.275 qpair failed and we were unable to recover it. 
00:21:55.275 [2024-04-16 12:49:54.226203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.275 [2024-04-16 12:49:54.226367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.275 [2024-04-16 12:49:54.226404] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.275 qpair failed and we were unable to recover it. 00:21:55.275 [2024-04-16 12:49:54.226552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.275 [2024-04-16 12:49:54.226721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.275 [2024-04-16 12:49:54.226750] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.275 qpair failed and we were unable to recover it. 00:21:55.275 [2024-04-16 12:49:54.226896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.275 [2024-04-16 12:49:54.227077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.275 [2024-04-16 12:49:54.227105] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.275 qpair failed and we were unable to recover it. 00:21:55.275 [2024-04-16 12:49:54.227236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.275 [2024-04-16 12:49:54.227387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.275 [2024-04-16 12:49:54.227415] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.275 qpair failed and we were unable to recover it. 00:21:55.275 [2024-04-16 12:49:54.227649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.275 [2024-04-16 12:49:54.227791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.275 [2024-04-16 12:49:54.227813] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.275 qpair failed and we were unable to recover it. 00:21:55.275 [2024-04-16 12:49:54.228009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.275 [2024-04-16 12:49:54.228200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.275 [2024-04-16 12:49:54.228258] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.275 qpair failed and we were unable to recover it. 00:21:55.275 [2024-04-16 12:49:54.228425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.275 [2024-04-16 12:49:54.228633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.275 [2024-04-16 12:49:54.228687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.275 qpair failed and we were unable to recover it. 
00:21:55.275 [2024-04-16 12:49:54.228823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.276 [2024-04-16 12:49:54.229051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.276 [2024-04-16 12:49:54.229102] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.276 qpair failed and we were unable to recover it. 00:21:55.276 [2024-04-16 12:49:54.229311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.276 [2024-04-16 12:49:54.229498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.276 [2024-04-16 12:49:54.229526] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.276 qpair failed and we were unable to recover it. 00:21:55.276 [2024-04-16 12:49:54.229738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.276 [2024-04-16 12:49:54.229907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.276 [2024-04-16 12:49:54.229961] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.276 qpair failed and we were unable to recover it. 00:21:55.276 [2024-04-16 12:49:54.230112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.276 [2024-04-16 12:49:54.230274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.276 [2024-04-16 12:49:54.230302] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.276 qpair failed and we were unable to recover it. 00:21:55.276 [2024-04-16 12:49:54.230457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.276 [2024-04-16 12:49:54.230681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.276 [2024-04-16 12:49:54.230710] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.276 qpair failed and we were unable to recover it. 00:21:55.276 [2024-04-16 12:49:54.230884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.276 [2024-04-16 12:49:54.231028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.276 [2024-04-16 12:49:54.231064] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.276 qpair failed and we were unable to recover it. 00:21:55.276 [2024-04-16 12:49:54.231224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.276 [2024-04-16 12:49:54.231398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.276 [2024-04-16 12:49:54.231425] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.276 qpair failed and we were unable to recover it. 
00:21:55.276 [2024-04-16 12:49:54.231555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.276 [2024-04-16 12:49:54.231747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.276 [2024-04-16 12:49:54.231775] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.276 qpair failed and we were unable to recover it. 00:21:55.276 [2024-04-16 12:49:54.231946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.276 [2024-04-16 12:49:54.232167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.276 [2024-04-16 12:49:54.232215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.276 qpair failed and we were unable to recover it. 00:21:55.276 [2024-04-16 12:49:54.232366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.276 [2024-04-16 12:49:54.232529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.276 [2024-04-16 12:49:54.232575] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.276 qpair failed and we were unable to recover it. 00:21:55.276 [2024-04-16 12:49:54.232734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.276 [2024-04-16 12:49:54.232950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.276 [2024-04-16 12:49:54.233003] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.276 qpair failed and we were unable to recover it. 00:21:55.276 [2024-04-16 12:49:54.233163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.276 [2024-04-16 12:49:54.233368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.276 [2024-04-16 12:49:54.233426] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.276 qpair failed and we were unable to recover it. 00:21:55.276 [2024-04-16 12:49:54.233576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.276 [2024-04-16 12:49:54.233834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.276 [2024-04-16 12:49:54.233862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.276 qpair failed and we were unable to recover it. 00:21:55.276 [2024-04-16 12:49:54.234015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.276 [2024-04-16 12:49:54.234138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.276 [2024-04-16 12:49:54.234160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.276 qpair failed and we were unable to recover it. 
00:21:55.276 [2024-04-16 12:49:54.234351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.276 [2024-04-16 12:49:54.234614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.276 [2024-04-16 12:49:54.234643] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.276 qpair failed and we were unable to recover it. 00:21:55.276 [2024-04-16 12:49:54.234801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.276 [2024-04-16 12:49:54.234956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.276 [2024-04-16 12:49:54.234983] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.276 qpair failed and we were unable to recover it. 00:21:55.276 [2024-04-16 12:49:54.235152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.276 [2024-04-16 12:49:54.235380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.276 [2024-04-16 12:49:54.235407] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.276 qpair failed and we were unable to recover it. 00:21:55.276 [2024-04-16 12:49:54.235559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.276 [2024-04-16 12:49:54.235772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.276 [2024-04-16 12:49:54.235830] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.276 qpair failed and we were unable to recover it. 00:21:55.276 [2024-04-16 12:49:54.236011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.276 [2024-04-16 12:49:54.236192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.276 [2024-04-16 12:49:54.236253] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.276 qpair failed and we were unable to recover it. 00:21:55.276 [2024-04-16 12:49:54.236403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.276 [2024-04-16 12:49:54.236582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.276 [2024-04-16 12:49:54.236610] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.276 qpair failed and we were unable to recover it. 00:21:55.276 [2024-04-16 12:49:54.236796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.276 [2024-04-16 12:49:54.237011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.276 [2024-04-16 12:49:54.237061] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.276 qpair failed and we were unable to recover it. 
00:21:55.276 [2024-04-16 12:49:54.237215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.276 [2024-04-16 12:49:54.237476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.276 [2024-04-16 12:49:54.237544] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.276 qpair failed and we were unable to recover it. 00:21:55.276 [2024-04-16 12:49:54.237740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.276 [2024-04-16 12:49:54.237937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.276 [2024-04-16 12:49:54.237986] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.276 qpair failed and we were unable to recover it. 00:21:55.276 [2024-04-16 12:49:54.238128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.276 [2024-04-16 12:49:54.238288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.276 [2024-04-16 12:49:54.238314] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.276 qpair failed and we were unable to recover it. 00:21:55.276 [2024-04-16 12:49:54.238445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.276 [2024-04-16 12:49:54.238560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.276 [2024-04-16 12:49:54.238603] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.276 qpair failed and we were unable to recover it. 00:21:55.276 [2024-04-16 12:49:54.238795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.276 [2024-04-16 12:49:54.238982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.276 [2024-04-16 12:49:54.239032] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.276 qpair failed and we were unable to recover it. 00:21:55.276 [2024-04-16 12:49:54.239189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.276 [2024-04-16 12:49:54.239379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.276 [2024-04-16 12:49:54.239407] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.276 qpair failed and we were unable to recover it. 00:21:55.276 [2024-04-16 12:49:54.239536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.276 [2024-04-16 12:49:54.239726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.276 [2024-04-16 12:49:54.239755] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.276 qpair failed and we were unable to recover it. 
00:21:55.276 [2024-04-16 12:49:54.239951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.276 [2024-04-16 12:49:54.240149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.276 [2024-04-16 12:49:54.240209] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.276 qpair failed and we were unable to recover it. 00:21:55.276 [2024-04-16 12:49:54.240419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.277 [2024-04-16 12:49:54.240601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.277 [2024-04-16 12:49:54.240649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.277 qpair failed and we were unable to recover it. 00:21:55.277 [2024-04-16 12:49:54.240852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.277 [2024-04-16 12:49:54.241019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.277 [2024-04-16 12:49:54.241075] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.277 qpair failed and we were unable to recover it. 00:21:55.277 [2024-04-16 12:49:54.241285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.277 [2024-04-16 12:49:54.241470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.277 [2024-04-16 12:49:54.241498] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.277 qpair failed and we were unable to recover it. 00:21:55.277 [2024-04-16 12:49:54.241712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.277 [2024-04-16 12:49:54.241890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.277 [2024-04-16 12:49:54.241949] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.277 qpair failed and we were unable to recover it. 00:21:55.277 [2024-04-16 12:49:54.242106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.277 [2024-04-16 12:49:54.242230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.277 [2024-04-16 12:49:54.242253] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.277 qpair failed and we were unable to recover it. 00:21:55.277 [2024-04-16 12:49:54.242433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.277 [2024-04-16 12:49:54.242592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.277 [2024-04-16 12:49:54.242620] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.277 qpair failed and we were unable to recover it. 
00:21:55.277 [2024-04-16 12:49:54.242971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.277 [2024-04-16 12:49:54.243209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.277 [2024-04-16 12:49:54.243258] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.277 qpair failed and we were unable to recover it. 00:21:55.277 [2024-04-16 12:49:54.243427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.277 [2024-04-16 12:49:54.243576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.277 [2024-04-16 12:49:54.243615] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.277 qpair failed and we were unable to recover it. 00:21:55.277 [2024-04-16 12:49:54.243842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.277 [2024-04-16 12:49:54.243997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.277 [2024-04-16 12:49:54.244043] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.277 qpair failed and we were unable to recover it. 00:21:55.277 [2024-04-16 12:49:54.244244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.277 [2024-04-16 12:49:54.244406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.277 [2024-04-16 12:49:54.244434] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.277 qpair failed and we were unable to recover it. 00:21:55.277 [2024-04-16 12:49:54.244636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.277 [2024-04-16 12:49:54.244822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.277 [2024-04-16 12:49:54.244848] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.277 qpair failed and we were unable to recover it. 00:21:55.277 [2024-04-16 12:49:54.245005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.277 [2024-04-16 12:49:54.245209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.277 [2024-04-16 12:49:54.245254] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.277 qpair failed and we were unable to recover it. 00:21:55.277 [2024-04-16 12:49:54.245412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.277 [2024-04-16 12:49:54.245568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.277 [2024-04-16 12:49:54.245594] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.277 qpair failed and we were unable to recover it. 
00:21:55.277 [2024-04-16 12:49:54.245759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.277 [2024-04-16 12:49:54.245899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.277 [2024-04-16 12:49:54.245927] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.277 qpair failed and we were unable to recover it. 00:21:55.277 [2024-04-16 12:49:54.246089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.277 [2024-04-16 12:49:54.246279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.277 [2024-04-16 12:49:54.246313] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.277 qpair failed and we were unable to recover it. 00:21:55.277 [2024-04-16 12:49:54.246480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.277 [2024-04-16 12:49:54.246605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.277 [2024-04-16 12:49:54.246633] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.277 qpair failed and we were unable to recover it. 00:21:55.277 [2024-04-16 12:49:54.246819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.277 [2024-04-16 12:49:54.246965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.277 [2024-04-16 12:49:54.246988] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.277 qpair failed and we were unable to recover it. 00:21:55.277 [2024-04-16 12:49:54.247194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.277 [2024-04-16 12:49:54.247325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.277 [2024-04-16 12:49:54.247352] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.277 qpair failed and we were unable to recover it. 00:21:55.277 [2024-04-16 12:49:54.247586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.277 [2024-04-16 12:49:54.247751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.277 [2024-04-16 12:49:54.247779] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.277 qpair failed and we were unable to recover it. 00:21:55.277 [2024-04-16 12:49:54.248018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.277 [2024-04-16 12:49:54.248213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.277 [2024-04-16 12:49:54.248261] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.277 qpair failed and we were unable to recover it. 
00:21:55.277 [2024-04-16 12:49:54.248444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.277 [2024-04-16 12:49:54.248689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.278 [2024-04-16 12:49:54.248728] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.278 qpair failed and we were unable to recover it. 00:21:55.278 [2024-04-16 12:49:54.248915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.278 [2024-04-16 12:49:54.249034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.278 [2024-04-16 12:49:54.249063] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.278 qpair failed and we were unable to recover it. 00:21:55.278 [2024-04-16 12:49:54.249238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.278 [2024-04-16 12:49:54.249421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.278 [2024-04-16 12:49:54.249453] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.278 qpair failed and we were unable to recover it. 00:21:55.278 [2024-04-16 12:49:54.249633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.278 [2024-04-16 12:49:54.249762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.278 [2024-04-16 12:49:54.249800] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.278 qpair failed and we were unable to recover it. 00:21:55.278 [2024-04-16 12:49:54.249981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.278 [2024-04-16 12:49:54.250124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.278 [2024-04-16 12:49:54.250164] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.278 qpair failed and we were unable to recover it. 00:21:55.278 [2024-04-16 12:49:54.250326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.278 [2024-04-16 12:49:54.250486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.278 [2024-04-16 12:49:54.250513] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.278 qpair failed and we were unable to recover it. 00:21:55.278 [2024-04-16 12:49:54.250685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.278 [2024-04-16 12:49:54.250829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.278 [2024-04-16 12:49:54.250853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.278 qpair failed and we were unable to recover it. 
00:21:55.278 [2024-04-16 12:49:54.251063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.278 [2024-04-16 12:49:54.251277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.278 [2024-04-16 12:49:54.251326] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.278 qpair failed and we were unable to recover it. 00:21:55.278 [2024-04-16 12:49:54.251462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.278 [2024-04-16 12:49:54.251612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.278 [2024-04-16 12:49:54.251637] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.278 qpair failed and we were unable to recover it. 00:21:55.278 [2024-04-16 12:49:54.251869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.278 [2024-04-16 12:49:54.252061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.278 [2024-04-16 12:49:54.252109] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.278 qpair failed and we were unable to recover it. 00:21:55.278 [2024-04-16 12:49:54.252238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.278 [2024-04-16 12:49:54.252520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.278 [2024-04-16 12:49:54.252548] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.278 qpair failed and we were unable to recover it. 00:21:55.278 [2024-04-16 12:49:54.252782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.278 [2024-04-16 12:49:54.252960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.278 [2024-04-16 12:49:54.253010] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.278 qpair failed and we were unable to recover it. 00:21:55.278 [2024-04-16 12:49:54.253163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.278 [2024-04-16 12:49:54.253310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.278 [2024-04-16 12:49:54.253332] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.278 qpair failed and we were unable to recover it. 00:21:55.278 [2024-04-16 12:49:54.253513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.278 [2024-04-16 12:49:54.253656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.278 [2024-04-16 12:49:54.253684] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.278 qpair failed and we were unable to recover it. 
00:21:55.278 [2024-04-16 12:49:54.253837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.278 [2024-04-16 12:49:54.254102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.278 [2024-04-16 12:49:54.254155] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.278 qpair failed and we were unable to recover it. 00:21:55.278 [2024-04-16 12:49:54.254355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.278 [2024-04-16 12:49:54.254534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.278 [2024-04-16 12:49:54.254562] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.278 qpair failed and we were unable to recover it. 00:21:55.278 [2024-04-16 12:49:54.254804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.278 [2024-04-16 12:49:54.254988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.278 [2024-04-16 12:49:54.255055] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.278 qpair failed and we were unable to recover it. 00:21:55.278 [2024-04-16 12:49:54.255235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.278 [2024-04-16 12:49:54.255382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.278 [2024-04-16 12:49:54.255409] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.278 qpair failed and we were unable to recover it. 00:21:55.278 [2024-04-16 12:49:54.255536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.278 [2024-04-16 12:49:54.255686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.278 [2024-04-16 12:49:54.255714] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.278 qpair failed and we were unable to recover it. 00:21:55.278 [2024-04-16 12:49:54.255966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.278 [2024-04-16 12:49:54.256193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.278 [2024-04-16 12:49:54.256242] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.278 qpair failed and we were unable to recover it. 00:21:55.278 [2024-04-16 12:49:54.256393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.278 [2024-04-16 12:49:54.256525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.278 [2024-04-16 12:49:54.256547] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.278 qpair failed and we were unable to recover it. 
00:21:55.278 [2024-04-16 12:49:54.256748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.278 [2024-04-16 12:49:54.256917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.278 [2024-04-16 12:49:54.256981] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.278 qpair failed and we were unable to recover it. 00:21:55.278 [2024-04-16 12:49:54.257152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.278 [2024-04-16 12:49:54.257274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.278 [2024-04-16 12:49:54.257301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.278 qpair failed and we were unable to recover it. 00:21:55.278 [2024-04-16 12:49:54.257468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.278 [2024-04-16 12:49:54.257699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.278 [2024-04-16 12:49:54.257728] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.278 qpair failed and we were unable to recover it. 00:21:55.278 [2024-04-16 12:49:54.257883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.278 [2024-04-16 12:49:54.258076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.278 [2024-04-16 12:49:54.258139] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.278 qpair failed and we were unable to recover it. 00:21:55.278 [2024-04-16 12:49:54.258312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.278 [2024-04-16 12:49:54.258458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.278 [2024-04-16 12:49:54.258486] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.278 qpair failed and we were unable to recover it. 00:21:55.278 [2024-04-16 12:49:54.258674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.278 [2024-04-16 12:49:54.258873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.278 [2024-04-16 12:49:54.258931] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.278 qpair failed and we were unable to recover it. 00:21:55.278 [2024-04-16 12:49:54.259098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.278 [2024-04-16 12:49:54.259317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.278 [2024-04-16 12:49:54.259375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.278 qpair failed and we were unable to recover it. 
00:21:55.279 [2024-04-16 12:49:54.259557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.279 [2024-04-16 12:49:54.259725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.279 [2024-04-16 12:49:54.259753] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.279 qpair failed and we were unable to recover it. 00:21:55.279 [2024-04-16 12:49:54.259896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.279 [2024-04-16 12:49:54.260060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.279 [2024-04-16 12:49:54.260088] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.279 qpair failed and we were unable to recover it. 00:21:55.279 [2024-04-16 12:49:54.260235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.279 [2024-04-16 12:49:54.260389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.279 [2024-04-16 12:49:54.260420] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.279 qpair failed and we were unable to recover it. 00:21:55.279 [2024-04-16 12:49:54.260671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.279 [2024-04-16 12:49:54.260832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.279 [2024-04-16 12:49:54.260859] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.279 qpair failed and we were unable to recover it. 00:21:55.279 [2024-04-16 12:49:54.261084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.279 [2024-04-16 12:49:54.261229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.279 [2024-04-16 12:49:54.261257] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.279 qpair failed and we were unable to recover it. 00:21:55.279 [2024-04-16 12:49:54.261430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.279 [2024-04-16 12:49:54.261678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.279 [2024-04-16 12:49:54.261707] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.279 qpair failed and we were unable to recover it. 00:21:55.279 [2024-04-16 12:49:54.261869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.279 [2024-04-16 12:49:54.262050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.279 [2024-04-16 12:49:54.262107] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.279 qpair failed and we were unable to recover it. 
00:21:55.279 [2024-04-16 12:49:54.262274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.279 [2024-04-16 12:49:54.262494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.279 [2024-04-16 12:49:54.262521] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.279 qpair failed and we were unable to recover it. 00:21:55.279 [2024-04-16 12:49:54.262737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.279 [2024-04-16 12:49:54.262967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.279 [2024-04-16 12:49:54.263022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.279 qpair failed and we were unable to recover it. 00:21:55.279 [2024-04-16 12:49:54.263175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.279 [2024-04-16 12:49:54.263369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.279 [2024-04-16 12:49:54.263397] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.279 qpair failed and we were unable to recover it. 00:21:55.279 [2024-04-16 12:49:54.263604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.279 [2024-04-16 12:49:54.263825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.279 [2024-04-16 12:49:54.263852] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.279 qpair failed and we were unable to recover it. 00:21:55.279 [2024-04-16 12:49:54.264044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.279 [2024-04-16 12:49:54.264246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.279 [2024-04-16 12:49:54.264296] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.279 qpair failed and we were unable to recover it. 00:21:55.279 [2024-04-16 12:49:54.264495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.279 [2024-04-16 12:49:54.264606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.279 [2024-04-16 12:49:54.264633] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.279 qpair failed and we were unable to recover it. 00:21:55.279 [2024-04-16 12:49:54.264835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.279 [2024-04-16 12:49:54.265019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.279 [2024-04-16 12:49:54.265077] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.279 qpair failed and we were unable to recover it. 
00:21:55.562 [2024-04-16 12:49:54.321904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.562 [2024-04-16 12:49:54.322081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.562 [2024-04-16 12:49:54.322109] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.562 qpair failed and we were unable to recover it. 00:21:55.562 [2024-04-16 12:49:54.322289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.562 [2024-04-16 12:49:54.322419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.562 [2024-04-16 12:49:54.322447] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.562 qpair failed and we were unable to recover it. 00:21:55.562 [2024-04-16 12:49:54.322595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.562 [2024-04-16 12:49:54.322737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.562 [2024-04-16 12:49:54.322765] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.563 qpair failed and we were unable to recover it. 00:21:55.563 [2024-04-16 12:49:54.322927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.563 [2024-04-16 12:49:54.323072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.563 [2024-04-16 12:49:54.323110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.563 qpair failed and we were unable to recover it. 00:21:55.563 [2024-04-16 12:49:54.323263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.563 [2024-04-16 12:49:54.323435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.563 [2024-04-16 12:49:54.323467] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.563 qpair failed and we were unable to recover it. 00:21:55.563 [2024-04-16 12:49:54.323728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.563 [2024-04-16 12:49:54.323914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.563 [2024-04-16 12:49:54.323964] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.563 qpair failed and we were unable to recover it. 00:21:55.563 [2024-04-16 12:49:54.324130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.563 [2024-04-16 12:49:54.324332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.563 [2024-04-16 12:49:54.324360] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.563 qpair failed and we were unable to recover it. 
00:21:55.563 [2024-04-16 12:49:54.324539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.563 [2024-04-16 12:49:54.324724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.563 [2024-04-16 12:49:54.324752] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.563 qpair failed and we were unable to recover it. 00:21:55.563 [2024-04-16 12:49:54.324906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.563 [2024-04-16 12:49:54.325057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.563 [2024-04-16 12:49:54.325085] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.563 qpair failed and we were unable to recover it. 00:21:55.563 [2024-04-16 12:49:54.325208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.563 [2024-04-16 12:49:54.325376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.563 [2024-04-16 12:49:54.325404] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.563 qpair failed and we were unable to recover it. 00:21:55.563 [2024-04-16 12:49:54.325538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.563 [2024-04-16 12:49:54.325709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.563 [2024-04-16 12:49:54.325737] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.563 qpair failed and we were unable to recover it. 00:21:55.563 [2024-04-16 12:49:54.325872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.563 [2024-04-16 12:49:54.326050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.563 [2024-04-16 12:49:54.326074] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.563 qpair failed and we were unable to recover it. 00:21:55.563 [2024-04-16 12:49:54.326210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.563 [2024-04-16 12:49:54.326369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.563 [2024-04-16 12:49:54.326396] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.563 qpair failed and we were unable to recover it. 00:21:55.563 [2024-04-16 12:49:54.326548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.563 [2024-04-16 12:49:54.326756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.563 [2024-04-16 12:49:54.326785] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.563 qpair failed and we were unable to recover it. 
00:21:55.563 [2024-04-16 12:49:54.326941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.563 [2024-04-16 12:49:54.327093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.563 [2024-04-16 12:49:54.327158] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.563 qpair failed and we were unable to recover it. 00:21:55.563 [2024-04-16 12:49:54.327328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.563 [2024-04-16 12:49:54.327480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.563 [2024-04-16 12:49:54.327504] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.563 qpair failed and we were unable to recover it. 00:21:55.563 [2024-04-16 12:49:54.327673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.563 [2024-04-16 12:49:54.327799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.563 [2024-04-16 12:49:54.327827] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.563 qpair failed and we were unable to recover it. 00:21:55.563 [2024-04-16 12:49:54.328002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.563 [2024-04-16 12:49:54.328151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.563 [2024-04-16 12:49:54.328179] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.563 qpair failed and we were unable to recover it. 00:21:55.563 [2024-04-16 12:49:54.328334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.563 [2024-04-16 12:49:54.328485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.563 [2024-04-16 12:49:54.328512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.563 qpair failed and we were unable to recover it. 00:21:55.563 [2024-04-16 12:49:54.328708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.563 [2024-04-16 12:49:54.328825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.563 [2024-04-16 12:49:54.328850] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.563 qpair failed and we were unable to recover it. 00:21:55.563 [2024-04-16 12:49:54.329068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.563 [2024-04-16 12:49:54.329324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.563 [2024-04-16 12:49:54.329352] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.563 qpair failed and we were unable to recover it. 
00:21:55.563 [2024-04-16 12:49:54.329637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.563 [2024-04-16 12:49:54.329879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.563 [2024-04-16 12:49:54.329912] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.563 qpair failed and we were unable to recover it. 00:21:55.563 [2024-04-16 12:49:54.330173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.563 [2024-04-16 12:49:54.330390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.563 [2024-04-16 12:49:54.330437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.563 qpair failed and we were unable to recover it. 00:21:55.563 [2024-04-16 12:49:54.330637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.563 [2024-04-16 12:49:54.330856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.563 [2024-04-16 12:49:54.330883] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.563 qpair failed and we were unable to recover it. 00:21:55.563 [2024-04-16 12:49:54.331090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.563 [2024-04-16 12:49:54.331309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.563 [2024-04-16 12:49:54.331355] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.563 qpair failed and we were unable to recover it. 00:21:55.563 [2024-04-16 12:49:54.331597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.563 [2024-04-16 12:49:54.331792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.563 [2024-04-16 12:49:54.331821] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.563 qpair failed and we were unable to recover it. 00:21:55.563 [2024-04-16 12:49:54.332039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.563 [2024-04-16 12:49:54.332286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.563 [2024-04-16 12:49:54.332331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.563 qpair failed and we were unable to recover it. 00:21:55.563 [2024-04-16 12:49:54.332546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.563 [2024-04-16 12:49:54.332717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.563 [2024-04-16 12:49:54.332757] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.563 qpair failed and we were unable to recover it. 
00:21:55.563 [2024-04-16 12:49:54.333008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.563 [2024-04-16 12:49:54.333208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.563 [2024-04-16 12:49:54.333253] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.563 qpair failed and we were unable to recover it. 00:21:55.563 [2024-04-16 12:49:54.333506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.563 [2024-04-16 12:49:54.333729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.563 [2024-04-16 12:49:54.333758] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.563 qpair failed and we were unable to recover it. 00:21:55.564 [2024-04-16 12:49:54.333915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.564 [2024-04-16 12:49:54.334140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.564 [2024-04-16 12:49:54.334190] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.564 qpair failed and we were unable to recover it. 00:21:55.564 [2024-04-16 12:49:54.334380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.564 [2024-04-16 12:49:54.334514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.564 [2024-04-16 12:49:54.334554] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.564 qpair failed and we were unable to recover it. 00:21:55.564 [2024-04-16 12:49:54.334835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.564 [2024-04-16 12:49:54.335061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.564 [2024-04-16 12:49:54.335113] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.564 qpair failed and we were unable to recover it. 00:21:55.564 [2024-04-16 12:49:54.335325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.564 [2024-04-16 12:49:54.335446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.564 [2024-04-16 12:49:54.335475] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.564 qpair failed and we were unable to recover it. 00:21:55.564 [2024-04-16 12:49:54.335794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.564 [2024-04-16 12:49:54.336011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.564 [2024-04-16 12:49:54.336061] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.564 qpair failed and we were unable to recover it. 
00:21:55.564 [2024-04-16 12:49:54.336316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.564 [2024-04-16 12:49:54.336633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.564 [2024-04-16 12:49:54.336662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.564 qpair failed and we were unable to recover it. 00:21:55.564 [2024-04-16 12:49:54.336919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.564 [2024-04-16 12:49:54.337155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.564 [2024-04-16 12:49:54.337213] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.564 qpair failed and we were unable to recover it. 00:21:55.564 [2024-04-16 12:49:54.337476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.564 [2024-04-16 12:49:54.337690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.564 [2024-04-16 12:49:54.337719] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.564 qpair failed and we were unable to recover it. 00:21:55.564 [2024-04-16 12:49:54.337923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.564 [2024-04-16 12:49:54.338193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.564 [2024-04-16 12:49:54.338244] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.564 qpair failed and we were unable to recover it. 00:21:55.564 [2024-04-16 12:49:54.338541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.564 [2024-04-16 12:49:54.338764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.564 [2024-04-16 12:49:54.338792] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.564 qpair failed and we were unable to recover it. 00:21:55.564 [2024-04-16 12:49:54.339053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.564 [2024-04-16 12:49:54.339298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.564 [2024-04-16 12:49:54.339327] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.564 qpair failed and we were unable to recover it. 00:21:55.564 [2024-04-16 12:49:54.339532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.564 [2024-04-16 12:49:54.339746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.564 [2024-04-16 12:49:54.339774] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.564 qpair failed and we were unable to recover it. 
00:21:55.564 [2024-04-16 12:49:54.339980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.564 [2024-04-16 12:49:54.340189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.564 [2024-04-16 12:49:54.340242] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.564 qpair failed and we were unable to recover it. 00:21:55.564 [2024-04-16 12:49:54.340522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.564 [2024-04-16 12:49:54.340747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.564 [2024-04-16 12:49:54.340777] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.564 qpair failed and we were unable to recover it. 00:21:55.564 [2024-04-16 12:49:54.341001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.564 [2024-04-16 12:49:54.341181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.564 [2024-04-16 12:49:54.341233] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.564 qpair failed and we were unable to recover it. 00:21:55.564 [2024-04-16 12:49:54.341469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.564 [2024-04-16 12:49:54.341714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.564 [2024-04-16 12:49:54.341743] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.564 qpair failed and we were unable to recover it. 00:21:55.564 [2024-04-16 12:49:54.341994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.564 [2024-04-16 12:49:54.342240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.564 [2024-04-16 12:49:54.342288] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.564 qpair failed and we were unable to recover it. 00:21:55.564 [2024-04-16 12:49:54.342456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.564 [2024-04-16 12:49:54.342625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.564 [2024-04-16 12:49:54.342650] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.564 qpair failed and we were unable to recover it. 00:21:55.564 [2024-04-16 12:49:54.342822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.564 [2024-04-16 12:49:54.343087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.564 [2024-04-16 12:49:54.343141] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.564 qpair failed and we were unable to recover it. 
00:21:55.564 [2024-04-16 12:49:54.343384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.564 [2024-04-16 12:49:54.343615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.564 [2024-04-16 12:49:54.343656] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.564 qpair failed and we were unable to recover it. 00:21:55.564 [2024-04-16 12:49:54.343886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.564 [2024-04-16 12:49:54.344113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.564 [2024-04-16 12:49:54.344179] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.564 qpair failed and we were unable to recover it. 00:21:55.564 [2024-04-16 12:49:54.344412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.564 [2024-04-16 12:49:54.344678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.564 [2024-04-16 12:49:54.344707] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.564 qpair failed and we were unable to recover it. 00:21:55.564 [2024-04-16 12:49:54.344941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.564 [2024-04-16 12:49:54.345139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.564 [2024-04-16 12:49:54.345190] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.564 qpair failed and we were unable to recover it. 00:21:55.564 [2024-04-16 12:49:54.345409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.565 [2024-04-16 12:49:54.345632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.565 [2024-04-16 12:49:54.345661] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.565 qpair failed and we were unable to recover it. 00:21:55.565 [2024-04-16 12:49:54.345857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.565 [2024-04-16 12:49:54.346077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.565 [2024-04-16 12:49:54.346128] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.565 qpair failed and we were unable to recover it. 00:21:55.565 [2024-04-16 12:49:54.346360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.565 [2024-04-16 12:49:54.346630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.565 [2024-04-16 12:49:54.346659] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.565 qpair failed and we were unable to recover it. 
00:21:55.565 [2024-04-16 12:49:54.346931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.565 [2024-04-16 12:49:54.347203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.565 [2024-04-16 12:49:54.347255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.565 qpair failed and we were unable to recover it. 00:21:55.565 [2024-04-16 12:49:54.347487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.565 [2024-04-16 12:49:54.347669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.565 [2024-04-16 12:49:54.347698] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.565 qpair failed and we were unable to recover it. 00:21:55.565 [2024-04-16 12:49:54.347980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.565 [2024-04-16 12:49:54.348240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.565 [2024-04-16 12:49:54.348288] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.565 qpair failed and we were unable to recover it. 00:21:55.565 [2024-04-16 12:49:54.348476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.565 [2024-04-16 12:49:54.348729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.565 [2024-04-16 12:49:54.348773] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.565 qpair failed and we were unable to recover it. 00:21:55.565 [2024-04-16 12:49:54.349007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.565 [2024-04-16 12:49:54.349239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.565 [2024-04-16 12:49:54.349288] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.565 qpair failed and we were unable to recover it. 00:21:55.565 [2024-04-16 12:49:54.349542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.565 [2024-04-16 12:49:54.349806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.565 [2024-04-16 12:49:54.349835] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.565 qpair failed and we were unable to recover it. 00:21:55.565 [2024-04-16 12:49:54.350078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.565 [2024-04-16 12:49:54.350350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.565 [2024-04-16 12:49:54.350401] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.565 qpair failed and we were unable to recover it. 
00:21:55.565 [2024-04-16 12:49:54.350678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.565 [2024-04-16 12:49:54.350918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.565 [2024-04-16 12:49:54.350946] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.565 qpair failed and we were unable to recover it. 00:21:55.565 [2024-04-16 12:49:54.351197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.565 [2024-04-16 12:49:54.351418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.565 [2024-04-16 12:49:54.351463] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.565 qpair failed and we were unable to recover it. 00:21:55.565 [2024-04-16 12:49:54.351758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.565 [2024-04-16 12:49:54.351983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.565 [2024-04-16 12:49:54.352039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.565 qpair failed and we were unable to recover it. 00:21:55.565 [2024-04-16 12:49:54.352286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.565 [2024-04-16 12:49:54.352531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.565 [2024-04-16 12:49:54.352559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.565 qpair failed and we were unable to recover it. 00:21:55.565 [2024-04-16 12:49:54.352711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.565 [2024-04-16 12:49:54.352973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.565 [2024-04-16 12:49:54.353019] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.565 qpair failed and we were unable to recover it. 00:21:55.565 [2024-04-16 12:49:54.353287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.565 [2024-04-16 12:49:54.353502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.565 [2024-04-16 12:49:54.353530] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.565 qpair failed and we were unable to recover it. 00:21:55.565 [2024-04-16 12:49:54.353701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.565 [2024-04-16 12:49:54.353901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.565 [2024-04-16 12:49:54.353941] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.565 qpair failed and we were unable to recover it. 
00:21:55.565 [2024-04-16 12:49:54.354161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.565 [2024-04-16 12:49:54.354468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.565 [2024-04-16 12:49:54.354533] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.565 qpair failed and we were unable to recover it. 00:21:55.565 [2024-04-16 12:49:54.354808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.565 [2024-04-16 12:49:54.355096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.565 [2024-04-16 12:49:54.355153] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.565 qpair failed and we were unable to recover it. 00:21:55.565 [2024-04-16 12:49:54.355344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.565 [2024-04-16 12:49:54.355494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.565 [2024-04-16 12:49:54.355522] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.565 qpair failed and we were unable to recover it. 00:21:55.565 [2024-04-16 12:49:54.355794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.565 [2024-04-16 12:49:54.356026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.565 [2024-04-16 12:49:54.356076] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.566 qpair failed and we were unable to recover it. 00:21:55.566 [2024-04-16 12:49:54.356376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.566 [2024-04-16 12:49:54.356645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.566 [2024-04-16 12:49:54.356675] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.566 qpair failed and we were unable to recover it. 00:21:55.566 [2024-04-16 12:49:54.356898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.566 [2024-04-16 12:49:54.357157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.566 [2024-04-16 12:49:54.357207] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.566 qpair failed and we were unable to recover it. 00:21:55.566 [2024-04-16 12:49:54.357497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.566 [2024-04-16 12:49:54.357714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.566 [2024-04-16 12:49:54.357743] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.566 qpair failed and we were unable to recover it. 
00:21:55.566 [2024-04-16 12:49:54.357996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.566 [2024-04-16 12:49:54.358209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.566 [2024-04-16 12:49:54.358258] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.566 qpair failed and we were unable to recover it. 00:21:55.566 [2024-04-16 12:49:54.358539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.566 [2024-04-16 12:49:54.358807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.566 [2024-04-16 12:49:54.358836] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.566 qpair failed and we were unable to recover it. 00:21:55.566 [2024-04-16 12:49:54.359076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.566 [2024-04-16 12:49:54.359313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.566 [2024-04-16 12:49:54.359369] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.566 qpair failed and we were unable to recover it. 00:21:55.566 [2024-04-16 12:49:54.359576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.566 [2024-04-16 12:49:54.359758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.566 [2024-04-16 12:49:54.359786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.566 qpair failed and we were unable to recover it. 00:21:55.566 [2024-04-16 12:49:54.360000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.566 [2024-04-16 12:49:54.360244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.566 [2024-04-16 12:49:54.360293] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.566 qpair failed and we were unable to recover it. 00:21:55.566 [2024-04-16 12:49:54.360514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.566 [2024-04-16 12:49:54.360776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.566 [2024-04-16 12:49:54.360806] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.566 qpair failed and we were unable to recover it. 00:21:55.566 [2024-04-16 12:49:54.361095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.566 [2024-04-16 12:49:54.361358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.566 [2024-04-16 12:49:54.361407] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.566 qpair failed and we were unable to recover it. 
00:21:55.566 [2024-04-16 12:49:54.361653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.566 [2024-04-16 12:49:54.361880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.566 [2024-04-16 12:49:54.361944] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.566 qpair failed and we were unable to recover it. 00:21:55.566 [2024-04-16 12:49:54.362196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.566 [2024-04-16 12:49:54.362328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.566 [2024-04-16 12:49:54.362379] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.566 qpair failed and we were unable to recover it. 00:21:55.566 [2024-04-16 12:49:54.362536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.566 [2024-04-16 12:49:54.362728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.566 [2024-04-16 12:49:54.362756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.566 qpair failed and we were unable to recover it. 00:21:55.566 [2024-04-16 12:49:54.362996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.566 [2024-04-16 12:49:54.363217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.566 [2024-04-16 12:49:54.363265] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.566 qpair failed and we were unable to recover it. 00:21:55.566 [2024-04-16 12:49:54.363524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.566 [2024-04-16 12:49:54.363771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.566 [2024-04-16 12:49:54.363799] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.566 qpair failed and we were unable to recover it. 00:21:55.566 [2024-04-16 12:49:54.363951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.566 [2024-04-16 12:49:54.364079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.566 [2024-04-16 12:49:54.364106] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.567 qpair failed and we were unable to recover it. 00:21:55.567 [2024-04-16 12:49:54.364308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.567 [2024-04-16 12:49:54.364498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.567 [2024-04-16 12:49:54.364526] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.567 qpair failed and we were unable to recover it. 
00:21:55.567 [2024-04-16 12:49:54.364740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.567 [2024-04-16 12:49:54.364919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.567 [2024-04-16 12:49:54.364971] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.567 qpair failed and we were unable to recover it. 00:21:55.567 [2024-04-16 12:49:54.365177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.567 [2024-04-16 12:49:54.365367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.567 [2024-04-16 12:49:54.365415] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.567 qpair failed and we were unable to recover it. 00:21:55.567 [2024-04-16 12:49:54.365593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.567 [2024-04-16 12:49:54.365718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.567 [2024-04-16 12:49:54.365747] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.567 qpair failed and we were unable to recover it. 00:21:55.567 [2024-04-16 12:49:54.365952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.567 [2024-04-16 12:49:54.366182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.567 [2024-04-16 12:49:54.366231] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.567 qpair failed and we were unable to recover it. 00:21:55.567 [2024-04-16 12:49:54.366460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.567 [2024-04-16 12:49:54.366634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.567 [2024-04-16 12:49:54.366663] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.567 qpair failed and we were unable to recover it. 00:21:55.567 [2024-04-16 12:49:54.366868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.567 [2024-04-16 12:49:54.367058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.567 [2024-04-16 12:49:54.367108] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.567 qpair failed and we were unable to recover it. 00:21:55.567 [2024-04-16 12:49:54.367289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.567 [2024-04-16 12:49:54.367456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.567 [2024-04-16 12:49:54.367484] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.567 qpair failed and we were unable to recover it. 
00:21:55.567 [2024-04-16 12:49:54.367677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:55.567 [2024-04-16 12:49:54.367848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:55.567 [2024-04-16 12:49:54.367876] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:55.567 qpair failed and we were unable to recover it.
[... the same four-line failure pattern repeats verbatim for every subsequent reconnect attempt from 12:49:54.368 through 12:49:54.430 (Jenkins timestamps 00:21:55.567 through 00:21:55.573): two connect() failures with errno = 111 in posix.c:1037:posix_sock_create, followed by a sock connection error for tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 in nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock, followed by "qpair failed and we were unable to recover it." ...]
00:21:55.573 [2024-04-16 12:49:54.430421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.573 [2024-04-16 12:49:54.430628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.573 [2024-04-16 12:49:54.430654] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.573 qpair failed and we were unable to recover it. 00:21:55.573 [2024-04-16 12:49:54.430850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.573 [2024-04-16 12:49:54.431070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.573 [2024-04-16 12:49:54.431115] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.573 qpair failed and we were unable to recover it. 00:21:55.573 [2024-04-16 12:49:54.431339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.573 [2024-04-16 12:49:54.431544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.573 [2024-04-16 12:49:54.431572] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.573 qpair failed and we were unable to recover it. 00:21:55.573 [2024-04-16 12:49:54.431803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.573 [2024-04-16 12:49:54.431967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.573 [2024-04-16 12:49:54.432012] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.573 qpair failed and we were unable to recover it. 00:21:55.573 [2024-04-16 12:49:54.432206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.573 [2024-04-16 12:49:54.432357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.573 [2024-04-16 12:49:54.432382] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.573 qpair failed and we were unable to recover it. 00:21:55.573 [2024-04-16 12:49:54.432575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.573 [2024-04-16 12:49:54.432713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.573 [2024-04-16 12:49:54.432751] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.573 qpair failed and we were unable to recover it. 00:21:55.573 [2024-04-16 12:49:54.433003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.573 [2024-04-16 12:49:54.433221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.573 [2024-04-16 12:49:54.433256] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.573 qpair failed and we were unable to recover it. 
00:21:55.573 [2024-04-16 12:49:54.433528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.573 [2024-04-16 12:49:54.433744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.573 [2024-04-16 12:49:54.433773] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.573 qpair failed and we were unable to recover it. 00:21:55.573 [2024-04-16 12:49:54.433922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.573 [2024-04-16 12:49:54.434185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.573 [2024-04-16 12:49:54.434211] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.573 qpair failed and we were unable to recover it. 00:21:55.573 [2024-04-16 12:49:54.434390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.573 [2024-04-16 12:49:54.434632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.573 [2024-04-16 12:49:54.434659] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.573 qpair failed and we were unable to recover it. 00:21:55.573 [2024-04-16 12:49:54.434920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.573 [2024-04-16 12:49:54.435174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.573 [2024-04-16 12:49:54.435198] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.573 qpair failed and we were unable to recover it. 00:21:55.573 [2024-04-16 12:49:54.435327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.573 [2024-04-16 12:49:54.435462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.573 [2024-04-16 12:49:54.435486] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.573 qpair failed and we were unable to recover it. 00:21:55.573 [2024-04-16 12:49:54.435687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.573 [2024-04-16 12:49:54.435880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.573 [2024-04-16 12:49:54.435921] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.573 qpair failed and we were unable to recover it. 00:21:55.573 [2024-04-16 12:49:54.436156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.573 [2024-04-16 12:49:54.436317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.573 [2024-04-16 12:49:54.436361] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.573 qpair failed and we were unable to recover it. 
00:21:55.573 [2024-04-16 12:49:54.436579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.573 [2024-04-16 12:49:54.436766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.573 [2024-04-16 12:49:54.436792] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.573 qpair failed and we were unable to recover it. 00:21:55.573 [2024-04-16 12:49:54.437055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.573 [2024-04-16 12:49:54.437187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.573 [2024-04-16 12:49:54.437211] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.573 qpair failed and we were unable to recover it. 00:21:55.573 [2024-04-16 12:49:54.437432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.573 [2024-04-16 12:49:54.437675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.573 [2024-04-16 12:49:54.437701] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.573 qpair failed and we were unable to recover it. 00:21:55.573 [2024-04-16 12:49:54.437963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.573 [2024-04-16 12:49:54.438187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.573 [2024-04-16 12:49:54.438233] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.573 qpair failed and we were unable to recover it. 00:21:55.573 [2024-04-16 12:49:54.438469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.573 [2024-04-16 12:49:54.438662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.573 [2024-04-16 12:49:54.438689] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.573 qpair failed and we were unable to recover it. 00:21:55.573 [2024-04-16 12:49:54.438959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.573 [2024-04-16 12:49:54.439145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.573 [2024-04-16 12:49:54.439174] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.573 qpair failed and we were unable to recover it. 00:21:55.573 [2024-04-16 12:49:54.439401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.573 [2024-04-16 12:49:54.439615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.573 [2024-04-16 12:49:54.439641] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.573 qpair failed and we were unable to recover it. 
00:21:55.573 [2024-04-16 12:49:54.439817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.573 [2024-04-16 12:49:54.439980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.573 [2024-04-16 12:49:54.440034] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.573 qpair failed and we were unable to recover it. 00:21:55.573 [2024-04-16 12:49:54.440289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.573 [2024-04-16 12:49:54.440506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.573 [2024-04-16 12:49:54.440535] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.573 qpair failed and we were unable to recover it. 00:21:55.573 [2024-04-16 12:49:54.440695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.573 [2024-04-16 12:49:54.440900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.573 [2024-04-16 12:49:54.440948] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.573 qpair failed and we were unable to recover it. 00:21:55.573 [2024-04-16 12:49:54.441170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.573 [2024-04-16 12:49:54.441375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.573 [2024-04-16 12:49:54.441414] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.573 qpair failed and we were unable to recover it. 00:21:55.573 [2024-04-16 12:49:54.441646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.573 [2024-04-16 12:49:54.441809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.573 [2024-04-16 12:49:54.441847] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.573 qpair failed and we were unable to recover it. 00:21:55.573 [2024-04-16 12:49:54.442059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.573 [2024-04-16 12:49:54.442273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.573 [2024-04-16 12:49:54.442297] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.574 qpair failed and we were unable to recover it. 00:21:55.574 [2024-04-16 12:49:54.442581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.574 [2024-04-16 12:49:54.442798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.574 [2024-04-16 12:49:54.442828] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.574 qpair failed and we were unable to recover it. 
00:21:55.574 [2024-04-16 12:49:54.443093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.574 [2024-04-16 12:49:54.443302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.574 [2024-04-16 12:49:54.443331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.574 qpair failed and we were unable to recover it. 00:21:55.574 [2024-04-16 12:49:54.443593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.574 [2024-04-16 12:49:54.443746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.574 [2024-04-16 12:49:54.443776] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.574 qpair failed and we were unable to recover it. 00:21:55.574 [2024-04-16 12:49:54.444014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.574 [2024-04-16 12:49:54.444229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.574 [2024-04-16 12:49:54.444277] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.574 qpair failed and we were unable to recover it. 00:21:55.574 [2024-04-16 12:49:54.444548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.574 [2024-04-16 12:49:54.444753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.574 [2024-04-16 12:49:54.444781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.574 qpair failed and we were unable to recover it. 00:21:55.574 [2024-04-16 12:49:54.445010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.574 [2024-04-16 12:49:54.445235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.574 [2024-04-16 12:49:54.445281] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.574 qpair failed and we were unable to recover it. 00:21:55.574 [2024-04-16 12:49:54.445476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.574 [2024-04-16 12:49:54.445713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.574 [2024-04-16 12:49:54.445739] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.574 qpair failed and we were unable to recover it. 00:21:55.574 [2024-04-16 12:49:54.445964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.574 [2024-04-16 12:49:54.446154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.574 [2024-04-16 12:49:54.446176] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.574 qpair failed and we were unable to recover it. 
00:21:55.574 [2024-04-16 12:49:54.446345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.574 [2024-04-16 12:49:54.446542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.574 [2024-04-16 12:49:54.446588] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.574 qpair failed and we were unable to recover it. 00:21:55.574 [2024-04-16 12:49:54.446805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.574 [2024-04-16 12:49:54.447022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.574 [2024-04-16 12:49:54.447045] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.574 qpair failed and we were unable to recover it. 00:21:55.574 [2024-04-16 12:49:54.447253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.574 [2024-04-16 12:49:54.447465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.574 [2024-04-16 12:49:54.447489] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.574 qpair failed and we were unable to recover it. 00:21:55.574 [2024-04-16 12:49:54.447744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.574 [2024-04-16 12:49:54.447992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.574 [2024-04-16 12:49:54.448015] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.574 qpair failed and we were unable to recover it. 00:21:55.574 [2024-04-16 12:49:54.448255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.574 [2024-04-16 12:49:54.448495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.574 [2024-04-16 12:49:54.448519] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.574 qpair failed and we were unable to recover it. 00:21:55.574 [2024-04-16 12:49:54.448711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.574 [2024-04-16 12:49:54.448887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.574 [2024-04-16 12:49:54.448912] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.574 qpair failed and we were unable to recover it. 00:21:55.574 [2024-04-16 12:49:54.449067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.574 [2024-04-16 12:49:54.449232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.574 [2024-04-16 12:49:54.449269] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.574 qpair failed and we were unable to recover it. 
00:21:55.574 [2024-04-16 12:49:54.449478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.574 [2024-04-16 12:49:54.449631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.574 [2024-04-16 12:49:54.449658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.574 qpair failed and we were unable to recover it. 00:21:55.574 [2024-04-16 12:49:54.449835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.574 [2024-04-16 12:49:54.450019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.574 [2024-04-16 12:49:54.450042] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.574 qpair failed and we were unable to recover it. 00:21:55.574 [2024-04-16 12:49:54.450173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.574 [2024-04-16 12:49:54.450308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.574 [2024-04-16 12:49:54.450332] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.574 qpair failed and we were unable to recover it. 00:21:55.574 [2024-04-16 12:49:54.450503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.574 [2024-04-16 12:49:54.450674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.574 [2024-04-16 12:49:54.450700] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.574 qpair failed and we were unable to recover it. 00:21:55.574 [2024-04-16 12:49:54.450872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.574 [2024-04-16 12:49:54.451090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.574 [2024-04-16 12:49:54.451119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.574 qpair failed and we were unable to recover it. 00:21:55.574 [2024-04-16 12:49:54.451360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.574 [2024-04-16 12:49:54.451618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.574 [2024-04-16 12:49:54.451646] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.574 qpair failed and we were unable to recover it. 00:21:55.574 [2024-04-16 12:49:54.451824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.574 [2024-04-16 12:49:54.452074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.574 [2024-04-16 12:49:54.452126] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.574 qpair failed and we were unable to recover it. 
00:21:55.574 [2024-04-16 12:49:54.452377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.574 [2024-04-16 12:49:54.452600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.574 [2024-04-16 12:49:54.452637] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.574 qpair failed and we were unable to recover it. 00:21:55.574 [2024-04-16 12:49:54.452786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.574 [2024-04-16 12:49:54.452996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.574 [2024-04-16 12:49:54.453041] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.574 qpair failed and we were unable to recover it. 00:21:55.574 [2024-04-16 12:49:54.453340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.574 [2024-04-16 12:49:54.453589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.574 [2024-04-16 12:49:54.453616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.574 qpair failed and we were unable to recover it. 00:21:55.574 [2024-04-16 12:49:54.453767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.574 [2024-04-16 12:49:54.453957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.574 [2024-04-16 12:49:54.453985] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.574 qpair failed and we were unable to recover it. 00:21:55.574 [2024-04-16 12:49:54.454222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.574 [2024-04-16 12:49:54.454442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.574 [2024-04-16 12:49:54.454467] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.574 qpair failed and we were unable to recover it. 00:21:55.574 [2024-04-16 12:49:54.454672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.574 [2024-04-16 12:49:54.454823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.574 [2024-04-16 12:49:54.454871] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.574 qpair failed and we were unable to recover it. 00:21:55.574 [2024-04-16 12:49:54.455070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.574 [2024-04-16 12:49:54.455290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.574 [2024-04-16 12:49:54.455335] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.574 qpair failed and we were unable to recover it. 
00:21:55.574 [2024-04-16 12:49:54.455631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.575 [2024-04-16 12:49:54.455791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.575 [2024-04-16 12:49:54.455819] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.575 qpair failed and we were unable to recover it. 00:21:55.575 [2024-04-16 12:49:54.456011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.575 [2024-04-16 12:49:54.456178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.575 [2024-04-16 12:49:54.456202] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.575 qpair failed and we were unable to recover it. 00:21:55.575 [2024-04-16 12:49:54.456426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.575 [2024-04-16 12:49:54.456654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.575 [2024-04-16 12:49:54.456680] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.575 qpair failed and we were unable to recover it. 00:21:55.575 [2024-04-16 12:49:54.456869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.575 [2024-04-16 12:49:54.457090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.575 [2024-04-16 12:49:54.457114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.575 qpair failed and we were unable to recover it. 00:21:55.575 [2024-04-16 12:49:54.457352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.575 [2024-04-16 12:49:54.457575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.575 [2024-04-16 12:49:54.457601] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.575 qpair failed and we were unable to recover it. 00:21:55.575 [2024-04-16 12:49:54.457778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.575 [2024-04-16 12:49:54.457945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.575 [2024-04-16 12:49:54.457994] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.575 qpair failed and we were unable to recover it. 00:21:55.575 [2024-04-16 12:49:54.458283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.575 [2024-04-16 12:49:54.458517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.575 [2024-04-16 12:49:54.458574] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.575 qpair failed and we were unable to recover it. 
00:21:55.575 [2024-04-16 12:49:54.458753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.575 [2024-04-16 12:49:54.458914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.575 [2024-04-16 12:49:54.458954] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.575 qpair failed and we were unable to recover it. 00:21:55.575 [2024-04-16 12:49:54.459203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.575 [2024-04-16 12:49:54.459477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.575 [2024-04-16 12:49:54.459524] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.575 qpair failed and we were unable to recover it. 00:21:55.575 [2024-04-16 12:49:54.459747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.575 [2024-04-16 12:49:54.459952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.575 [2024-04-16 12:49:54.459975] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.575 qpair failed and we were unable to recover it. 00:21:55.575 [2024-04-16 12:49:54.460215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.575 [2024-04-16 12:49:54.460470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.575 [2024-04-16 12:49:54.460506] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.575 qpair failed and we were unable to recover it. 00:21:55.575 [2024-04-16 12:49:54.460752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.575 [2024-04-16 12:49:54.460892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.575 [2024-04-16 12:49:54.460917] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.575 qpair failed and we were unable to recover it. 00:21:55.575 [2024-04-16 12:49:54.461140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.575 [2024-04-16 12:49:54.461451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.575 [2024-04-16 12:49:54.461497] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.575 qpair failed and we were unable to recover it. 00:21:55.575 [2024-04-16 12:49:54.461715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.575 [2024-04-16 12:49:54.461882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.575 [2024-04-16 12:49:54.461907] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.575 qpair failed and we were unable to recover it. 
00:21:55.575 [2024-04-16 12:49:54.462143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.575 [2024-04-16 12:49:54.462461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.575 [2024-04-16 12:49:54.462492] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.575 qpair failed and we were unable to recover it. 00:21:55.575 [2024-04-16 12:49:54.462682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.575 [2024-04-16 12:49:54.462840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.575 [2024-04-16 12:49:54.462879] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.575 qpair failed and we were unable to recover it. 00:21:55.575 [2024-04-16 12:49:54.463095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.575 [2024-04-16 12:49:54.463393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.575 [2024-04-16 12:49:54.463441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.575 qpair failed and we were unable to recover it. 00:21:55.575 [2024-04-16 12:49:54.463722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.575 [2024-04-16 12:49:54.463890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.575 [2024-04-16 12:49:54.463916] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.575 qpair failed and we were unable to recover it. 00:21:55.575 [2024-04-16 12:49:54.464228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.575 [2024-04-16 12:49:54.464528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.575 [2024-04-16 12:49:54.464573] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.575 qpair failed and we were unable to recover it. 00:21:55.575 [2024-04-16 12:49:54.464751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.575 [2024-04-16 12:49:54.464963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.575 [2024-04-16 12:49:54.464991] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.575 qpair failed and we were unable to recover it. 00:21:55.575 [2024-04-16 12:49:54.465190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.575 [2024-04-16 12:49:54.465434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.575 [2024-04-16 12:49:54.465463] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.575 qpair failed and we were unable to recover it. 
00:21:55.575 [2024-04-16 12:49:54.465724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.575 [2024-04-16 12:49:54.465873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.575 [2024-04-16 12:49:54.465911] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.575 qpair failed and we were unable to recover it. 00:21:55.575 [2024-04-16 12:49:54.466090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.575 [2024-04-16 12:49:54.466285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.575 [2024-04-16 12:49:54.466318] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.575 qpair failed and we were unable to recover it. 00:21:55.575 [2024-04-16 12:49:54.466548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.575 [2024-04-16 12:49:54.466773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.575 [2024-04-16 12:49:54.466799] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.575 qpair failed and we were unable to recover it. 00:21:55.575 [2024-04-16 12:49:54.467067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.575 [2024-04-16 12:49:54.467291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.575 [2024-04-16 12:49:54.467314] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.575 qpair failed and we were unable to recover it. 00:21:55.575 [2024-04-16 12:49:54.467578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.575 [2024-04-16 12:49:54.467729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.575 [2024-04-16 12:49:54.467754] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.575 qpair failed and we were unable to recover it. 00:21:55.575 [2024-04-16 12:49:54.467940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.575 [2024-04-16 12:49:54.468238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.575 [2024-04-16 12:49:54.468262] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.575 qpair failed and we were unable to recover it. 00:21:55.575 [2024-04-16 12:49:54.468587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.575 [2024-04-16 12:49:54.468767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.575 [2024-04-16 12:49:54.468795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.575 qpair failed and we were unable to recover it. 
00:21:55.575 [2024-04-16 12:49:54.468985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.575 [2024-04-16 12:49:54.469176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.575 [2024-04-16 12:49:54.469203] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.575 qpair failed and we were unable to recover it. 00:21:55.575 [2024-04-16 12:49:54.469386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.575 [2024-04-16 12:49:54.469586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.575 [2024-04-16 12:49:54.469629] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.575 qpair failed and we were unable to recover it. 00:21:55.575 [2024-04-16 12:49:54.469808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.575 [2024-04-16 12:49:54.469975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.575 [2024-04-16 12:49:54.470003] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.575 qpair failed and we were unable to recover it. 00:21:55.575 [2024-04-16 12:49:54.470174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.575 [2024-04-16 12:49:54.470385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.575 [2024-04-16 12:49:54.470429] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.576 qpair failed and we were unable to recover it. 00:21:55.576 [2024-04-16 12:49:54.470628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.576 [2024-04-16 12:49:54.470749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.576 [2024-04-16 12:49:54.470773] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.576 qpair failed and we were unable to recover it. 00:21:55.576 [2024-04-16 12:49:54.470988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.576 [2024-04-16 12:49:54.471169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.576 [2024-04-16 12:49:54.471197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.576 qpair failed and we were unable to recover it. 00:21:55.576 [2024-04-16 12:49:54.471438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.576 [2024-04-16 12:49:54.471587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.576 [2024-04-16 12:49:54.471629] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.576 qpair failed and we were unable to recover it. 
00:21:55.576 [2024-04-16 12:49:54.471794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.576 [2024-04-16 12:49:54.472006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.576 [2024-04-16 12:49:54.472052] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.576 qpair failed and we were unable to recover it. 00:21:55.576 [2024-04-16 12:49:54.472280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.576 [2024-04-16 12:49:54.472514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.576 [2024-04-16 12:49:54.472541] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.576 qpair failed and we were unable to recover it. 00:21:55.576 [2024-04-16 12:49:54.472715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.576 [2024-04-16 12:49:54.472873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.576 [2024-04-16 12:49:54.472905] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.576 qpair failed and we were unable to recover it. 00:21:55.576 [2024-04-16 12:49:54.473103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.576 [2024-04-16 12:49:54.473281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.576 [2024-04-16 12:49:54.473336] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.576 qpair failed and we were unable to recover it. 00:21:55.576 [2024-04-16 12:49:54.473525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.576 [2024-04-16 12:49:54.473715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.576 [2024-04-16 12:49:54.473740] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.576 qpair failed and we were unable to recover it. 00:21:55.576 [2024-04-16 12:49:54.473944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.576 [2024-04-16 12:49:54.474163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.576 [2024-04-16 12:49:54.474207] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.576 qpair failed and we were unable to recover it. 00:21:55.576 [2024-04-16 12:49:54.474455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.576 [2024-04-16 12:49:54.474655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.576 [2024-04-16 12:49:54.474680] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.576 qpair failed and we were unable to recover it. 
00:21:55.576 [2024-04-16 12:49:54.474839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:55.576 [2024-04-16 12:49:54.475030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:55.576 [2024-04-16 12:49:54.475074] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:55.576 qpair failed and we were unable to recover it.
00:21:55.576 [2024-04-16 12:49:54.475348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:55.576 [2024-04-16 12:49:54.475576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:55.576 [2024-04-16 12:49:54.475620] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:55.576 qpair failed and we were unable to recover it.
[... the same four-line failure pattern (two posix.c:1037:posix_sock_create "connect() failed, errno = 111" errors, one nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420", then "qpair failed and we were unable to recover it.") repeats without variation through 00:21:55.581 [2024-04-16 12:49:54.539777] ...]
00:21:55.581 [2024-04-16 12:49:54.539976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.581 [2024-04-16 12:49:54.540153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.581 [2024-04-16 12:49:54.540180] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.581 qpair failed and we were unable to recover it. 00:21:55.581 [2024-04-16 12:49:54.540430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.581 [2024-04-16 12:49:54.540680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.581 [2024-04-16 12:49:54.540708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.581 qpair failed and we were unable to recover it. 00:21:55.581 [2024-04-16 12:49:54.540852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.581 [2024-04-16 12:49:54.541072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.581 [2024-04-16 12:49:54.541098] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.581 qpair failed and we were unable to recover it. 00:21:55.581 [2024-04-16 12:49:54.541366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.581 [2024-04-16 12:49:54.541632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.581 [2024-04-16 12:49:54.541659] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.581 qpair failed and we were unable to recover it. 00:21:55.581 [2024-04-16 12:49:54.541857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.581 [2024-04-16 12:49:54.542052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.581 [2024-04-16 12:49:54.542079] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.581 qpair failed and we were unable to recover it. 00:21:55.581 [2024-04-16 12:49:54.542302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.581 [2024-04-16 12:49:54.542519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.581 [2024-04-16 12:49:54.542546] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.581 qpair failed and we were unable to recover it. 00:21:55.581 [2024-04-16 12:49:54.542719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.581 [2024-04-16 12:49:54.542898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.581 [2024-04-16 12:49:54.542925] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.581 qpair failed and we were unable to recover it. 
00:21:55.581 [2024-04-16 12:49:54.543193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.581 [2024-04-16 12:49:54.543387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.581 [2024-04-16 12:49:54.543414] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.581 qpair failed and we were unable to recover it. 00:21:55.581 [2024-04-16 12:49:54.543619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.581 [2024-04-16 12:49:54.543825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.581 [2024-04-16 12:49:54.543852] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.581 qpair failed and we were unable to recover it. 00:21:55.581 [2024-04-16 12:49:54.543992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.581 [2024-04-16 12:49:54.544166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.581 [2024-04-16 12:49:54.544202] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.581 qpair failed and we were unable to recover it. 00:21:55.581 [2024-04-16 12:49:54.544510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.581 [2024-04-16 12:49:54.544738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.581 [2024-04-16 12:49:54.544762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.581 qpair failed and we were unable to recover it. 00:21:55.581 [2024-04-16 12:49:54.544953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.581 [2024-04-16 12:49:54.545161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.581 [2024-04-16 12:49:54.545194] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.581 qpair failed and we were unable to recover it. 00:21:55.581 [2024-04-16 12:49:54.545407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.582 [2024-04-16 12:49:54.545640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.582 [2024-04-16 12:49:54.545667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.582 qpair failed and we were unable to recover it. 00:21:55.582 [2024-04-16 12:49:54.545895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.582 [2024-04-16 12:49:54.546082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.582 [2024-04-16 12:49:54.546109] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.582 qpair failed and we were unable to recover it. 
00:21:55.582 [2024-04-16 12:49:54.546320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.582 [2024-04-16 12:49:54.546522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.582 [2024-04-16 12:49:54.546549] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.582 qpair failed and we were unable to recover it. 00:21:55.582 [2024-04-16 12:49:54.546825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.582 [2024-04-16 12:49:54.547135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.582 [2024-04-16 12:49:54.547162] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.582 qpair failed and we were unable to recover it. 00:21:55.582 [2024-04-16 12:49:54.547426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.582 [2024-04-16 12:49:54.547616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.582 [2024-04-16 12:49:54.547644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.582 qpair failed and we were unable to recover it. 00:21:55.582 [2024-04-16 12:49:54.547917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.582 [2024-04-16 12:49:54.548059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.582 [2024-04-16 12:49:54.548086] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.582 qpair failed and we were unable to recover it. 00:21:55.582 [2024-04-16 12:49:54.548269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.582 [2024-04-16 12:49:54.548483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.582 [2024-04-16 12:49:54.548510] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.582 qpair failed and we were unable to recover it. 00:21:55.582 [2024-04-16 12:49:54.548749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.582 [2024-04-16 12:49:54.548965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.582 [2024-04-16 12:49:54.548993] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.582 qpair failed and we were unable to recover it. 00:21:55.582 [2024-04-16 12:49:54.549182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.582 [2024-04-16 12:49:54.549369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.582 [2024-04-16 12:49:54.549396] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.582 qpair failed and we were unable to recover it. 
00:21:55.582 [2024-04-16 12:49:54.549618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.582 [2024-04-16 12:49:54.549859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.582 [2024-04-16 12:49:54.549886] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.582 qpair failed and we were unable to recover it. 00:21:55.582 [2024-04-16 12:49:54.550142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.582 [2024-04-16 12:49:54.550306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.582 [2024-04-16 12:49:54.550333] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.582 qpair failed and we were unable to recover it. 00:21:55.582 [2024-04-16 12:49:54.550561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.582 [2024-04-16 12:49:54.550793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.582 [2024-04-16 12:49:54.550820] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.582 qpair failed and we were unable to recover it. 00:21:55.582 [2024-04-16 12:49:54.551066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.582 [2024-04-16 12:49:54.551300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.582 [2024-04-16 12:49:54.551326] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.582 qpair failed and we were unable to recover it. 00:21:55.582 [2024-04-16 12:49:54.551584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.582 [2024-04-16 12:49:54.551872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.582 [2024-04-16 12:49:54.551900] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.582 qpair failed and we were unable to recover it. 00:21:55.582 [2024-04-16 12:49:54.552156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.582 [2024-04-16 12:49:54.552360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.582 [2024-04-16 12:49:54.552387] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.582 qpair failed and we were unable to recover it. 00:21:55.582 [2024-04-16 12:49:54.552610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.582 [2024-04-16 12:49:54.552797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.582 [2024-04-16 12:49:54.552825] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.582 qpair failed and we were unable to recover it. 
00:21:55.582 [2024-04-16 12:49:54.553004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.582 [2024-04-16 12:49:54.553163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.582 [2024-04-16 12:49:54.553213] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.582 qpair failed and we were unable to recover it. 00:21:55.582 [2024-04-16 12:49:54.553463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.582 [2024-04-16 12:49:54.553660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.582 [2024-04-16 12:49:54.553688] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.582 qpair failed and we were unable to recover it. 00:21:55.582 [2024-04-16 12:49:54.553951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.582 [2024-04-16 12:49:54.554149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.582 [2024-04-16 12:49:54.554177] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.582 qpair failed and we were unable to recover it. 00:21:55.582 [2024-04-16 12:49:54.554411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.582 [2024-04-16 12:49:54.554665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.582 [2024-04-16 12:49:54.554692] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.582 qpair failed and we were unable to recover it. 00:21:55.582 [2024-04-16 12:49:54.554947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.582 [2024-04-16 12:49:54.555221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.582 [2024-04-16 12:49:54.555248] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.582 qpair failed and we were unable to recover it. 00:21:55.582 [2024-04-16 12:49:54.555456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.582 [2024-04-16 12:49:54.555613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.582 [2024-04-16 12:49:54.555635] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.582 qpair failed and we were unable to recover it. 00:21:55.582 [2024-04-16 12:49:54.555883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.582 [2024-04-16 12:49:54.556063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.582 [2024-04-16 12:49:54.556090] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.582 qpair failed and we were unable to recover it. 
00:21:55.582 [2024-04-16 12:49:54.556359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.582 [2024-04-16 12:49:54.556614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.582 [2024-04-16 12:49:54.556641] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.582 qpair failed and we were unable to recover it. 00:21:55.582 [2024-04-16 12:49:54.556905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.582 [2024-04-16 12:49:54.557123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.582 [2024-04-16 12:49:54.557151] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.582 qpair failed and we were unable to recover it. 00:21:55.582 [2024-04-16 12:49:54.557396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.582 [2024-04-16 12:49:54.557590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.582 [2024-04-16 12:49:54.557618] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.582 qpair failed and we were unable to recover it. 00:21:55.582 [2024-04-16 12:49:54.557832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.582 [2024-04-16 12:49:54.558016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.582 [2024-04-16 12:49:54.558043] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.582 qpair failed and we were unable to recover it. 00:21:55.582 [2024-04-16 12:49:54.558277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.582 [2024-04-16 12:49:54.558603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.582 [2024-04-16 12:49:54.558631] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.582 qpair failed and we were unable to recover it. 00:21:55.582 [2024-04-16 12:49:54.558805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.582 [2024-04-16 12:49:54.559002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.582 [2024-04-16 12:49:54.559034] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.582 qpair failed and we were unable to recover it. 00:21:55.582 [2024-04-16 12:49:54.559227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.582 [2024-04-16 12:49:54.559388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.582 [2024-04-16 12:49:54.559415] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.582 qpair failed and we were unable to recover it. 
00:21:55.582 [2024-04-16 12:49:54.559577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.582 [2024-04-16 12:49:54.559804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.582 [2024-04-16 12:49:54.559831] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.582 qpair failed and we were unable to recover it. 00:21:55.582 [2024-04-16 12:49:54.560056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.582 [2024-04-16 12:49:54.560311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.582 [2024-04-16 12:49:54.560338] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.582 qpair failed and we were unable to recover it. 00:21:55.582 [2024-04-16 12:49:54.560534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.582 [2024-04-16 12:49:54.560803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.582 [2024-04-16 12:49:54.560826] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.582 qpair failed and we were unable to recover it. 00:21:55.582 [2024-04-16 12:49:54.561077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.582 [2024-04-16 12:49:54.561255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.582 [2024-04-16 12:49:54.561282] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.582 qpair failed and we were unable to recover it. 00:21:55.582 [2024-04-16 12:49:54.561472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.582 [2024-04-16 12:49:54.561720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.583 [2024-04-16 12:49:54.561748] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.583 qpair failed and we were unable to recover it. 00:21:55.583 [2024-04-16 12:49:54.562018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.583 [2024-04-16 12:49:54.562215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.583 [2024-04-16 12:49:54.562242] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.583 qpair failed and we were unable to recover it. 00:21:55.583 [2024-04-16 12:49:54.562537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.583 [2024-04-16 12:49:54.562814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.583 [2024-04-16 12:49:54.562837] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.583 qpair failed and we were unable to recover it. 
00:21:55.583 [2024-04-16 12:49:54.563063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.583 [2024-04-16 12:49:54.563247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.583 [2024-04-16 12:49:54.563294] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.583 qpair failed and we were unable to recover it. 00:21:55.583 [2024-04-16 12:49:54.563533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.583 [2024-04-16 12:49:54.563753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.583 [2024-04-16 12:49:54.563779] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.583 qpair failed and we were unable to recover it. 00:21:55.583 [2024-04-16 12:49:54.564080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.583 [2024-04-16 12:49:54.564341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.583 [2024-04-16 12:49:54.564368] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.583 qpair failed and we were unable to recover it. 00:21:55.583 [2024-04-16 12:49:54.564547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.583 [2024-04-16 12:49:54.564768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.583 [2024-04-16 12:49:54.564796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.583 qpair failed and we were unable to recover it. 00:21:55.583 [2024-04-16 12:49:54.565019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.583 [2024-04-16 12:49:54.565244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.583 [2024-04-16 12:49:54.565271] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.583 qpair failed and we were unable to recover it. 00:21:55.583 [2024-04-16 12:49:54.565511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.583 [2024-04-16 12:49:54.565778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.583 [2024-04-16 12:49:54.565801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.583 qpair failed and we were unable to recover it. 00:21:55.583 [2024-04-16 12:49:54.565982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.583 [2024-04-16 12:49:54.566226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.583 [2024-04-16 12:49:54.566253] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.583 qpair failed and we were unable to recover it. 
00:21:55.583 [2024-04-16 12:49:54.566539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.583 [2024-04-16 12:49:54.566767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.583 [2024-04-16 12:49:54.566789] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.583 qpair failed and we were unable to recover it. 00:21:55.583 [2024-04-16 12:49:54.566997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.583 [2024-04-16 12:49:54.567239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.583 [2024-04-16 12:49:54.567266] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.583 qpair failed and we were unable to recover it. 00:21:55.583 [2024-04-16 12:49:54.567550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.583 [2024-04-16 12:49:54.567756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.583 [2024-04-16 12:49:54.567791] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.583 qpair failed and we were unable to recover it. 00:21:55.583 [2024-04-16 12:49:54.568038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.583 [2024-04-16 12:49:54.568227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.583 [2024-04-16 12:49:54.568255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.583 qpair failed and we were unable to recover it. 00:21:55.583 [2024-04-16 12:49:54.568460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.583 [2024-04-16 12:49:54.568652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.583 [2024-04-16 12:49:54.568681] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.583 qpair failed and we were unable to recover it. 00:21:55.583 [2024-04-16 12:49:54.568881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.583 [2024-04-16 12:49:54.569040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.583 [2024-04-16 12:49:54.569067] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.583 qpair failed and we were unable to recover it. 00:21:55.583 [2024-04-16 12:49:54.569335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.583 [2024-04-16 12:49:54.569521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.583 [2024-04-16 12:49:54.569549] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.583 qpair failed and we were unable to recover it. 
00:21:55.583 [2024-04-16 12:49:54.569795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.583 [2024-04-16 12:49:54.569965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.583 [2024-04-16 12:49:54.569992] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.583 qpair failed and we were unable to recover it. 00:21:55.583 [2024-04-16 12:49:54.570278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.583 [2024-04-16 12:49:54.570476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.583 [2024-04-16 12:49:54.570503] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.583 qpair failed and we were unable to recover it. 00:21:55.583 [2024-04-16 12:49:54.570784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.583 [2024-04-16 12:49:54.571094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.583 [2024-04-16 12:49:54.571121] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.583 qpair failed and we were unable to recover it. 00:21:55.583 [2024-04-16 12:49:54.571378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.583 [2024-04-16 12:49:54.571683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.583 [2024-04-16 12:49:54.571711] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.583 qpair failed and we were unable to recover it. 00:21:55.583 [2024-04-16 12:49:54.572006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.583 [2024-04-16 12:49:54.572283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.583 [2024-04-16 12:49:54.572310] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.583 qpair failed and we were unable to recover it. 00:21:55.583 [2024-04-16 12:49:54.572609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.583 [2024-04-16 12:49:54.572881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.583 [2024-04-16 12:49:54.572907] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.583 qpair failed and we were unable to recover it. 00:21:55.583 [2024-04-16 12:49:54.573121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.583 [2024-04-16 12:49:54.573362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.583 [2024-04-16 12:49:54.573389] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.583 qpair failed and we were unable to recover it. 
00:21:55.583 [2024-04-16 12:49:54.573684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.583 [2024-04-16 12:49:54.573925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.583 [2024-04-16 12:49:54.573951] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.583 qpair failed and we were unable to recover it. 00:21:55.583 [2024-04-16 12:49:54.574283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.583 [2024-04-16 12:49:54.574537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.583 [2024-04-16 12:49:54.574573] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.583 qpair failed and we were unable to recover it. 00:21:55.583 [2024-04-16 12:49:54.574800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.583 [2024-04-16 12:49:54.575030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.583 [2024-04-16 12:49:54.575079] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.583 qpair failed and we were unable to recover it. 00:21:55.583 [2024-04-16 12:49:54.575282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.583 [2024-04-16 12:49:54.575459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.583 [2024-04-16 12:49:54.575514] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.583 qpair failed and we were unable to recover it. 00:21:55.583 [2024-04-16 12:49:54.575696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.583 [2024-04-16 12:49:54.575902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.583 [2024-04-16 12:49:54.575963] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.583 qpair failed and we were unable to recover it. 00:21:55.583 [2024-04-16 12:49:54.576146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.583 [2024-04-16 12:49:54.576344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.583 [2024-04-16 12:49:54.576396] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.583 qpair failed and we were unable to recover it. 00:21:55.583 [2024-04-16 12:49:54.576615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.583 [2024-04-16 12:49:54.576756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.583 [2024-04-16 12:49:54.576784] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.583 qpair failed and we were unable to recover it. 
00:21:55.583 [2024-04-16 12:49:54.576937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.583 [2024-04-16 12:49:54.577097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.583 [2024-04-16 12:49:54.577137] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.583 qpair failed and we were unable to recover it. 00:21:55.583 [2024-04-16 12:49:54.577330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.583 [2024-04-16 12:49:54.577506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.583 [2024-04-16 12:49:54.577534] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.583 qpair failed and we were unable to recover it. 00:21:55.583 [2024-04-16 12:49:54.577756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.583 [2024-04-16 12:49:54.577959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.583 [2024-04-16 12:49:54.577998] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.583 qpair failed and we were unable to recover it. 00:21:55.583 [2024-04-16 12:49:54.578195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.583 [2024-04-16 12:49:54.578359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.583 [2024-04-16 12:49:54.578388] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.583 qpair failed and we were unable to recover it. 00:21:55.583 [2024-04-16 12:49:54.578545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.583 [2024-04-16 12:49:54.578746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.583 [2024-04-16 12:49:54.578775] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.583 qpair failed and we were unable to recover it. 00:21:55.583 [2024-04-16 12:49:54.579001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.583 [2024-04-16 12:49:54.579145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.583 [2024-04-16 12:49:54.579200] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.583 qpair failed and we were unable to recover it. 00:21:55.584 [2024-04-16 12:49:54.579383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.584 [2024-04-16 12:49:54.579523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.584 [2024-04-16 12:49:54.579551] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.584 qpair failed and we were unable to recover it. 
00:21:55.584 [2024-04-16 12:49:54.579736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.584 [2024-04-16 12:49:54.579918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.584 [2024-04-16 12:49:54.579969] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.584 qpair failed and we were unable to recover it. 00:21:55.584 [2024-04-16 12:49:54.580129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.584 [2024-04-16 12:49:54.580317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.584 [2024-04-16 12:49:54.580355] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.584 qpair failed and we were unable to recover it. 00:21:55.584 [2024-04-16 12:49:54.580558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.584 [2024-04-16 12:49:54.580748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.584 [2024-04-16 12:49:54.580776] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.584 qpair failed and we were unable to recover it. 00:21:55.584 [2024-04-16 12:49:54.580950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.584 [2024-04-16 12:49:54.581146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.584 [2024-04-16 12:49:54.581194] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.584 qpair failed and we were unable to recover it. 00:21:55.584 [2024-04-16 12:49:54.581399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.584 [2024-04-16 12:49:54.581605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.584 [2024-04-16 12:49:54.581661] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.584 qpair failed and we were unable to recover it. 00:21:55.584 [2024-04-16 12:49:54.581842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.584 [2024-04-16 12:49:54.582081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.584 [2024-04-16 12:49:54.582139] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.584 qpair failed and we were unable to recover it. 00:21:55.584 [2024-04-16 12:49:54.582314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.584 [2024-04-16 12:49:54.582481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.584 [2024-04-16 12:49:54.582510] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.584 qpair failed and we were unable to recover it. 
00:21:55.584 [2024-04-16 12:49:54.582723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.584 [2024-04-16 12:49:54.582883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.584 [2024-04-16 12:49:54.582942] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.584 qpair failed and we were unable to recover it. 00:21:55.584 [2024-04-16 12:49:54.583158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.584 [2024-04-16 12:49:54.583406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.584 [2024-04-16 12:49:54.583454] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.584 qpair failed and we were unable to recover it. 00:21:55.584 [2024-04-16 12:49:54.583642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.584 [2024-04-16 12:49:54.583817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.584 [2024-04-16 12:49:54.583845] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.584 qpair failed and we were unable to recover it. 00:21:55.584 [2024-04-16 12:49:54.584018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.584 [2024-04-16 12:49:54.584211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.584 [2024-04-16 12:49:54.584272] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.584 qpair failed and we were unable to recover it. 00:21:55.584 [2024-04-16 12:49:54.584473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.584 [2024-04-16 12:49:54.584665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.584 [2024-04-16 12:49:54.584694] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.584 qpair failed and we were unable to recover it. 00:21:55.584 [2024-04-16 12:49:54.584877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.584 [2024-04-16 12:49:54.585085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.584 [2024-04-16 12:49:54.585138] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.584 qpair failed and we were unable to recover it. 00:21:55.584 [2024-04-16 12:49:54.585314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.584 [2024-04-16 12:49:54.585459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.584 [2024-04-16 12:49:54.585483] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.584 qpair failed and we were unable to recover it. 
00:21:55.584 [2024-04-16 12:49:54.585647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.584 [2024-04-16 12:49:54.585799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.584 [2024-04-16 12:49:54.585827] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.584 qpair failed and we were unable to recover it. 00:21:55.584 [2024-04-16 12:49:54.586003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.584 [2024-04-16 12:49:54.586220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.584 [2024-04-16 12:49:54.586268] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.584 qpair failed and we were unable to recover it. 00:21:55.584 [2024-04-16 12:49:54.586450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.584 [2024-04-16 12:49:54.586626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.584 [2024-04-16 12:49:54.586655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.584 qpair failed and we were unable to recover it. 00:21:55.584 [2024-04-16 12:49:54.586834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.584 [2024-04-16 12:49:54.587025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.584 [2024-04-16 12:49:54.587068] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.584 qpair failed and we were unable to recover it. 00:21:55.584 [2024-04-16 12:49:54.587329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.584 [2024-04-16 12:49:54.587503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.584 [2024-04-16 12:49:54.587532] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.584 qpair failed and we were unable to recover it. 00:21:55.584 [2024-04-16 12:49:54.587729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.584 [2024-04-16 12:49:54.587911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.584 [2024-04-16 12:49:54.587962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.584 qpair failed and we were unable to recover it. 00:21:55.584 [2024-04-16 12:49:54.588152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.584 [2024-04-16 12:49:54.588341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.584 [2024-04-16 12:49:54.588369] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.584 qpair failed and we were unable to recover it. 
00:21:55.855 [2024-04-16 12:49:54.647092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.855 [2024-04-16 12:49:54.647284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.855 [2024-04-16 12:49:54.647331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.855 qpair failed and we were unable to recover it. 00:21:55.855 [2024-04-16 12:49:54.647536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.855 [2024-04-16 12:49:54.647799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.855 [2024-04-16 12:49:54.647830] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.855 qpair failed and we were unable to recover it. 00:21:55.855 [2024-04-16 12:49:54.647999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.855 [2024-04-16 12:49:54.648182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.855 [2024-04-16 12:49:54.648231] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.855 qpair failed and we were unable to recover it. 00:21:55.855 [2024-04-16 12:49:54.648436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.855 [2024-04-16 12:49:54.648632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.855 [2024-04-16 12:49:54.648690] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.855 qpair failed and we were unable to recover it. 00:21:55.855 [2024-04-16 12:49:54.648921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.855 [2024-04-16 12:49:54.649183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.855 [2024-04-16 12:49:54.649230] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.855 qpair failed and we were unable to recover it. 00:21:55.855 [2024-04-16 12:49:54.649543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.855 [2024-04-16 12:49:54.649793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.855 [2024-04-16 12:49:54.649832] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.855 qpair failed and we were unable to recover it. 00:21:55.855 [2024-04-16 12:49:54.650092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.855 [2024-04-16 12:49:54.650372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.855 [2024-04-16 12:49:54.650420] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.855 qpair failed and we were unable to recover it. 
00:21:55.855 [2024-04-16 12:49:54.650716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.855 [2024-04-16 12:49:54.651037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.855 [2024-04-16 12:49:54.651087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.855 qpair failed and we were unable to recover it. 00:21:55.855 [2024-04-16 12:49:54.651352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.855 [2024-04-16 12:49:54.651644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.855 [2024-04-16 12:49:54.651673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.855 qpair failed and we were unable to recover it. 00:21:55.855 [2024-04-16 12:49:54.651947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.855 [2024-04-16 12:49:54.652262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.855 [2024-04-16 12:49:54.652319] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.855 qpair failed and we were unable to recover it. 00:21:55.855 [2024-04-16 12:49:54.652554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.855 [2024-04-16 12:49:54.652854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.855 [2024-04-16 12:49:54.652883] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.855 qpair failed and we were unable to recover it. 00:21:55.855 [2024-04-16 12:49:54.653185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.855 [2024-04-16 12:49:54.653479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.855 [2024-04-16 12:49:54.653530] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.855 qpair failed and we were unable to recover it. 00:21:55.855 [2024-04-16 12:49:54.653854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.855 [2024-04-16 12:49:54.654193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.855 [2024-04-16 12:49:54.654241] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.855 qpair failed and we were unable to recover it. 00:21:55.855 [2024-04-16 12:49:54.654534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.855 [2024-04-16 12:49:54.654852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.855 [2024-04-16 12:49:54.654883] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.855 qpair failed and we were unable to recover it. 
00:21:55.855 [2024-04-16 12:49:54.655194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.855 [2024-04-16 12:49:54.655517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.855 [2024-04-16 12:49:54.655585] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.855 qpair failed and we were unable to recover it. 00:21:55.855 [2024-04-16 12:49:54.655890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.855 [2024-04-16 12:49:54.656222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.855 [2024-04-16 12:49:54.656273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.855 qpair failed and we were unable to recover it. 00:21:55.855 [2024-04-16 12:49:54.656541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.855 [2024-04-16 12:49:54.656865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.855 [2024-04-16 12:49:54.656906] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.855 qpair failed and we were unable to recover it. 00:21:55.855 [2024-04-16 12:49:54.657202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.855 [2024-04-16 12:49:54.657514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.855 [2024-04-16 12:49:54.657560] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.855 qpair failed and we were unable to recover it. 00:21:55.855 [2024-04-16 12:49:54.657869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.855 [2024-04-16 12:49:54.658142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.855 [2024-04-16 12:49:54.658189] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.855 qpair failed and we were unable to recover it. 00:21:55.855 [2024-04-16 12:49:54.658505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.855 [2024-04-16 12:49:54.658841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.855 [2024-04-16 12:49:54.658874] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.855 qpair failed and we were unable to recover it. 00:21:55.855 [2024-04-16 12:49:54.659174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.855 [2024-04-16 12:49:54.659422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.855 [2024-04-16 12:49:54.659470] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.855 qpair failed and we were unable to recover it. 
00:21:55.855 [2024-04-16 12:49:54.659781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.855 [2024-04-16 12:49:54.660099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.855 [2024-04-16 12:49:54.660146] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.855 qpair failed and we were unable to recover it. 00:21:55.855 [2024-04-16 12:49:54.660457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.855 [2024-04-16 12:49:54.660724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.855 [2024-04-16 12:49:54.660754] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.855 qpair failed and we were unable to recover it. 00:21:55.855 [2024-04-16 12:49:54.661025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.855 [2024-04-16 12:49:54.661336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.855 [2024-04-16 12:49:54.661384] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.855 qpair failed and we were unable to recover it. 00:21:55.855 [2024-04-16 12:49:54.661556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.855 [2024-04-16 12:49:54.661833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.855 [2024-04-16 12:49:54.661862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.855 qpair failed and we were unable to recover it. 00:21:55.855 [2024-04-16 12:49:54.662134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.855 [2024-04-16 12:49:54.662410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.856 [2024-04-16 12:49:54.662459] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.856 qpair failed and we were unable to recover it. 00:21:55.856 [2024-04-16 12:49:54.662698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.856 [2024-04-16 12:49:54.663023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.856 [2024-04-16 12:49:54.663073] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.856 qpair failed and we were unable to recover it. 00:21:55.856 [2024-04-16 12:49:54.663393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.856 [2024-04-16 12:49:54.663652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.856 [2024-04-16 12:49:54.663681] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.856 qpair failed and we were unable to recover it. 
00:21:55.856 [2024-04-16 12:49:54.663976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.856 [2024-04-16 12:49:54.664202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.856 [2024-04-16 12:49:54.664251] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.856 qpair failed and we were unable to recover it. 00:21:55.856 [2024-04-16 12:49:54.664538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.856 [2024-04-16 12:49:54.664802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.856 [2024-04-16 12:49:54.664831] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.856 qpair failed and we were unable to recover it. 00:21:55.856 [2024-04-16 12:49:54.665151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.856 [2024-04-16 12:49:54.665408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.856 [2024-04-16 12:49:54.665458] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.856 qpair failed and we were unable to recover it. 00:21:55.856 [2024-04-16 12:49:54.665684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.856 [2024-04-16 12:49:54.665974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.856 [2024-04-16 12:49:54.666025] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.856 qpair failed and we were unable to recover it. 00:21:55.856 [2024-04-16 12:49:54.666328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.856 [2024-04-16 12:49:54.666679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.856 [2024-04-16 12:49:54.666709] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.856 qpair failed and we were unable to recover it. 00:21:55.856 [2024-04-16 12:49:54.666945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.856 [2024-04-16 12:49:54.667259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.856 [2024-04-16 12:49:54.667309] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.856 qpair failed and we were unable to recover it. 00:21:55.856 [2024-04-16 12:49:54.667592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.856 [2024-04-16 12:49:54.667892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.856 [2024-04-16 12:49:54.667922] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.856 qpair failed and we were unable to recover it. 
00:21:55.856 [2024-04-16 12:49:54.668219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.856 [2024-04-16 12:49:54.668553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.856 [2024-04-16 12:49:54.668612] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.856 qpair failed and we were unable to recover it. 00:21:55.856 [2024-04-16 12:49:54.668866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.856 [2024-04-16 12:49:54.669175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.856 [2024-04-16 12:49:54.669225] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.856 qpair failed and we were unable to recover it. 00:21:55.856 [2024-04-16 12:49:54.669535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.856 [2024-04-16 12:49:54.669808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.856 [2024-04-16 12:49:54.669836] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.856 qpair failed and we were unable to recover it. 00:21:55.856 [2024-04-16 12:49:54.670138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.856 [2024-04-16 12:49:54.670445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.856 [2024-04-16 12:49:54.670495] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.856 qpair failed and we were unable to recover it. 00:21:55.856 [2024-04-16 12:49:54.670736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.856 [2024-04-16 12:49:54.670902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.856 [2024-04-16 12:49:54.670930] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.856 qpair failed and we were unable to recover it. 00:21:55.856 [2024-04-16 12:49:54.671237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.856 [2024-04-16 12:49:54.671517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.856 [2024-04-16 12:49:54.671574] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.856 qpair failed and we were unable to recover it. 00:21:55.856 [2024-04-16 12:49:54.671820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.856 [2024-04-16 12:49:54.671990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.856 [2024-04-16 12:49:54.672041] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.856 qpair failed and we were unable to recover it. 
00:21:55.856 [2024-04-16 12:49:54.672298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.856 [2024-04-16 12:49:54.672612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.856 [2024-04-16 12:49:54.672642] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.856 qpair failed and we were unable to recover it. 00:21:55.856 [2024-04-16 12:49:54.672966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.856 [2024-04-16 12:49:54.673267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.856 [2024-04-16 12:49:54.673318] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.856 qpair failed and we were unable to recover it. 00:21:55.856 [2024-04-16 12:49:54.673575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.856 [2024-04-16 12:49:54.673771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.856 [2024-04-16 12:49:54.673799] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.856 qpair failed and we were unable to recover it. 00:21:55.856 [2024-04-16 12:49:54.674108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.856 [2024-04-16 12:49:54.674357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.856 [2024-04-16 12:49:54.674405] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.856 qpair failed and we were unable to recover it. 00:21:55.856 [2024-04-16 12:49:54.674726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.856 [2024-04-16 12:49:54.674992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.856 [2024-04-16 12:49:54.675021] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.856 qpair failed and we were unable to recover it. 00:21:55.856 [2024-04-16 12:49:54.675297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.856 [2024-04-16 12:49:54.675588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.856 [2024-04-16 12:49:54.675617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.856 qpair failed and we were unable to recover it. 00:21:55.856 [2024-04-16 12:49:54.675796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.856 [2024-04-16 12:49:54.676058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.857 [2024-04-16 12:49:54.676111] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.857 qpair failed and we were unable to recover it. 
00:21:55.857 [2024-04-16 12:49:54.676392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.857 [2024-04-16 12:49:54.676704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.857 [2024-04-16 12:49:54.676733] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.857 qpair failed and we were unable to recover it. 00:21:55.857 [2024-04-16 12:49:54.677034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.857 [2024-04-16 12:49:54.677285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.857 [2024-04-16 12:49:54.677330] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.857 qpair failed and we were unable to recover it. 00:21:55.857 [2024-04-16 12:49:54.677588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.857 [2024-04-16 12:49:54.677886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.857 [2024-04-16 12:49:54.677915] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.857 qpair failed and we were unable to recover it. 00:21:55.857 [2024-04-16 12:49:54.678231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.857 [2024-04-16 12:49:54.678477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.857 [2024-04-16 12:49:54.678529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.857 qpair failed and we were unable to recover it. 00:21:55.857 [2024-04-16 12:49:54.678751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.857 [2024-04-16 12:49:54.679069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.857 [2024-04-16 12:49:54.679120] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.857 qpair failed and we were unable to recover it. 00:21:55.857 [2024-04-16 12:49:54.679345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.857 [2024-04-16 12:49:54.679601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.857 [2024-04-16 12:49:54.679631] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.857 qpair failed and we were unable to recover it. 00:21:55.857 [2024-04-16 12:49:54.679953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.857 [2024-04-16 12:49:54.680206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.857 [2024-04-16 12:49:54.680255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.857 qpair failed and we were unable to recover it. 
00:21:55.857 [2024-04-16 12:49:54.680506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.857 [2024-04-16 12:49:54.680763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.857 [2024-04-16 12:49:54.680793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.857 qpair failed and we were unable to recover it. 00:21:55.857 [2024-04-16 12:49:54.681090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.857 [2024-04-16 12:49:54.681403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.857 [2024-04-16 12:49:54.681451] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.857 qpair failed and we were unable to recover it. 00:21:55.857 [2024-04-16 12:49:54.681716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.857 [2024-04-16 12:49:54.681978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.857 [2024-04-16 12:49:54.682028] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.857 qpair failed and we were unable to recover it. 00:21:55.857 [2024-04-16 12:49:54.682307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.857 [2024-04-16 12:49:54.682635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.857 [2024-04-16 12:49:54.682665] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.857 qpair failed and we were unable to recover it. 00:21:55.857 [2024-04-16 12:49:54.682976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.857 [2024-04-16 12:49:54.683247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.857 [2024-04-16 12:49:54.683295] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.857 qpair failed and we were unable to recover it. 00:21:55.857 [2024-04-16 12:49:54.683608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.857 [2024-04-16 12:49:54.683889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.857 [2024-04-16 12:49:54.683918] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.857 qpair failed and we were unable to recover it. 00:21:55.857 [2024-04-16 12:49:54.684196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.857 [2024-04-16 12:49:54.684457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.857 [2024-04-16 12:49:54.684505] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.857 qpair failed and we were unable to recover it. 
00:21:55.857 [2024-04-16 12:49:54.684805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.857 [2024-04-16 12:49:54.685089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.857 [2024-04-16 12:49:54.685141] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.857 qpair failed and we were unable to recover it. 00:21:55.857 [2024-04-16 12:49:54.685432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.857 [2024-04-16 12:49:54.685684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.857 [2024-04-16 12:49:54.685714] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.857 qpair failed and we were unable to recover it. 00:21:55.857 [2024-04-16 12:49:54.685939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.857 [2024-04-16 12:49:54.686229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.857 [2024-04-16 12:49:54.686278] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.857 qpair failed and we were unable to recover it. 00:21:55.857 [2024-04-16 12:49:54.686551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.857 [2024-04-16 12:49:54.686804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.857 [2024-04-16 12:49:54.686832] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.857 qpair failed and we were unable to recover it. 00:21:55.857 [2024-04-16 12:49:54.687136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.857 [2024-04-16 12:49:54.687425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.857 [2024-04-16 12:49:54.687474] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.857 qpair failed and we were unable to recover it. 00:21:55.857 [2024-04-16 12:49:54.687740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.857 [2024-04-16 12:49:54.688069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.857 [2024-04-16 12:49:54.688119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.857 qpair failed and we were unable to recover it. 00:21:55.857 [2024-04-16 12:49:54.688427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.857 [2024-04-16 12:49:54.688733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.857 [2024-04-16 12:49:54.688763] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.857 qpair failed and we were unable to recover it. 
00:21:55.857 [2024-04-16 12:49:54.689030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.857 [2024-04-16 12:49:54.689349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.857 [2024-04-16 12:49:54.689404] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.857 qpair failed and we were unable to recover it. 00:21:55.857 [2024-04-16 12:49:54.689716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.857 [2024-04-16 12:49:54.690039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.857 [2024-04-16 12:49:54.690087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.857 qpair failed and we were unable to recover it. 00:21:55.857 [2024-04-16 12:49:54.690383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.857 [2024-04-16 12:49:54.690694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.857 [2024-04-16 12:49:54.690724] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.857 qpair failed and we were unable to recover it. 00:21:55.857 [2024-04-16 12:49:54.691028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.857 [2024-04-16 12:49:54.691345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.857 [2024-04-16 12:49:54.691396] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.857 qpair failed and we were unable to recover it. 00:21:55.857 [2024-04-16 12:49:54.691718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.857 [2024-04-16 12:49:54.692021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.857 [2024-04-16 12:49:54.692050] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.857 qpair failed and we were unable to recover it. 00:21:55.857 [2024-04-16 12:49:54.692353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.857 [2024-04-16 12:49:54.692589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.857 [2024-04-16 12:49:54.692619] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.857 qpair failed and we were unable to recover it. 00:21:55.857 [2024-04-16 12:49:54.692882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.857 [2024-04-16 12:49:54.693180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.857 [2024-04-16 12:49:54.693230] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.857 qpair failed and we were unable to recover it. 
00:21:55.857 [2024-04-16 12:49:54.693490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.857 [2024-04-16 12:49:54.693756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.858 [2024-04-16 12:49:54.693786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.858 qpair failed and we were unable to recover it. 00:21:55.858 [2024-04-16 12:49:54.694051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.858 [2024-04-16 12:49:54.694365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.858 [2024-04-16 12:49:54.694415] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.858 qpair failed and we were unable to recover it. 00:21:55.858 [2024-04-16 12:49:54.694646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.858 [2024-04-16 12:49:54.694967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.858 [2024-04-16 12:49:54.694998] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.858 qpair failed and we were unable to recover it. 00:21:55.858 [2024-04-16 12:49:54.695264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.858 [2024-04-16 12:49:54.695584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.858 [2024-04-16 12:49:54.695631] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.858 qpair failed and we were unable to recover it. 00:21:55.858 [2024-04-16 12:49:54.695960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.858 [2024-04-16 12:49:54.696279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.858 [2024-04-16 12:49:54.696329] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.858 qpair failed and we were unable to recover it. 00:21:55.858 [2024-04-16 12:49:54.696629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.858 [2024-04-16 12:49:54.696905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.858 [2024-04-16 12:49:54.696934] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.858 qpair failed and we were unable to recover it. 00:21:55.858 [2024-04-16 12:49:54.697227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.858 [2024-04-16 12:49:54.697558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.858 [2024-04-16 12:49:54.697617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.858 qpair failed and we were unable to recover it. 
00:21:55.858 [2024-04-16 12:49:54.697877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.858 [2024-04-16 12:49:54.698193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.858 [2024-04-16 12:49:54.698241] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.858 qpair failed and we were unable to recover it. 00:21:55.858 [2024-04-16 12:49:54.698516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.858 [2024-04-16 12:49:54.698782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.858 [2024-04-16 12:49:54.698811] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.858 qpair failed and we were unable to recover it. 00:21:55.858 [2024-04-16 12:49:54.699113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.858 [2024-04-16 12:49:54.699439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.858 [2024-04-16 12:49:54.699486] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.858 qpair failed and we were unable to recover it. 00:21:55.858 [2024-04-16 12:49:54.699755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.858 [2024-04-16 12:49:54.700027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.858 [2024-04-16 12:49:54.700077] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.858 qpair failed and we were unable to recover it. 00:21:55.858 [2024-04-16 12:49:54.700375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.858 [2024-04-16 12:49:54.700652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.858 [2024-04-16 12:49:54.700682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.858 qpair failed and we were unable to recover it. 00:21:55.858 [2024-04-16 12:49:54.700951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.858 [2024-04-16 12:49:54.701256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.858 [2024-04-16 12:49:54.701305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.858 qpair failed and we were unable to recover it. 00:21:55.858 [2024-04-16 12:49:54.701608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.858 [2024-04-16 12:49:54.701886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.858 [2024-04-16 12:49:54.701915] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.858 qpair failed and we were unable to recover it. 
00:21:55.858 [2024-04-16 12:49:54.702192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.858 [2024-04-16 12:49:54.702480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.858 [2024-04-16 12:49:54.702530] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.858 qpair failed and we were unable to recover it. 00:21:55.858 [2024-04-16 12:49:54.702782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.858 [2024-04-16 12:49:54.703016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.858 [2024-04-16 12:49:54.703067] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.858 qpair failed and we were unable to recover it. 00:21:55.858 [2024-04-16 12:49:54.703378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.858 [2024-04-16 12:49:54.703642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.858 [2024-04-16 12:49:54.703671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.858 qpair failed and we were unable to recover it. 00:21:55.858 [2024-04-16 12:49:54.703967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.858 [2024-04-16 12:49:54.704235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.858 [2024-04-16 12:49:54.704286] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.858 qpair failed and we were unable to recover it. 00:21:55.858 [2024-04-16 12:49:54.704614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.858 [2024-04-16 12:49:54.704921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.858 [2024-04-16 12:49:54.704950] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.858 qpair failed and we were unable to recover it. 00:21:55.858 [2024-04-16 12:49:54.705230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.858 [2024-04-16 12:49:54.705503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.858 [2024-04-16 12:49:54.705554] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.858 qpair failed and we were unable to recover it. 00:21:55.858 [2024-04-16 12:49:54.705856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.858 [2024-04-16 12:49:54.706112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.858 [2024-04-16 12:49:54.706163] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.858 qpair failed and we were unable to recover it. 
00:21:55.858 [2024-04-16 12:49:54.706418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.858 [2024-04-16 12:49:54.706679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.858 [2024-04-16 12:49:54.706709] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.858 qpair failed and we were unable to recover it. 00:21:55.858 [2024-04-16 12:49:54.707083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.858 [2024-04-16 12:49:54.707379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.858 [2024-04-16 12:49:54.707429] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.858 qpair failed and we were unable to recover it. 00:21:55.858 [2024-04-16 12:49:54.707740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.858 [2024-04-16 12:49:54.708081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.858 [2024-04-16 12:49:54.708132] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.858 qpair failed and we were unable to recover it. 00:21:55.858 [2024-04-16 12:49:54.708392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.858 [2024-04-16 12:49:54.708693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.858 [2024-04-16 12:49:54.708723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.858 qpair failed and we were unable to recover it. 00:21:55.858 [2024-04-16 12:49:54.709028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.858 [2024-04-16 12:49:54.709313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.858 [2024-04-16 12:49:54.709364] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.858 qpair failed and we were unable to recover it. 00:21:55.858 [2024-04-16 12:49:54.709629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.858 [2024-04-16 12:49:54.709931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.858 [2024-04-16 12:49:54.709959] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.858 qpair failed and we were unable to recover it. 00:21:55.858 [2024-04-16 12:49:54.710210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.858 [2024-04-16 12:49:54.710470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.858 [2024-04-16 12:49:54.710519] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.858 qpair failed and we were unable to recover it. 
00:21:55.858 [2024-04-16 12:49:54.710840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:55.858 [2024-04-16 12:49:54.711169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:55.858 [2024-04-16 12:49:54.711219] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:55.858 qpair failed and we were unable to recover it.
00:21:55.858 [2024-04-16 12:49:54.711476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:55.858 [2024-04-16 12:49:54.711762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:55.858 [2024-04-16 12:49:54.711792] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:55.858 qpair failed and we were unable to recover it.
[... the same four-line sequence (two posix_sock_create connect() failures with errno = 111, one nvme_tcp_qpair_connect_sock error for tqpair=0x1ff5050 at addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.") repeats for every retry between 12:49:54.712 and 12:49:54.805 ...]
00:21:55.864 [2024-04-16 12:49:54.805735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:55.864 [2024-04-16 12:49:54.806028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:55.864 [2024-04-16 12:49:54.806079] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:55.864 qpair failed and we were unable to recover it.
00:21:55.864 [2024-04-16 12:49:54.806357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.864 [2024-04-16 12:49:54.806687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.864 [2024-04-16 12:49:54.806717] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.864 qpair failed and we were unable to recover it. 00:21:55.864 [2024-04-16 12:49:54.806982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.864 [2024-04-16 12:49:54.807255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.864 [2024-04-16 12:49:54.807303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.864 qpair failed and we were unable to recover it. 00:21:55.864 [2024-04-16 12:49:54.807613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.864 [2024-04-16 12:49:54.807875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.864 [2024-04-16 12:49:54.807904] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.864 qpair failed and we were unable to recover it. 00:21:55.864 [2024-04-16 12:49:54.808194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.864 [2024-04-16 12:49:54.808467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.864 [2024-04-16 12:49:54.808515] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.864 qpair failed and we were unable to recover it. 00:21:55.864 [2024-04-16 12:49:54.808845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.864 [2024-04-16 12:49:54.809169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.864 [2024-04-16 12:49:54.809219] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.864 qpair failed and we were unable to recover it. 00:21:55.864 [2024-04-16 12:49:54.809472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.864 [2024-04-16 12:49:54.809717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.864 [2024-04-16 12:49:54.809747] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.864 qpair failed and we were unable to recover it. 00:21:55.864 [2024-04-16 12:49:54.810051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.864 [2024-04-16 12:49:54.810367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.864 [2024-04-16 12:49:54.810415] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.864 qpair failed and we were unable to recover it. 
00:21:55.864 [2024-04-16 12:49:54.810719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.864 [2024-04-16 12:49:54.811003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.864 [2024-04-16 12:49:54.811057] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.864 qpair failed and we were unable to recover it. 00:21:55.864 [2024-04-16 12:49:54.811327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.864 [2024-04-16 12:49:54.811648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.864 [2024-04-16 12:49:54.811678] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.864 qpair failed and we were unable to recover it. 00:21:55.864 [2024-04-16 12:49:54.811956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.864 [2024-04-16 12:49:54.812224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.864 [2024-04-16 12:49:54.812275] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.864 qpair failed and we were unable to recover it. 00:21:55.864 [2024-04-16 12:49:54.812537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.864 [2024-04-16 12:49:54.812851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.864 [2024-04-16 12:49:54.812881] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.864 qpair failed and we were unable to recover it. 00:21:55.864 [2024-04-16 12:49:54.813241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.864 [2024-04-16 12:49:54.813514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.864 [2024-04-16 12:49:54.813561] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.864 qpair failed and we were unable to recover it. 00:21:55.864 [2024-04-16 12:49:54.813875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.864 [2024-04-16 12:49:54.814184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.864 [2024-04-16 12:49:54.814234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.864 qpair failed and we were unable to recover it. 00:21:55.864 [2024-04-16 12:49:54.814498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.864 [2024-04-16 12:49:54.814809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.864 [2024-04-16 12:49:54.814839] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.864 qpair failed and we were unable to recover it. 
00:21:55.864 [2024-04-16 12:49:54.815089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.864 [2024-04-16 12:49:54.815372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.864 [2024-04-16 12:49:54.815423] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.864 qpair failed and we were unable to recover it. 00:21:55.864 [2024-04-16 12:49:54.815739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.865 [2024-04-16 12:49:54.816063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.865 [2024-04-16 12:49:54.816114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.865 qpair failed and we were unable to recover it. 00:21:55.865 [2024-04-16 12:49:54.816399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.865 [2024-04-16 12:49:54.816677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.865 [2024-04-16 12:49:54.816706] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.865 qpair failed and we were unable to recover it. 00:21:55.865 [2024-04-16 12:49:54.816999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.865 [2024-04-16 12:49:54.817331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.865 [2024-04-16 12:49:54.817380] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.865 qpair failed and we were unable to recover it. 00:21:55.865 [2024-04-16 12:49:54.817691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.865 [2024-04-16 12:49:54.817936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.865 [2024-04-16 12:49:54.817965] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.865 qpair failed and we were unable to recover it. 00:21:55.865 [2024-04-16 12:49:54.818240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.865 [2024-04-16 12:49:54.818554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.865 [2024-04-16 12:49:54.818616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.865 qpair failed and we were unable to recover it. 00:21:55.865 [2024-04-16 12:49:54.818918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.865 [2024-04-16 12:49:54.819184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.865 [2024-04-16 12:49:54.819235] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.865 qpair failed and we were unable to recover it. 
00:21:55.865 [2024-04-16 12:49:54.819508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.865 [2024-04-16 12:49:54.819765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.865 [2024-04-16 12:49:54.819795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.865 qpair failed and we were unable to recover it. 00:21:55.865 [2024-04-16 12:49:54.820108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.865 [2024-04-16 12:49:54.820413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.865 [2024-04-16 12:49:54.820463] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.865 qpair failed and we were unable to recover it. 00:21:55.865 [2024-04-16 12:49:54.820766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.865 [2024-04-16 12:49:54.821085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.865 [2024-04-16 12:49:54.821135] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.865 qpair failed and we were unable to recover it. 00:21:55.865 [2024-04-16 12:49:54.821445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.865 [2024-04-16 12:49:54.821727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.865 [2024-04-16 12:49:54.821757] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.865 qpair failed and we were unable to recover it. 00:21:55.865 [2024-04-16 12:49:54.822041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.865 [2024-04-16 12:49:54.822305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.865 [2024-04-16 12:49:54.822357] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.865 qpair failed and we were unable to recover it. 00:21:55.865 [2024-04-16 12:49:54.822656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.865 [2024-04-16 12:49:54.822929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.865 [2024-04-16 12:49:54.822958] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.865 qpair failed and we were unable to recover it. 00:21:55.865 [2024-04-16 12:49:54.823259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.865 [2024-04-16 12:49:54.823590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.865 [2024-04-16 12:49:54.823656] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.865 qpair failed and we were unable to recover it. 
00:21:55.865 [2024-04-16 12:49:54.823970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.865 [2024-04-16 12:49:54.824288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.865 [2024-04-16 12:49:54.824337] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.865 qpair failed and we were unable to recover it. 00:21:55.865 [2024-04-16 12:49:54.824641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.865 [2024-04-16 12:49:54.825044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.865 [2024-04-16 12:49:54.825089] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.865 qpair failed and we were unable to recover it. 00:21:55.865 [2024-04-16 12:49:54.825413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.865 [2024-04-16 12:49:54.825677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.865 [2024-04-16 12:49:54.825708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.865 qpair failed and we were unable to recover it. 00:21:55.865 [2024-04-16 12:49:54.825961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.865 [2024-04-16 12:49:54.826216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.865 [2024-04-16 12:49:54.826264] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.865 qpair failed and we were unable to recover it. 00:21:55.865 [2024-04-16 12:49:54.826596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.865 [2024-04-16 12:49:54.826878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.865 [2024-04-16 12:49:54.826907] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.865 qpair failed and we were unable to recover it. 00:21:55.865 [2024-04-16 12:49:54.827175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.865 [2024-04-16 12:49:54.827395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.865 [2024-04-16 12:49:54.827451] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.865 qpair failed and we were unable to recover it. 00:21:55.865 [2024-04-16 12:49:54.827682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.865 [2024-04-16 12:49:54.827949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.865 [2024-04-16 12:49:54.827998] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.865 qpair failed and we were unable to recover it. 
00:21:55.865 [2024-04-16 12:49:54.828263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.865 [2024-04-16 12:49:54.828533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.865 [2024-04-16 12:49:54.828595] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.865 qpair failed and we were unable to recover it. 00:21:55.865 [2024-04-16 12:49:54.828861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.865 [2024-04-16 12:49:54.829094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.865 [2024-04-16 12:49:54.829142] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.865 qpair failed and we were unable to recover it. 00:21:55.865 [2024-04-16 12:49:54.829348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.865 [2024-04-16 12:49:54.829559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.865 [2024-04-16 12:49:54.829598] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.865 qpair failed and we were unable to recover it. 00:21:55.865 [2024-04-16 12:49:54.829858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.865 [2024-04-16 12:49:54.830119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.865 [2024-04-16 12:49:54.830166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.865 qpair failed and we were unable to recover it. 00:21:55.865 [2024-04-16 12:49:54.830367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.865 [2024-04-16 12:49:54.830593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.866 [2024-04-16 12:49:54.830623] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.866 qpair failed and we were unable to recover it. 00:21:55.866 [2024-04-16 12:49:54.830929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.866 [2024-04-16 12:49:54.831170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.866 [2024-04-16 12:49:54.831218] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.866 qpair failed and we were unable to recover it. 00:21:55.866 [2024-04-16 12:49:54.831472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.866 [2024-04-16 12:49:54.831694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.866 [2024-04-16 12:49:54.831723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.866 qpair failed and we were unable to recover it. 
00:21:55.866 [2024-04-16 12:49:54.832019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.866 [2024-04-16 12:49:54.832330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.866 [2024-04-16 12:49:54.832379] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.866 qpair failed and we were unable to recover it. 00:21:55.866 [2024-04-16 12:49:54.832691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.866 [2024-04-16 12:49:54.832941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.866 [2024-04-16 12:49:54.832970] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.866 qpair failed and we were unable to recover it. 00:21:55.866 [2024-04-16 12:49:54.833255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.866 [2024-04-16 12:49:54.833595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.866 [2024-04-16 12:49:54.833667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.866 qpair failed and we were unable to recover it. 00:21:55.866 [2024-04-16 12:49:54.833937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.866 [2024-04-16 12:49:54.834148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.866 [2024-04-16 12:49:54.834206] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.866 qpair failed and we were unable to recover it. 00:21:55.866 [2024-04-16 12:49:54.834475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.866 [2024-04-16 12:49:54.834750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.866 [2024-04-16 12:49:54.834780] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.866 qpair failed and we were unable to recover it. 00:21:55.866 [2024-04-16 12:49:54.835045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.866 [2024-04-16 12:49:54.835319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.866 [2024-04-16 12:49:54.835367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.866 qpair failed and we were unable to recover it. 00:21:55.866 [2024-04-16 12:49:54.835669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.866 [2024-04-16 12:49:54.835926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.866 [2024-04-16 12:49:54.835955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.866 qpair failed and we were unable to recover it. 
00:21:55.866 [2024-04-16 12:49:54.836268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.866 [2024-04-16 12:49:54.836594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.866 [2024-04-16 12:49:54.836644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.866 qpair failed and we were unable to recover it. 00:21:55.866 [2024-04-16 12:49:54.836954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.866 [2024-04-16 12:49:54.837263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.866 [2024-04-16 12:49:54.837319] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.866 qpair failed and we were unable to recover it. 00:21:55.866 [2024-04-16 12:49:54.837613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.866 [2024-04-16 12:49:54.837868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.866 [2024-04-16 12:49:54.837896] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.866 qpair failed and we were unable to recover it. 00:21:55.866 [2024-04-16 12:49:54.838170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.866 [2024-04-16 12:49:54.838512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.866 [2024-04-16 12:49:54.838571] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.866 qpair failed and we were unable to recover it. 00:21:55.866 [2024-04-16 12:49:54.838842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.866 [2024-04-16 12:49:54.839122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.866 [2024-04-16 12:49:54.839178] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.866 qpair failed and we were unable to recover it. 00:21:55.866 [2024-04-16 12:49:54.839486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.866 [2024-04-16 12:49:54.839783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.866 [2024-04-16 12:49:54.839814] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.866 qpair failed and we were unable to recover it. 00:21:55.866 [2024-04-16 12:49:54.840119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.866 [2024-04-16 12:49:54.840390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.866 [2024-04-16 12:49:54.840441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.866 qpair failed and we were unable to recover it. 
00:21:55.866 [2024-04-16 12:49:54.840718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.866 [2024-04-16 12:49:54.841022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.866 [2024-04-16 12:49:54.841085] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.866 qpair failed and we were unable to recover it. 00:21:55.866 [2024-04-16 12:49:54.841333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.866 [2024-04-16 12:49:54.841558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.866 [2024-04-16 12:49:54.841594] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.866 qpair failed and we were unable to recover it. 00:21:55.866 [2024-04-16 12:49:54.841888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.866 [2024-04-16 12:49:54.842207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.866 [2024-04-16 12:49:54.842256] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.866 qpair failed and we were unable to recover it. 00:21:55.866 [2024-04-16 12:49:54.842475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.866 [2024-04-16 12:49:54.842689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.866 [2024-04-16 12:49:54.842718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.866 qpair failed and we were unable to recover it. 00:21:55.866 [2024-04-16 12:49:54.842997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.866 [2024-04-16 12:49:54.843283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.866 [2024-04-16 12:49:54.843331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.866 qpair failed and we were unable to recover it. 00:21:55.866 [2024-04-16 12:49:54.843550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.866 [2024-04-16 12:49:54.843832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.866 [2024-04-16 12:49:54.843862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.866 qpair failed and we were unable to recover it. 00:21:55.866 [2024-04-16 12:49:54.844111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.866 [2024-04-16 12:49:54.844379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.866 [2024-04-16 12:49:54.844430] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.866 qpair failed and we were unable to recover it. 
00:21:55.866 [2024-04-16 12:49:54.844737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.866 [2024-04-16 12:49:54.845053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.866 [2024-04-16 12:49:54.845104] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.866 qpair failed and we were unable to recover it. 00:21:55.866 [2024-04-16 12:49:54.845420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.866 [2024-04-16 12:49:54.845701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.866 [2024-04-16 12:49:54.845730] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.866 qpair failed and we were unable to recover it. 00:21:55.866 [2024-04-16 12:49:54.846023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.866 [2024-04-16 12:49:54.846273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.866 [2024-04-16 12:49:54.846321] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.866 qpair failed and we were unable to recover it. 00:21:55.866 [2024-04-16 12:49:54.846574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.866 [2024-04-16 12:49:54.846860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.866 [2024-04-16 12:49:54.846889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.866 qpair failed and we were unable to recover it. 00:21:55.866 [2024-04-16 12:49:54.847107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.866 [2024-04-16 12:49:54.847394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.866 [2024-04-16 12:49:54.847446] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.866 qpair failed and we were unable to recover it. 00:21:55.866 [2024-04-16 12:49:54.847742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.866 [2024-04-16 12:49:54.848059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.867 [2024-04-16 12:49:54.848107] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.867 qpair failed and we were unable to recover it. 00:21:55.867 [2024-04-16 12:49:54.848418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.867 [2024-04-16 12:49:54.848735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.867 [2024-04-16 12:49:54.848765] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.867 qpair failed and we were unable to recover it. 
00:21:55.867 [2024-04-16 12:49:54.849049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.867 [2024-04-16 12:49:54.849356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.867 [2024-04-16 12:49:54.849406] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.867 qpair failed and we were unable to recover it. 00:21:55.867 [2024-04-16 12:49:54.849708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.867 [2024-04-16 12:49:54.850008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.867 [2024-04-16 12:49:54.850037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.867 qpair failed and we were unable to recover it. 00:21:55.867 [2024-04-16 12:49:54.850352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.867 [2024-04-16 12:49:54.850670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.867 [2024-04-16 12:49:54.850700] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.867 qpair failed and we were unable to recover it. 00:21:55.867 [2024-04-16 12:49:54.850998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.867 [2024-04-16 12:49:54.851232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.867 [2024-04-16 12:49:54.851285] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.867 qpair failed and we were unable to recover it. 00:21:55.867 [2024-04-16 12:49:54.851556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.867 [2024-04-16 12:49:54.851838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.867 [2024-04-16 12:49:54.851867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.867 qpair failed and we were unable to recover it. 00:21:55.867 [2024-04-16 12:49:54.852161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.867 [2024-04-16 12:49:54.852482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.867 [2024-04-16 12:49:54.852531] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.867 qpair failed and we were unable to recover it. 00:21:55.867 [2024-04-16 12:49:54.852807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.867 [2024-04-16 12:49:54.853113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.867 [2024-04-16 12:49:54.853168] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.867 qpair failed and we were unable to recover it. 
00:21:55.867 [2024-04-16 12:49:54.853470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.867 [2024-04-16 12:49:54.853789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.867 [2024-04-16 12:49:54.853820] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.867 qpair failed and we were unable to recover it. 00:21:55.867 [2024-04-16 12:49:54.854045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.867 [2024-04-16 12:49:54.854295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.867 [2024-04-16 12:49:54.854343] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.867 qpair failed and we were unable to recover it. 00:21:55.867 [2024-04-16 12:49:54.854618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.867 [2024-04-16 12:49:54.854932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.867 [2024-04-16 12:49:54.854962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.867 qpair failed and we were unable to recover it. 00:21:55.867 [2024-04-16 12:49:54.855234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.867 [2024-04-16 12:49:54.855518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.867 [2024-04-16 12:49:54.855575] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.867 qpair failed and we were unable to recover it. 00:21:55.867 [2024-04-16 12:49:54.855823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.867 [2024-04-16 12:49:54.856147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.867 [2024-04-16 12:49:54.856205] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.867 qpair failed and we were unable to recover it. 00:21:55.867 [2024-04-16 12:49:54.856419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.867 [2024-04-16 12:49:54.856667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.867 [2024-04-16 12:49:54.856697] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.867 qpair failed and we were unable to recover it. 00:21:55.867 [2024-04-16 12:49:54.856912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.867 [2024-04-16 12:49:54.857181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.867 [2024-04-16 12:49:54.857229] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.867 qpair failed and we were unable to recover it. 
00:21:55.867 [2024-04-16 12:49:54.857551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.867 [2024-04-16 12:49:54.857933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.867 [2024-04-16 12:49:54.857991] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.867 qpair failed and we were unable to recover it. 00:21:55.867 [2024-04-16 12:49:54.858265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.867 [2024-04-16 12:49:54.858557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.867 [2024-04-16 12:49:54.858625] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.867 qpair failed and we were unable to recover it. 00:21:55.867 [2024-04-16 12:49:54.858845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.867 [2024-04-16 12:49:54.859115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.867 [2024-04-16 12:49:54.859164] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.867 qpair failed and we were unable to recover it. 00:21:55.867 [2024-04-16 12:49:54.859462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.867 [2024-04-16 12:49:54.859761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.867 [2024-04-16 12:49:54.859792] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.867 qpair failed and we were unable to recover it. 00:21:55.867 [2024-04-16 12:49:54.860104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.867 [2024-04-16 12:49:54.860442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.867 [2024-04-16 12:49:54.860493] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.867 qpair failed and we were unable to recover it. 00:21:55.867 [2024-04-16 12:49:54.860775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.867 [2024-04-16 12:49:54.861091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.867 [2024-04-16 12:49:54.861142] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.867 qpair failed and we were unable to recover it. 00:21:55.867 [2024-04-16 12:49:54.861448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.867 [2024-04-16 12:49:54.861720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.867 [2024-04-16 12:49:54.861749] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.867 qpair failed and we were unable to recover it. 
00:21:55.867 [2024-04-16 12:49:54.861996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.867 [2024-04-16 12:49:54.862262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.867 [2024-04-16 12:49:54.862316] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.867 qpair failed and we were unable to recover it. 00:21:55.867 [2024-04-16 12:49:54.862625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.867 [2024-04-16 12:49:54.862949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.867 [2024-04-16 12:49:54.862980] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.867 qpair failed and we were unable to recover it. 00:21:55.867 [2024-04-16 12:49:54.863281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.867 [2024-04-16 12:49:54.863574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.867 [2024-04-16 12:49:54.863614] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.867 qpair failed and we were unable to recover it. 00:21:55.867 [2024-04-16 12:49:54.863922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.867 [2024-04-16 12:49:54.864195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.867 [2024-04-16 12:49:54.864246] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.867 qpair failed and we were unable to recover it. 00:21:55.867 [2024-04-16 12:49:54.864505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.867 [2024-04-16 12:49:54.864736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.867 [2024-04-16 12:49:54.864766] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.867 qpair failed and we were unable to recover it. 00:21:55.867 [2024-04-16 12:49:54.865047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.867 [2024-04-16 12:49:54.865333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.867 [2024-04-16 12:49:54.865386] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.867 qpair failed and we were unable to recover it. 00:21:55.867 [2024-04-16 12:49:54.865646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.868 [2024-04-16 12:49:54.865907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.868 [2024-04-16 12:49:54.865936] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:55.868 qpair failed and we were unable to recover it. 
00:21:55.868 [2024-04-16 12:49:54.866197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:55.868 [2024-04-16 12:49:54.866469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:55.868 [2024-04-16 12:49:54.866521] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:55.868 qpair failed and we were unable to recover it.
00:21:56.141 [... the same four-line failure cycle (two posix_sock_create connect() errors with errno = 111, one nvme_tcp_qpair_connect_sock error for tqpair=0x1ff5050 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.") repeats back-to-back from 12:49:54.866197 through 12:49:54.954723 ...]
00:21:56.141 [2024-04-16 12:49:54.954899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.141 [2024-04-16 12:49:54.955152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.141 [2024-04-16 12:49:54.955201] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.141 qpair failed and we were unable to recover it. 00:21:56.141 [2024-04-16 12:49:54.955459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.141 [2024-04-16 12:49:54.955647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.141 [2024-04-16 12:49:54.955676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.141 qpair failed and we were unable to recover it. 00:21:56.141 [2024-04-16 12:49:54.955929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.141 [2024-04-16 12:49:54.956198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.141 [2024-04-16 12:49:54.956246] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.141 qpair failed and we were unable to recover it. 00:21:56.141 [2024-04-16 12:49:54.956468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.141 [2024-04-16 12:49:54.956691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.141 [2024-04-16 12:49:54.956720] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.141 qpair failed and we were unable to recover it. 00:21:56.141 [2024-04-16 12:49:54.957039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.141 [2024-04-16 12:49:54.957331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.141 [2024-04-16 12:49:54.957382] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.141 qpair failed and we were unable to recover it. 00:21:56.141 [2024-04-16 12:49:54.957640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.141 [2024-04-16 12:49:54.957873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.141 [2024-04-16 12:49:54.957902] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.141 qpair failed and we were unable to recover it. 00:21:56.141 [2024-04-16 12:49:54.958151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.141 [2024-04-16 12:49:54.958392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.141 [2024-04-16 12:49:54.958439] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.141 qpair failed and we were unable to recover it. 
00:21:56.141 [2024-04-16 12:49:54.958701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.141 [2024-04-16 12:49:54.958916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.141 [2024-04-16 12:49:54.958965] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.141 qpair failed and we were unable to recover it. 00:21:56.141 [2024-04-16 12:49:54.959250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.141 [2024-04-16 12:49:54.959503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.141 [2024-04-16 12:49:54.959530] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.141 qpair failed and we were unable to recover it. 00:21:56.141 [2024-04-16 12:49:54.959707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.141 [2024-04-16 12:49:54.959973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.141 [2024-04-16 12:49:54.960022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.141 qpair failed and we were unable to recover it. 00:21:56.141 [2024-04-16 12:49:54.960291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.141 [2024-04-16 12:49:54.960580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.141 [2024-04-16 12:49:54.960610] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.141 qpair failed and we were unable to recover it. 00:21:56.141 [2024-04-16 12:49:54.960866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.141 [2024-04-16 12:49:54.961105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.141 [2024-04-16 12:49:54.961155] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.141 qpair failed and we were unable to recover it. 00:21:56.141 [2024-04-16 12:49:54.961466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.141 [2024-04-16 12:49:54.961704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.141 [2024-04-16 12:49:54.961733] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.141 qpair failed and we were unable to recover it. 00:21:56.141 [2024-04-16 12:49:54.961966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.141 [2024-04-16 12:49:54.962265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.141 [2024-04-16 12:49:54.962317] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.141 qpair failed and we were unable to recover it. 
00:21:56.141 [2024-04-16 12:49:54.962620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.141 [2024-04-16 12:49:54.962888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.141 [2024-04-16 12:49:54.962917] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.141 qpair failed and we were unable to recover it. 00:21:56.141 [2024-04-16 12:49:54.963147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.141 [2024-04-16 12:49:54.963395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.141 [2024-04-16 12:49:54.963445] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.142 qpair failed and we were unable to recover it. 00:21:56.142 [2024-04-16 12:49:54.963713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.142 [2024-04-16 12:49:54.963998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.142 [2024-04-16 12:49:54.964048] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.142 qpair failed and we were unable to recover it. 00:21:56.142 [2024-04-16 12:49:54.964308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.142 [2024-04-16 12:49:54.964607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.142 [2024-04-16 12:49:54.964637] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.142 qpair failed and we were unable to recover it. 00:21:56.142 [2024-04-16 12:49:54.964897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.142 [2024-04-16 12:49:54.965174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.142 [2024-04-16 12:49:54.965224] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.142 qpair failed and we were unable to recover it. 00:21:56.142 [2024-04-16 12:49:54.965520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.142 [2024-04-16 12:49:54.965761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.142 [2024-04-16 12:49:54.965791] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.142 qpair failed and we were unable to recover it. 00:21:56.142 [2024-04-16 12:49:54.966111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.142 [2024-04-16 12:49:54.966405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.142 [2024-04-16 12:49:54.966453] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.142 qpair failed and we were unable to recover it. 
00:21:56.142 [2024-04-16 12:49:54.966721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.142 [2024-04-16 12:49:54.966950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.142 [2024-04-16 12:49:54.966999] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.142 qpair failed and we were unable to recover it. 00:21:56.142 [2024-04-16 12:49:54.967233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.142 [2024-04-16 12:49:54.967453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.142 [2024-04-16 12:49:54.967513] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.142 qpair failed and we were unable to recover it. 00:21:56.142 [2024-04-16 12:49:54.967801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.142 [2024-04-16 12:49:54.968101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.142 [2024-04-16 12:49:54.968151] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.142 qpair failed and we were unable to recover it. 00:21:56.142 [2024-04-16 12:49:54.968415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.142 [2024-04-16 12:49:54.968708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.142 [2024-04-16 12:49:54.968738] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.142 qpair failed and we were unable to recover it. 00:21:56.142 [2024-04-16 12:49:54.969000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.142 [2024-04-16 12:49:54.969333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.142 [2024-04-16 12:49:54.969385] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.142 qpair failed and we were unable to recover it. 00:21:56.142 [2024-04-16 12:49:54.969626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.142 [2024-04-16 12:49:54.969864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.142 [2024-04-16 12:49:54.969893] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.142 qpair failed and we were unable to recover it. 00:21:56.142 [2024-04-16 12:49:54.970164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.142 [2024-04-16 12:49:54.970323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.142 [2024-04-16 12:49:54.970375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.142 qpair failed and we were unable to recover it. 
00:21:56.142 [2024-04-16 12:49:54.970595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.142 [2024-04-16 12:49:54.970857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.142 [2024-04-16 12:49:54.970886] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.142 qpair failed and we were unable to recover it. 00:21:56.142 [2024-04-16 12:49:54.971161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.142 [2024-04-16 12:49:54.971416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.142 [2024-04-16 12:49:54.971467] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.142 qpair failed and we were unable to recover it. 00:21:56.142 [2024-04-16 12:49:54.971768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.142 [2024-04-16 12:49:54.972044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.142 [2024-04-16 12:49:54.972092] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.142 qpair failed and we were unable to recover it. 00:21:56.142 [2024-04-16 12:49:54.972317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.142 [2024-04-16 12:49:54.972586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.142 [2024-04-16 12:49:54.972616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.142 qpair failed and we were unable to recover it. 00:21:56.142 [2024-04-16 12:49:54.972859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.142 [2024-04-16 12:49:54.973152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.142 [2024-04-16 12:49:54.973201] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.142 qpair failed and we were unable to recover it. 00:21:56.142 [2024-04-16 12:49:54.973469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.142 [2024-04-16 12:49:54.973670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.142 [2024-04-16 12:49:54.973698] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.142 qpair failed and we were unable to recover it. 00:21:56.142 [2024-04-16 12:49:54.973933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.142 [2024-04-16 12:49:54.974252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.142 [2024-04-16 12:49:54.974302] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.142 qpair failed and we were unable to recover it. 
00:21:56.142 [2024-04-16 12:49:54.974574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.142 [2024-04-16 12:49:54.974868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.142 [2024-04-16 12:49:54.974897] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.142 qpair failed and we were unable to recover it. 00:21:56.142 [2024-04-16 12:49:54.975132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.142 [2024-04-16 12:49:54.975436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.142 [2024-04-16 12:49:54.975486] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.142 qpair failed and we were unable to recover it. 00:21:56.142 [2024-04-16 12:49:54.975727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.142 [2024-04-16 12:49:54.976010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.142 [2024-04-16 12:49:54.976059] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.142 qpair failed and we were unable to recover it. 00:21:56.142 [2024-04-16 12:49:54.976363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.142 [2024-04-16 12:49:54.976581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.142 [2024-04-16 12:49:54.976615] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.142 qpair failed and we were unable to recover it. 00:21:56.142 [2024-04-16 12:49:54.976875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.142 [2024-04-16 12:49:54.977185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.142 [2024-04-16 12:49:54.977235] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.142 qpair failed and we were unable to recover it. 00:21:56.142 [2024-04-16 12:49:54.977546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.142 [2024-04-16 12:49:54.977836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.143 [2024-04-16 12:49:54.977866] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.143 qpair failed and we were unable to recover it. 00:21:56.143 [2024-04-16 12:49:54.978120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.143 [2024-04-16 12:49:54.978304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.143 [2024-04-16 12:49:54.978353] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.143 qpair failed and we were unable to recover it. 
00:21:56.143 [2024-04-16 12:49:54.978579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.143 [2024-04-16 12:49:54.978869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.143 [2024-04-16 12:49:54.978898] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.143 qpair failed and we were unable to recover it. 00:21:56.143 [2024-04-16 12:49:54.979202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.143 [2024-04-16 12:49:54.979452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.143 [2024-04-16 12:49:54.979499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.143 qpair failed and we were unable to recover it. 00:21:56.143 [2024-04-16 12:49:54.979775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.143 [2024-04-16 12:49:54.979965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.143 [2024-04-16 12:49:54.980025] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.143 qpair failed and we were unable to recover it. 00:21:56.143 [2024-04-16 12:49:54.980298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.143 [2024-04-16 12:49:54.980546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.143 [2024-04-16 12:49:54.980604] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.143 qpair failed and we were unable to recover it. 00:21:56.143 [2024-04-16 12:49:54.980905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.143 [2024-04-16 12:49:54.981143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.143 [2024-04-16 12:49:54.981194] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.143 qpair failed and we were unable to recover it. 00:21:56.143 [2024-04-16 12:49:54.981481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.143 [2024-04-16 12:49:54.981749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.143 [2024-04-16 12:49:54.981778] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.143 qpair failed and we were unable to recover it. 00:21:56.143 [2024-04-16 12:49:54.982108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.143 [2024-04-16 12:49:54.982393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.143 [2024-04-16 12:49:54.982447] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.143 qpair failed and we were unable to recover it. 
00:21:56.143 [2024-04-16 12:49:54.982700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.143 [2024-04-16 12:49:54.982865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.143 [2024-04-16 12:49:54.982893] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.143 qpair failed and we were unable to recover it. 00:21:56.143 [2024-04-16 12:49:54.983125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.143 [2024-04-16 12:49:54.983428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.143 [2024-04-16 12:49:54.983478] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.143 qpair failed and we were unable to recover it. 00:21:56.143 [2024-04-16 12:49:54.983741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.143 [2024-04-16 12:49:54.983967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.143 [2024-04-16 12:49:54.984021] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.143 qpair failed and we were unable to recover it. 00:21:56.143 [2024-04-16 12:49:54.984256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.143 [2024-04-16 12:49:54.984467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.143 [2024-04-16 12:49:54.984496] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.143 qpair failed and we were unable to recover it. 00:21:56.143 [2024-04-16 12:49:54.984772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.143 [2024-04-16 12:49:54.985048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.143 [2024-04-16 12:49:54.985102] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.143 qpair failed and we were unable to recover it. 00:21:56.143 [2024-04-16 12:49:54.985370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.143 [2024-04-16 12:49:54.985600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.143 [2024-04-16 12:49:54.985630] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.143 qpair failed and we were unable to recover it. 00:21:56.143 [2024-04-16 12:49:54.985882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.143 [2024-04-16 12:49:54.986123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.143 [2024-04-16 12:49:54.986172] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.143 qpair failed and we were unable to recover it. 
00:21:56.143 [2024-04-16 12:49:54.986405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.143 [2024-04-16 12:49:54.986668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.143 [2024-04-16 12:49:54.986697] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.143 qpair failed and we were unable to recover it. 00:21:56.143 [2024-04-16 12:49:54.986870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.143 [2024-04-16 12:49:54.987160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.143 [2024-04-16 12:49:54.987211] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.143 qpair failed and we were unable to recover it. 00:21:56.143 [2024-04-16 12:49:54.987437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.143 [2024-04-16 12:49:54.987690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.143 [2024-04-16 12:49:54.987720] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.143 qpair failed and we were unable to recover it. 00:21:56.143 [2024-04-16 12:49:54.987970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.143 [2024-04-16 12:49:54.988180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.143 [2024-04-16 12:49:54.988229] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.143 qpair failed and we were unable to recover it. 00:21:56.143 [2024-04-16 12:49:54.988452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.143 [2024-04-16 12:49:54.988629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.143 [2024-04-16 12:49:54.988658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.143 qpair failed and we were unable to recover it. 00:21:56.143 [2024-04-16 12:49:54.988962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.143 [2024-04-16 12:49:54.989280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.143 [2024-04-16 12:49:54.989338] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.143 qpair failed and we were unable to recover it. 00:21:56.143 [2024-04-16 12:49:54.989608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.143 [2024-04-16 12:49:54.989893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.143 [2024-04-16 12:49:54.989953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.143 qpair failed and we were unable to recover it. 
00:21:56.144 [2024-04-16 12:49:54.990219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.144 [2024-04-16 12:49:54.990470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.144 [2024-04-16 12:49:54.990530] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.144 qpair failed and we were unable to recover it. 00:21:56.144 [2024-04-16 12:49:54.990687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.144 [2024-04-16 12:49:54.990940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.144 [2024-04-16 12:49:54.990990] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.144 qpair failed and we were unable to recover it. 00:21:56.144 [2024-04-16 12:49:54.991248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.144 [2024-04-16 12:49:54.991517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.144 [2024-04-16 12:49:54.991584] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.144 qpair failed and we were unable to recover it. 00:21:56.144 [2024-04-16 12:49:54.991894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.144 [2024-04-16 12:49:54.992128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.144 [2024-04-16 12:49:54.992177] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.144 qpair failed and we were unable to recover it. 00:21:56.144 [2024-04-16 12:49:54.992479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.144 [2024-04-16 12:49:54.992738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.144 [2024-04-16 12:49:54.992768] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.144 qpair failed and we were unable to recover it. 00:21:56.144 [2024-04-16 12:49:54.993038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.144 [2024-04-16 12:49:54.993289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.144 [2024-04-16 12:49:54.993337] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.144 qpair failed and we were unable to recover it. 00:21:56.144 [2024-04-16 12:49:54.993604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.144 [2024-04-16 12:49:54.993857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.144 [2024-04-16 12:49:54.993885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.144 qpair failed and we were unable to recover it. 
00:21:56.144 [2024-04-16 12:49:54.994121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.144 [2024-04-16 12:49:54.994431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.144 [2024-04-16 12:49:54.994499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.144 qpair failed and we were unable to recover it. 00:21:56.144 [2024-04-16 12:49:54.994798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.144 [2024-04-16 12:49:54.995076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.144 [2024-04-16 12:49:54.995128] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.144 qpair failed and we were unable to recover it. 00:21:56.144 [2024-04-16 12:49:54.995396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.144 [2024-04-16 12:49:54.995701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.144 [2024-04-16 12:49:54.995731] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.144 qpair failed and we were unable to recover it. 00:21:56.144 [2024-04-16 12:49:54.996036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.144 [2024-04-16 12:49:54.996340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.144 [2024-04-16 12:49:54.996400] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.144 qpair failed and we were unable to recover it. 00:21:56.144 [2024-04-16 12:49:54.996671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.144 [2024-04-16 12:49:54.996844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.144 [2024-04-16 12:49:54.996872] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.144 qpair failed and we were unable to recover it. 00:21:56.144 [2024-04-16 12:49:54.997147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.144 [2024-04-16 12:49:54.997466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.144 [2024-04-16 12:49:54.997515] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.144 qpair failed and we were unable to recover it. 00:21:56.144 [2024-04-16 12:49:54.997766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.144 [2024-04-16 12:49:54.998031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.144 [2024-04-16 12:49:54.998082] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.144 qpair failed and we were unable to recover it. 
00:21:56.144 [2024-04-16 12:49:54.998315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.144 [2024-04-16 12:49:54.998475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.144 [2024-04-16 12:49:54.998503] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.144 qpair failed and we were unable to recover it. 00:21:56.144 [2024-04-16 12:49:54.998750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.144 [2024-04-16 12:49:54.999023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.144 [2024-04-16 12:49:54.999072] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.144 qpair failed and we were unable to recover it. 00:21:56.144 [2024-04-16 12:49:54.999344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.144 [2024-04-16 12:49:54.999617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.144 [2024-04-16 12:49:54.999646] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.144 qpair failed and we were unable to recover it. 00:21:56.144 [2024-04-16 12:49:54.999880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.144 [2024-04-16 12:49:55.000161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.144 [2024-04-16 12:49:55.000213] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.144 qpair failed and we were unable to recover it. 00:21:56.144 [2024-04-16 12:49:55.000459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.144 [2024-04-16 12:49:55.000726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.144 [2024-04-16 12:49:55.000757] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.144 qpair failed and we were unable to recover it. 00:21:56.144 [2024-04-16 12:49:55.001017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.144 [2024-04-16 12:49:55.001248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.144 [2024-04-16 12:49:55.001297] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.144 qpair failed and we were unable to recover it. 00:21:56.144 [2024-04-16 12:49:55.001530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.144 [2024-04-16 12:49:55.001707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.144 [2024-04-16 12:49:55.001736] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.144 qpair failed and we were unable to recover it. 
00:21:56.144 [2024-04-16 12:49:55.002004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.144 [2024-04-16 12:49:55.002339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.144 [2024-04-16 12:49:55.002388] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.144 qpair failed and we were unable to recover it. 00:21:56.144 [2024-04-16 12:49:55.002657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.144 [2024-04-16 12:49:55.002894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.144 [2024-04-16 12:49:55.002923] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.144 qpair failed and we were unable to recover it. 00:21:56.144 [2024-04-16 12:49:55.003235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.144 [2024-04-16 12:49:55.003477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.144 [2024-04-16 12:49:55.003529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.144 qpair failed and we were unable to recover it. 00:21:56.144 [2024-04-16 12:49:55.003762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.144 [2024-04-16 12:49:55.003961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.144 [2024-04-16 12:49:55.004009] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.144 qpair failed and we were unable to recover it. 00:21:56.144 [2024-04-16 12:49:55.004226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.144 [2024-04-16 12:49:55.004521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.144 [2024-04-16 12:49:55.004580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.144 qpair failed and we were unable to recover it. 00:21:56.144 [2024-04-16 12:49:55.004862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.144 [2024-04-16 12:49:55.005162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.144 [2024-04-16 12:49:55.005212] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.144 qpair failed and we were unable to recover it. 00:21:56.144 [2024-04-16 12:49:55.005475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.144 [2024-04-16 12:49:55.005727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.144 [2024-04-16 12:49:55.005757] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.144 qpair failed and we were unable to recover it. 
00:21:56.144 [2024-04-16 12:49:55.006012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.144 [2024-04-16 12:49:55.006266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.144 [2024-04-16 12:49:55.006317] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.144 qpair failed and we were unable to recover it. 00:21:56.145 [2024-04-16 12:49:55.006584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.145 [2024-04-16 12:49:55.006796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.145 [2024-04-16 12:49:55.006825] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.145 qpair failed and we were unable to recover it. 00:21:56.145 [2024-04-16 12:49:55.007095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.145 [2024-04-16 12:49:55.007281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.145 [2024-04-16 12:49:55.007330] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.145 qpair failed and we were unable to recover it. 00:21:56.145 [2024-04-16 12:49:55.007630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.145 [2024-04-16 12:49:55.007945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.145 [2024-04-16 12:49:55.007975] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.145 qpair failed and we were unable to recover it. 00:21:56.145 [2024-04-16 12:49:55.008228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.145 [2024-04-16 12:49:55.008498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.145 [2024-04-16 12:49:55.008547] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.145 qpair failed and we were unable to recover it. 00:21:56.145 [2024-04-16 12:49:55.008815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.145 [2024-04-16 12:49:55.009123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.145 [2024-04-16 12:49:55.009178] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.145 qpair failed and we were unable to recover it. 00:21:56.145 [2024-04-16 12:49:55.009447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.145 [2024-04-16 12:49:55.009718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.145 [2024-04-16 12:49:55.009748] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.145 qpair failed and we were unable to recover it. 
00:21:56.145 [2024-04-16 12:49:55.010047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.145 [2024-04-16 12:49:55.010366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.145 [2024-04-16 12:49:55.010414] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.145 qpair failed and we were unable to recover it. 00:21:56.145 [2024-04-16 12:49:55.010691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.145 [2024-04-16 12:49:55.010946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.145 [2024-04-16 12:49:55.010999] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.145 qpair failed and we were unable to recover it. 00:21:56.145 [2024-04-16 12:49:55.011273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.145 [2024-04-16 12:49:55.011547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.145 [2024-04-16 12:49:55.011585] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.145 qpair failed and we were unable to recover it. 00:21:56.145 [2024-04-16 12:49:55.011840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.145 [2024-04-16 12:49:55.012145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.145 [2024-04-16 12:49:55.012207] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.145 qpair failed and we were unable to recover it. 00:21:56.145 [2024-04-16 12:49:55.012495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.145 [2024-04-16 12:49:55.012774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.145 [2024-04-16 12:49:55.012804] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.145 qpair failed and we were unable to recover it. 00:21:56.145 [2024-04-16 12:49:55.013101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.145 [2024-04-16 12:49:55.013373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.145 [2024-04-16 12:49:55.013423] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.145 qpair failed and we were unable to recover it. 00:21:56.145 [2024-04-16 12:49:55.013740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.145 [2024-04-16 12:49:55.014007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.145 [2024-04-16 12:49:55.014056] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.145 qpair failed and we were unable to recover it. 
00:21:56.150 [2024-04-16 12:49:55.076140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.150 [2024-04-16 12:49:55.076270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.150 [2024-04-16 12:49:55.076310] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.150 qpair failed and we were unable to recover it. 00:21:56.150 [2024-04-16 12:49:55.076451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.150 [2024-04-16 12:49:55.076640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.150 [2024-04-16 12:49:55.076669] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.150 qpair failed and we were unable to recover it. 00:21:56.150 [2024-04-16 12:49:55.076844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.150 [2024-04-16 12:49:55.076975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.150 [2024-04-16 12:49:55.077004] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.150 qpair failed and we were unable to recover it. 00:21:56.150 [2024-04-16 12:49:55.077161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.150 [2024-04-16 12:49:55.077298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.150 [2024-04-16 12:49:55.077326] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.150 qpair failed and we were unable to recover it. 00:21:56.150 [2024-04-16 12:49:55.077478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.150 [2024-04-16 12:49:55.077619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.150 [2024-04-16 12:49:55.077644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.150 qpair failed and we were unable to recover it. 00:21:56.150 [2024-04-16 12:49:55.077818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.150 [2024-04-16 12:49:55.078013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.150 [2024-04-16 12:49:55.078074] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.150 qpair failed and we were unable to recover it. 00:21:56.150 [2024-04-16 12:49:55.078240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.150 [2024-04-16 12:49:55.078374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.150 [2024-04-16 12:49:55.078402] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.150 qpair failed and we were unable to recover it. 
00:21:56.150 [2024-04-16 12:49:55.078596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.151 [2024-04-16 12:49:55.078726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.151 [2024-04-16 12:49:55.078759] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.151 qpair failed and we were unable to recover it. 00:21:56.151 [2024-04-16 12:49:55.078947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.151 [2024-04-16 12:49:55.079120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.151 [2024-04-16 12:49:55.079180] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.151 qpair failed and we were unable to recover it. 00:21:56.151 [2024-04-16 12:49:55.079342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.151 [2024-04-16 12:49:55.079506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.151 [2024-04-16 12:49:55.079535] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.151 qpair failed and we were unable to recover it. 00:21:56.151 [2024-04-16 12:49:55.079741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.151 [2024-04-16 12:49:55.079887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.151 [2024-04-16 12:49:55.079948] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.151 qpair failed and we were unable to recover it. 00:21:56.151 [2024-04-16 12:49:55.080127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.151 [2024-04-16 12:49:55.080303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.151 [2024-04-16 12:49:55.080353] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.151 qpair failed and we were unable to recover it. 00:21:56.151 [2024-04-16 12:49:55.080495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.151 [2024-04-16 12:49:55.080678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.151 [2024-04-16 12:49:55.080719] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.151 qpair failed and we were unable to recover it. 00:21:56.151 [2024-04-16 12:49:55.080912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.151 [2024-04-16 12:49:55.081045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.151 [2024-04-16 12:49:55.081072] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.151 qpair failed and we were unable to recover it. 
00:21:56.151 [2024-04-16 12:49:55.081233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.151 [2024-04-16 12:49:55.081392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.151 [2024-04-16 12:49:55.081420] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.151 qpair failed and we were unable to recover it. 00:21:56.151 [2024-04-16 12:49:55.081596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.151 [2024-04-16 12:49:55.081807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.151 [2024-04-16 12:49:55.081836] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.151 qpair failed and we were unable to recover it. 00:21:56.151 [2024-04-16 12:49:55.081992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.151 [2024-04-16 12:49:55.082194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.151 [2024-04-16 12:49:55.082254] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.151 qpair failed and we were unable to recover it. 00:21:56.151 [2024-04-16 12:49:55.082421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.151 [2024-04-16 12:49:55.082582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.151 [2024-04-16 12:49:55.082615] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.151 qpair failed and we were unable to recover it. 00:21:56.151 [2024-04-16 12:49:55.082746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.151 [2024-04-16 12:49:55.082901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.151 [2024-04-16 12:49:55.082929] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.151 qpair failed and we were unable to recover it. 00:21:56.151 [2024-04-16 12:49:55.083085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.151 [2024-04-16 12:49:55.083332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.151 [2024-04-16 12:49:55.083361] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.151 qpair failed and we were unable to recover it. 00:21:56.151 [2024-04-16 12:49:55.083626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.151 [2024-04-16 12:49:55.083801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.151 [2024-04-16 12:49:55.083830] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.151 qpair failed and we were unable to recover it. 
00:21:56.151 [2024-04-16 12:49:55.083990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.151 [2024-04-16 12:49:55.084173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.151 [2024-04-16 12:49:55.084202] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.151 qpair failed and we were unable to recover it. 00:21:56.151 [2024-04-16 12:49:55.084432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.151 [2024-04-16 12:49:55.084655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.151 [2024-04-16 12:49:55.084684] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.151 qpair failed and we were unable to recover it. 00:21:56.151 [2024-04-16 12:49:55.084845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.151 [2024-04-16 12:49:55.085057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.151 [2024-04-16 12:49:55.085086] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.151 qpair failed and we were unable to recover it. 00:21:56.151 [2024-04-16 12:49:55.085302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.151 [2024-04-16 12:49:55.085453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.151 [2024-04-16 12:49:55.085481] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.151 qpair failed and we were unable to recover it. 00:21:56.151 [2024-04-16 12:49:55.085679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.151 [2024-04-16 12:49:55.085843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.151 [2024-04-16 12:49:55.085871] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.151 qpair failed and we were unable to recover it. 00:21:56.151 [2024-04-16 12:49:55.086057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.151 [2024-04-16 12:49:55.086283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.151 [2024-04-16 12:49:55.086336] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.151 qpair failed and we were unable to recover it. 00:21:56.151 [2024-04-16 12:49:55.086574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.151 [2024-04-16 12:49:55.086747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.151 [2024-04-16 12:49:55.086776] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.151 qpair failed and we were unable to recover it. 
00:21:56.151 [2024-04-16 12:49:55.087039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.151 [2024-04-16 12:49:55.087264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.151 [2024-04-16 12:49:55.087313] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.151 qpair failed and we were unable to recover it. 00:21:56.151 [2024-04-16 12:49:55.087539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.151 [2024-04-16 12:49:55.087725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.151 [2024-04-16 12:49:55.087754] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.151 qpair failed and we were unable to recover it. 00:21:56.152 [2024-04-16 12:49:55.087917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.152 [2024-04-16 12:49:55.088098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.152 [2024-04-16 12:49:55.088164] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.152 qpair failed and we were unable to recover it. 00:21:56.152 [2024-04-16 12:49:55.088415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.152 [2024-04-16 12:49:55.088669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.152 [2024-04-16 12:49:55.088698] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.152 qpair failed and we were unable to recover it. 00:21:56.152 [2024-04-16 12:49:55.088923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.152 [2024-04-16 12:49:55.089095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.152 [2024-04-16 12:49:55.089147] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.152 qpair failed and we were unable to recover it. 00:21:56.152 [2024-04-16 12:49:55.089386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.152 [2024-04-16 12:49:55.089572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.152 [2024-04-16 12:49:55.089610] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.152 qpair failed and we were unable to recover it. 00:21:56.152 [2024-04-16 12:49:55.089737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.152 [2024-04-16 12:49:55.089900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.152 [2024-04-16 12:49:55.089928] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.152 qpair failed and we were unable to recover it. 
00:21:56.152 [2024-04-16 12:49:55.090095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.152 [2024-04-16 12:49:55.090350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.152 [2024-04-16 12:49:55.090400] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.152 qpair failed and we were unable to recover it. 00:21:56.152 [2024-04-16 12:49:55.090652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.152 [2024-04-16 12:49:55.090807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.152 [2024-04-16 12:49:55.090835] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.152 qpair failed and we were unable to recover it. 00:21:56.152 [2024-04-16 12:49:55.091024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.152 [2024-04-16 12:49:55.091233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.152 [2024-04-16 12:49:55.091283] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.152 qpair failed and we were unable to recover it. 00:21:56.152 [2024-04-16 12:49:55.091544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.152 [2024-04-16 12:49:55.091701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.152 [2024-04-16 12:49:55.091730] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.152 qpair failed and we were unable to recover it. 00:21:56.152 [2024-04-16 12:49:55.091958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.152 [2024-04-16 12:49:55.092205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.152 [2024-04-16 12:49:55.092262] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.152 qpair failed and we were unable to recover it. 00:21:56.152 [2024-04-16 12:49:55.092458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.152 [2024-04-16 12:49:55.092641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.152 [2024-04-16 12:49:55.092685] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.152 qpair failed and we were unable to recover it. 00:21:56.152 [2024-04-16 12:49:55.092817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.152 [2024-04-16 12:49:55.092984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.152 [2024-04-16 12:49:55.093015] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.152 qpair failed and we were unable to recover it. 
00:21:56.152 [2024-04-16 12:49:55.093190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.152 [2024-04-16 12:49:55.093375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.152 [2024-04-16 12:49:55.093404] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.152 qpair failed and we were unable to recover it. 00:21:56.152 [2024-04-16 12:49:55.093557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.152 [2024-04-16 12:49:55.093720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.152 [2024-04-16 12:49:55.093748] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.152 qpair failed and we were unable to recover it. 00:21:56.152 [2024-04-16 12:49:55.093943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.152 [2024-04-16 12:49:55.094144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.152 [2024-04-16 12:49:55.094197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.152 qpair failed and we were unable to recover it. 00:21:56.152 [2024-04-16 12:49:55.094415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.152 [2024-04-16 12:49:55.094633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.152 [2024-04-16 12:49:55.094662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.152 qpair failed and we were unable to recover it. 00:21:56.152 [2024-04-16 12:49:55.094814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.152 [2024-04-16 12:49:55.095003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.152 [2024-04-16 12:49:55.095063] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.152 qpair failed and we were unable to recover it. 00:21:56.152 [2024-04-16 12:49:55.095279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.152 [2024-04-16 12:49:55.095512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.152 [2024-04-16 12:49:55.095540] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.152 qpair failed and we were unable to recover it. 00:21:56.152 [2024-04-16 12:49:55.095746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.152 [2024-04-16 12:49:55.095929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.152 [2024-04-16 12:49:55.095989] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.152 qpair failed and we were unable to recover it. 
00:21:56.152 [2024-04-16 12:49:55.096168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.152 [2024-04-16 12:49:55.096309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.152 [2024-04-16 12:49:55.096337] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.152 qpair failed and we were unable to recover it. 00:21:56.152 [2024-04-16 12:49:55.096494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.152 [2024-04-16 12:49:55.096654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.152 [2024-04-16 12:49:55.096684] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.152 qpair failed and we were unable to recover it. 00:21:56.152 [2024-04-16 12:49:55.096820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.152 [2024-04-16 12:49:55.096977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.152 [2024-04-16 12:49:55.097026] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.152 qpair failed and we were unable to recover it. 00:21:56.152 [2024-04-16 12:49:55.097242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.152 [2024-04-16 12:49:55.097378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.152 [2024-04-16 12:49:55.097406] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.152 qpair failed and we were unable to recover it. 00:21:56.152 [2024-04-16 12:49:55.097636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.152 [2024-04-16 12:49:55.097843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.152 [2024-04-16 12:49:55.097872] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.152 qpair failed and we were unable to recover it. 00:21:56.152 [2024-04-16 12:49:55.098081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.152 [2024-04-16 12:49:55.098257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.152 [2024-04-16 12:49:55.098307] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.152 qpair failed and we were unable to recover it. 00:21:56.152 [2024-04-16 12:49:55.098467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.152 [2024-04-16 12:49:55.098617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.152 [2024-04-16 12:49:55.098647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.152 qpair failed and we were unable to recover it. 
00:21:56.152 [2024-04-16 12:49:55.098829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.152 [2024-04-16 12:49:55.099125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.152 [2024-04-16 12:49:55.099174] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.152 qpair failed and we were unable to recover it. 00:21:56.152 [2024-04-16 12:49:55.099418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.152 [2024-04-16 12:49:55.099626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.152 [2024-04-16 12:49:55.099662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.153 qpair failed and we were unable to recover it. 00:21:56.153 [2024-04-16 12:49:55.099883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.153 [2024-04-16 12:49:55.100112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.153 [2024-04-16 12:49:55.100178] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.153 qpair failed and we were unable to recover it. 00:21:56.153 [2024-04-16 12:49:55.100366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.153 [2024-04-16 12:49:55.100506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.153 [2024-04-16 12:49:55.100534] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.153 qpair failed and we were unable to recover it. 00:21:56.153 [2024-04-16 12:49:55.100720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.153 [2024-04-16 12:49:55.100889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.153 [2024-04-16 12:49:55.100928] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.153 qpair failed and we were unable to recover it. 00:21:56.153 [2024-04-16 12:49:55.101095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.153 [2024-04-16 12:49:55.101303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.153 [2024-04-16 12:49:55.101364] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.153 qpair failed and we were unable to recover it. 00:21:56.153 [2024-04-16 12:49:55.101544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.153 [2024-04-16 12:49:55.101713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.153 [2024-04-16 12:49:55.101741] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.153 qpair failed and we were unable to recover it. 
00:21:56.153 [2024-04-16 12:49:55.101972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.153 [2024-04-16 12:49:55.102157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.153 [2024-04-16 12:49:55.102208] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.153 qpair failed and we were unable to recover it. 00:21:56.153 [2024-04-16 12:49:55.102379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.153 [2024-04-16 12:49:55.102637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.153 [2024-04-16 12:49:55.102666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.153 qpair failed and we were unable to recover it. 00:21:56.153 [2024-04-16 12:49:55.102889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.153 [2024-04-16 12:49:55.103108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.153 [2024-04-16 12:49:55.103157] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.153 qpair failed and we were unable to recover it. 00:21:56.153 [2024-04-16 12:49:55.103379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.153 [2024-04-16 12:49:55.103544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.153 [2024-04-16 12:49:55.103580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.153 qpair failed and we were unable to recover it. 00:21:56.153 [2024-04-16 12:49:55.103785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.153 [2024-04-16 12:49:55.104006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.153 [2024-04-16 12:49:55.104057] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.153 qpair failed and we were unable to recover it. 00:21:56.153 [2024-04-16 12:49:55.104323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.153 [2024-04-16 12:49:55.104473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.153 [2024-04-16 12:49:55.104506] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.153 qpair failed and we were unable to recover it. 00:21:56.153 [2024-04-16 12:49:55.104695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.153 [2024-04-16 12:49:55.104908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.153 [2024-04-16 12:49:55.104963] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.153 qpair failed and we were unable to recover it. 
00:21:56.153 [2024-04-16 12:49:55.105164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.153 [2024-04-16 12:49:55.105355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.153 [2024-04-16 12:49:55.105405] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.153 qpair failed and we were unable to recover it. 00:21:56.153 [2024-04-16 12:49:55.105550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.153 [2024-04-16 12:49:55.105701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.153 [2024-04-16 12:49:55.105729] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.153 qpair failed and we were unable to recover it. 00:21:56.153 [2024-04-16 12:49:55.105910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.153 [2024-04-16 12:49:55.106032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.153 [2024-04-16 12:49:55.106069] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.153 qpair failed and we were unable to recover it. 00:21:56.153 [2024-04-16 12:49:55.106221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.153 [2024-04-16 12:49:55.106466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.153 [2024-04-16 12:49:55.106495] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.153 qpair failed and we were unable to recover it. 00:21:56.153 [2024-04-16 12:49:55.106689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.153 [2024-04-16 12:49:55.106904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.153 [2024-04-16 12:49:55.106958] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.154 qpair failed and we were unable to recover it. 00:21:56.154 [2024-04-16 12:49:55.107187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.154 [2024-04-16 12:49:55.107420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.154 [2024-04-16 12:49:55.107448] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.154 qpair failed and we were unable to recover it. 00:21:56.154 [2024-04-16 12:49:55.107671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.154 [2024-04-16 12:49:55.107853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.154 [2024-04-16 12:49:55.107882] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.154 qpair failed and we were unable to recover it. 
00:21:56.154 [2024-04-16 12:49:55.108130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.154 [2024-04-16 12:49:55.108383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.154 [2024-04-16 12:49:55.108433] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.154 qpair failed and we were unable to recover it. 00:21:56.154 [2024-04-16 12:49:55.108657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.154 [2024-04-16 12:49:55.108829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.154 [2024-04-16 12:49:55.108857] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.154 qpair failed and we were unable to recover it. 00:21:56.154 [2024-04-16 12:49:55.109032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.154 [2024-04-16 12:49:55.109243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.154 [2024-04-16 12:49:55.109294] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.154 qpair failed and we were unable to recover it. 00:21:56.154 [2024-04-16 12:49:55.109498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.154 [2024-04-16 12:49:55.109647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.154 [2024-04-16 12:49:55.109677] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.154 qpair failed and we were unable to recover it. 00:21:56.154 [2024-04-16 12:49:55.109843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.154 [2024-04-16 12:49:55.110038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.154 [2024-04-16 12:49:55.110102] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.154 qpair failed and we were unable to recover it. 00:21:56.154 [2024-04-16 12:49:55.110285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.154 [2024-04-16 12:49:55.110462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.154 [2024-04-16 12:49:55.110490] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.154 qpair failed and we were unable to recover it. 00:21:56.154 [2024-04-16 12:49:55.110673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.154 [2024-04-16 12:49:55.110898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.154 [2024-04-16 12:49:55.110959] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.154 qpair failed and we were unable to recover it. 
00:21:56.154 [2024-04-16 12:49:55.111185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.154 [2024-04-16 12:49:55.111374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.154 [2024-04-16 12:49:55.111421] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.154 qpair failed and we were unable to recover it. 00:21:56.154 [2024-04-16 12:49:55.111646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.154 [2024-04-16 12:49:55.111795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.154 [2024-04-16 12:49:55.111826] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.154 qpair failed and we were unable to recover it. 00:21:56.154 [2024-04-16 12:49:55.112053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.154 [2024-04-16 12:49:55.112251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.154 [2024-04-16 12:49:55.112301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.154 qpair failed and we were unable to recover it. 00:21:56.154 [2024-04-16 12:49:55.112478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.154 [2024-04-16 12:49:55.112672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.154 [2024-04-16 12:49:55.112701] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.154 qpair failed and we were unable to recover it. 00:21:56.154 [2024-04-16 12:49:55.112932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.154 [2024-04-16 12:49:55.113103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.154 [2024-04-16 12:49:55.113155] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.154 qpair failed and we were unable to recover it. 00:21:56.154 [2024-04-16 12:49:55.113306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.154 [2024-04-16 12:49:55.113507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.154 [2024-04-16 12:49:55.113536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.154 qpair failed and we were unable to recover it. 00:21:56.154 [2024-04-16 12:49:55.113688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.154 [2024-04-16 12:49:55.113857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.154 [2024-04-16 12:49:55.113881] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.154 qpair failed and we were unable to recover it. 
00:21:56.154 [2024-04-16 12:49:55.114109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.154 [2024-04-16 12:49:55.114348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.154 [2024-04-16 12:49:55.114398] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.154 qpair failed and we were unable to recover it. 00:21:56.154 [2024-04-16 12:49:55.114605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.154 [2024-04-16 12:49:55.114774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.154 [2024-04-16 12:49:55.114803] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.154 qpair failed and we were unable to recover it. 00:21:56.154 [2024-04-16 12:49:55.114970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.154 [2024-04-16 12:49:55.115179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.154 [2024-04-16 12:49:55.115230] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.154 qpair failed and we were unable to recover it. 00:21:56.154 [2024-04-16 12:49:55.115402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.154 [2024-04-16 12:49:55.115627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.154 [2024-04-16 12:49:55.115655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.154 qpair failed and we were unable to recover it. 00:21:56.154 [2024-04-16 12:49:55.115822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.154 [2024-04-16 12:49:55.115997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.154 [2024-04-16 12:49:55.116047] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.154 qpair failed and we were unable to recover it. 00:21:56.154 [2024-04-16 12:49:55.116296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.154 [2024-04-16 12:49:55.116474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.154 [2024-04-16 12:49:55.116497] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.154 qpair failed and we were unable to recover it. 00:21:56.154 [2024-04-16 12:49:55.116705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.154 [2024-04-16 12:49:55.116853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.154 [2024-04-16 12:49:55.116877] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.154 qpair failed and we were unable to recover it. 
00:21:56.154 [2024-04-16 12:49:55.117096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.154 [2024-04-16 12:49:55.117340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.154 [2024-04-16 12:49:55.117364] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.154 qpair failed and we were unable to recover it. 00:21:56.154 [2024-04-16 12:49:55.117612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.154 [2024-04-16 12:49:55.117779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.154 [2024-04-16 12:49:55.117803] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.154 qpair failed and we were unable to recover it. 00:21:56.155 [2024-04-16 12:49:55.118069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.155 [2024-04-16 12:49:55.118337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.155 [2024-04-16 12:49:55.118386] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.155 qpair failed and we were unable to recover it. 00:21:56.155 [2024-04-16 12:49:55.118642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.155 [2024-04-16 12:49:55.118799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.155 [2024-04-16 12:49:55.118823] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.155 qpair failed and we were unable to recover it. 00:21:56.155 [2024-04-16 12:49:55.119070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.155 [2024-04-16 12:49:55.119304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.155 [2024-04-16 12:49:55.119355] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.155 qpair failed and we were unable to recover it. 00:21:56.155 [2024-04-16 12:49:55.119535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.155 [2024-04-16 12:49:55.119736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.155 [2024-04-16 12:49:55.119761] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.155 qpair failed and we were unable to recover it. 00:21:56.155 [2024-04-16 12:49:55.119966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.155 [2024-04-16 12:49:55.120160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.155 [2024-04-16 12:49:55.120208] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.155 qpair failed and we were unable to recover it. 
00:21:56.155 [2024-04-16 12:49:55.120344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.155 [2024-04-16 12:49:55.120550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.155 [2024-04-16 12:49:55.120622] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.155 qpair failed and we were unable to recover it. 00:21:56.155 [2024-04-16 12:49:55.120856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.155 [2024-04-16 12:49:55.121096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.155 [2024-04-16 12:49:55.121149] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.155 qpair failed and we were unable to recover it. 00:21:56.155 [2024-04-16 12:49:55.121356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.155 [2024-04-16 12:49:55.121552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.155 [2024-04-16 12:49:55.121590] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.155 qpair failed and we were unable to recover it. 00:21:56.155 [2024-04-16 12:49:55.121774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.155 [2024-04-16 12:49:55.121964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.155 [2024-04-16 12:49:55.121993] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.155 qpair failed and we were unable to recover it. 00:21:56.155 [2024-04-16 12:49:55.122197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.155 [2024-04-16 12:49:55.122401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.155 [2024-04-16 12:49:55.122448] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.155 qpair failed and we were unable to recover it. 00:21:56.155 [2024-04-16 12:49:55.122661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.155 [2024-04-16 12:49:55.122832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.155 [2024-04-16 12:49:55.122872] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.155 qpair failed and we were unable to recover it. 00:21:56.155 [2024-04-16 12:49:55.123055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.155 [2024-04-16 12:49:55.123271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.155 [2024-04-16 12:49:55.123317] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.155 qpair failed and we were unable to recover it. 
00:21:56.155 [2024-04-16 12:49:55.123556 .. 12:49:55.183183] the same four-line sequence repeats continuously with no variation: two posix.c:1037:posix_sock_create connect() failures (errno = 111), one nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it." (elapsed time advances from 00:21:56.155 to 00:21:56.161)
00:21:56.161 [2024-04-16 12:49:55.183346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.161 [2024-04-16 12:49:55.183497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.161 [2024-04-16 12:49:55.183525] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.161 qpair failed and we were unable to recover it. 00:21:56.161 [2024-04-16 12:49:55.183726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.161 [2024-04-16 12:49:55.183914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.161 [2024-04-16 12:49:55.183963] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.161 qpair failed and we were unable to recover it. 00:21:56.161 [2024-04-16 12:49:55.184167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.161 [2024-04-16 12:49:55.184461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.161 [2024-04-16 12:49:55.184491] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.161 qpair failed and we were unable to recover it. 00:21:56.161 [2024-04-16 12:49:55.184809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.161 [2024-04-16 12:49:55.185048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.161 [2024-04-16 12:49:55.185098] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.161 qpair failed and we were unable to recover it. 00:21:56.161 [2024-04-16 12:49:55.185329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.161 [2024-04-16 12:49:55.185655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.161 [2024-04-16 12:49:55.185684] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.161 qpair failed and we were unable to recover it. 00:21:56.161 [2024-04-16 12:49:55.185900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.161 [2024-04-16 12:49:55.186156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.161 [2024-04-16 12:49:55.186205] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.161 qpair failed and we were unable to recover it. 00:21:56.161 [2024-04-16 12:49:55.186389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.161 [2024-04-16 12:49:55.186680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.161 [2024-04-16 12:49:55.186712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.161 qpair failed and we were unable to recover it. 
00:21:56.161 [2024-04-16 12:49:55.186858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.161 [2024-04-16 12:49:55.187128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.161 [2024-04-16 12:49:55.187178] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.161 qpair failed and we were unable to recover it. 00:21:56.161 [2024-04-16 12:49:55.187483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.161 [2024-04-16 12:49:55.187717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.161 [2024-04-16 12:49:55.187746] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.161 qpair failed and we were unable to recover it. 00:21:56.161 [2024-04-16 12:49:55.187921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.161 [2024-04-16 12:49:55.188177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.161 [2024-04-16 12:49:55.188228] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.161 qpair failed and we were unable to recover it. 00:21:56.161 [2024-04-16 12:49:55.188473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.161 [2024-04-16 12:49:55.188704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.161 [2024-04-16 12:49:55.188732] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.161 qpair failed and we were unable to recover it. 00:21:56.161 [2024-04-16 12:49:55.188967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.161 [2024-04-16 12:49:55.189262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.161 [2024-04-16 12:49:55.189313] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.161 qpair failed and we were unable to recover it. 00:21:56.161 [2024-04-16 12:49:55.189528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.161 [2024-04-16 12:49:55.189723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.161 [2024-04-16 12:49:55.189751] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.161 qpair failed and we were unable to recover it. 00:21:56.161 [2024-04-16 12:49:55.190001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.161 [2024-04-16 12:49:55.190262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.161 [2024-04-16 12:49:55.190310] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.161 qpair failed and we were unable to recover it. 
00:21:56.161 [2024-04-16 12:49:55.190502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.161 [2024-04-16 12:49:55.190658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.161 [2024-04-16 12:49:55.190687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.161 qpair failed and we were unable to recover it. 00:21:56.161 [2024-04-16 12:49:55.190846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.161 [2024-04-16 12:49:55.191140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.161 [2024-04-16 12:49:55.191196] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.161 qpair failed and we were unable to recover it. 00:21:56.161 [2024-04-16 12:49:55.191496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.161 [2024-04-16 12:49:55.191745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.161 [2024-04-16 12:49:55.191774] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.161 qpair failed and we were unable to recover it. 00:21:56.161 [2024-04-16 12:49:55.192064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.161 [2024-04-16 12:49:55.192402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.161 [2024-04-16 12:49:55.192450] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.161 qpair failed and we were unable to recover it. 00:21:56.161 [2024-04-16 12:49:55.192658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.161 [2024-04-16 12:49:55.192846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.161 [2024-04-16 12:49:55.192874] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.161 qpair failed and we were unable to recover it. 00:21:56.161 [2024-04-16 12:49:55.193153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.161 [2024-04-16 12:49:55.193470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.161 [2024-04-16 12:49:55.193529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.161 qpair failed and we were unable to recover it. 00:21:56.162 [2024-04-16 12:49:55.193701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.162 [2024-04-16 12:49:55.194025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.162 [2024-04-16 12:49:55.194089] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.162 qpair failed and we were unable to recover it. 
00:21:56.162 [2024-04-16 12:49:55.194404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.162 [2024-04-16 12:49:55.194648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.162 [2024-04-16 12:49:55.194677] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.162 qpair failed and we were unable to recover it. 00:21:56.162 [2024-04-16 12:49:55.194849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.162 [2024-04-16 12:49:55.195169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.162 [2024-04-16 12:49:55.195215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.162 qpair failed and we were unable to recover it. 00:21:56.162 [2024-04-16 12:49:55.195491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.162 [2024-04-16 12:49:55.195759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.162 [2024-04-16 12:49:55.195793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.162 qpair failed and we were unable to recover it. 00:21:56.162 [2024-04-16 12:49:55.196063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.162 [2024-04-16 12:49:55.196355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.162 [2024-04-16 12:49:55.196406] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.162 qpair failed and we were unable to recover it. 00:21:56.162 [2024-04-16 12:49:55.196680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.162 [2024-04-16 12:49:55.196929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.162 [2024-04-16 12:49:55.196974] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.162 qpair failed and we were unable to recover it. 00:21:56.162 [2024-04-16 12:49:55.197253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.432 [2024-04-16 12:49:55.197578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.432 [2024-04-16 12:49:55.197617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.432 qpair failed and we were unable to recover it. 00:21:56.432 [2024-04-16 12:49:55.197859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.432 [2024-04-16 12:49:55.198147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.432 [2024-04-16 12:49:55.198192] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.432 qpair failed and we were unable to recover it. 
00:21:56.432 [2024-04-16 12:49:55.198492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.432 [2024-04-16 12:49:55.198802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.432 [2024-04-16 12:49:55.198841] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.432 qpair failed and we were unable to recover it. 00:21:56.432 [2024-04-16 12:49:55.199106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.432 [2024-04-16 12:49:55.199377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.432 [2024-04-16 12:49:55.199424] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.432 qpair failed and we were unable to recover it. 00:21:56.432 [2024-04-16 12:49:55.199617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.432 [2024-04-16 12:49:55.199802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.432 [2024-04-16 12:49:55.199857] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.432 qpair failed and we were unable to recover it. 00:21:56.432 [2024-04-16 12:49:55.200141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.432 [2024-04-16 12:49:55.200435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.432 [2024-04-16 12:49:55.200483] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.432 qpair failed and we were unable to recover it. 00:21:56.432 [2024-04-16 12:49:55.200748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.432 [2024-04-16 12:49:55.201008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.432 [2024-04-16 12:49:55.201057] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.432 qpair failed and we were unable to recover it. 00:21:56.432 [2024-04-16 12:49:55.201299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.432 [2024-04-16 12:49:55.201532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.432 [2024-04-16 12:49:55.201560] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.432 qpair failed and we were unable to recover it. 00:21:56.432 [2024-04-16 12:49:55.201739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.432 [2024-04-16 12:49:55.202055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.432 [2024-04-16 12:49:55.202106] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.432 qpair failed and we were unable to recover it. 
00:21:56.432 [2024-04-16 12:49:55.202373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.432 [2024-04-16 12:49:55.202651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.432 [2024-04-16 12:49:55.202680] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.432 qpair failed and we were unable to recover it. 00:21:56.432 [2024-04-16 12:49:55.202913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.432 [2024-04-16 12:49:55.203273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.432 [2024-04-16 12:49:55.203320] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.432 qpair failed and we were unable to recover it. 00:21:56.432 [2024-04-16 12:49:55.203593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.432 [2024-04-16 12:49:55.203820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.432 [2024-04-16 12:49:55.203848] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.432 qpair failed and we were unable to recover it. 00:21:56.432 [2024-04-16 12:49:55.204091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.432 [2024-04-16 12:49:55.204284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.432 [2024-04-16 12:49:55.204311] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.432 qpair failed and we were unable to recover it. 00:21:56.432 [2024-04-16 12:49:55.204540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.432 [2024-04-16 12:49:55.204719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.432 [2024-04-16 12:49:55.204747] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.432 qpair failed and we were unable to recover it. 00:21:56.432 [2024-04-16 12:49:55.205000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.432 [2024-04-16 12:49:55.205247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.432 [2024-04-16 12:49:55.205295] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.432 qpair failed and we were unable to recover it. 00:21:56.432 [2024-04-16 12:49:55.205497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.432 [2024-04-16 12:49:55.205687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.432 [2024-04-16 12:49:55.205716] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.432 qpair failed and we were unable to recover it. 
00:21:56.432 [2024-04-16 12:49:55.205893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.432 [2024-04-16 12:49:55.206117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.432 [2024-04-16 12:49:55.206167] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.432 qpair failed and we were unable to recover it. 00:21:56.432 [2024-04-16 12:49:55.206398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.432 [2024-04-16 12:49:55.206585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.432 [2024-04-16 12:49:55.206614] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.432 qpair failed and we were unable to recover it. 00:21:56.432 [2024-04-16 12:49:55.206810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.432 [2024-04-16 12:49:55.207009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.432 [2024-04-16 12:49:55.207060] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.432 qpair failed and we were unable to recover it. 00:21:56.432 [2024-04-16 12:49:55.207229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.432 [2024-04-16 12:49:55.207406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.432 [2024-04-16 12:49:55.207428] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.432 qpair failed and we were unable to recover it. 00:21:56.432 [2024-04-16 12:49:55.207628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.432 [2024-04-16 12:49:55.207791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.432 [2024-04-16 12:49:55.207819] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.432 qpair failed and we were unable to recover it. 00:21:56.432 [2024-04-16 12:49:55.208123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.432 [2024-04-16 12:49:55.208420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.432 [2024-04-16 12:49:55.208471] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.432 qpair failed and we were unable to recover it. 00:21:56.432 [2024-04-16 12:49:55.208698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.432 [2024-04-16 12:49:55.208871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.432 [2024-04-16 12:49:55.208900] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.432 qpair failed and we were unable to recover it. 
00:21:56.432 [2024-04-16 12:49:55.209207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.432 [2024-04-16 12:49:55.209473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.432 [2024-04-16 12:49:55.209502] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.433 qpair failed and we were unable to recover it. 00:21:56.433 [2024-04-16 12:49:55.209696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.433 [2024-04-16 12:49:55.209858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.433 [2024-04-16 12:49:55.209887] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.433 qpair failed and we were unable to recover it. 00:21:56.433 [2024-04-16 12:49:55.210100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.433 [2024-04-16 12:49:55.210394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.433 [2024-04-16 12:49:55.210443] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.433 qpair failed and we were unable to recover it. 00:21:56.433 [2024-04-16 12:49:55.210616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.433 [2024-04-16 12:49:55.210931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.433 [2024-04-16 12:49:55.210989] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.433 qpair failed and we were unable to recover it. 00:21:56.433 [2024-04-16 12:49:55.211274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.433 [2024-04-16 12:49:55.211450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.433 [2024-04-16 12:49:55.211478] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.433 qpair failed and we were unable to recover it. 00:21:56.433 [2024-04-16 12:49:55.211650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.433 [2024-04-16 12:49:55.211828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.433 [2024-04-16 12:49:55.211895] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.433 qpair failed and we were unable to recover it. 00:21:56.433 [2024-04-16 12:49:55.212161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.433 [2024-04-16 12:49:55.212457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.433 [2024-04-16 12:49:55.212508] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.433 qpair failed and we were unable to recover it. 
00:21:56.433 [2024-04-16 12:49:55.212700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.433 [2024-04-16 12:49:55.212907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.433 [2024-04-16 12:49:55.212958] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.433 qpair failed and we were unable to recover it. 00:21:56.433 [2024-04-16 12:49:55.213147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.433 [2024-04-16 12:49:55.213441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.433 [2024-04-16 12:49:55.213489] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.433 qpair failed and we were unable to recover it. 00:21:56.433 [2024-04-16 12:49:55.213684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.433 [2024-04-16 12:49:55.213823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.433 [2024-04-16 12:49:55.213851] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.433 qpair failed and we were unable to recover it. 00:21:56.433 [2024-04-16 12:49:55.214078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.433 [2024-04-16 12:49:55.214298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.433 [2024-04-16 12:49:55.214351] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.433 qpair failed and we were unable to recover it. 00:21:56.433 [2024-04-16 12:49:55.214548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.433 [2024-04-16 12:49:55.214745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.433 [2024-04-16 12:49:55.214773] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.433 qpair failed and we were unable to recover it. 00:21:56.433 [2024-04-16 12:49:55.215000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.433 [2024-04-16 12:49:55.215276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.433 [2024-04-16 12:49:55.215323] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.433 qpair failed and we were unable to recover it. 00:21:56.433 [2024-04-16 12:49:55.215540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.433 [2024-04-16 12:49:55.215701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.433 [2024-04-16 12:49:55.215730] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.433 qpair failed and we were unable to recover it. 
00:21:56.433 [2024-04-16 12:49:55.215918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.433 [2024-04-16 12:49:55.216221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.433 [2024-04-16 12:49:55.216271] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.433 qpair failed and we were unable to recover it. 00:21:56.433 [2024-04-16 12:49:55.216485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.433 [2024-04-16 12:49:55.216699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.433 [2024-04-16 12:49:55.216729] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.433 qpair failed and we were unable to recover it. 00:21:56.433 [2024-04-16 12:49:55.216928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.433 [2024-04-16 12:49:55.217184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.433 [2024-04-16 12:49:55.217234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.433 qpair failed and we were unable to recover it. 00:21:56.433 [2024-04-16 12:49:55.217495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.433 [2024-04-16 12:49:55.217720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.433 [2024-04-16 12:49:55.217749] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.433 qpair failed and we were unable to recover it. 00:21:56.433 [2024-04-16 12:49:55.218013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.433 [2024-04-16 12:49:55.218295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.433 [2024-04-16 12:49:55.218349] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.433 qpair failed and we were unable to recover it. 00:21:56.433 [2024-04-16 12:49:55.218550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.433 [2024-04-16 12:49:55.218798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.433 [2024-04-16 12:49:55.218828] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.433 qpair failed and we were unable to recover it. 00:21:56.433 [2024-04-16 12:49:55.219052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.433 [2024-04-16 12:49:55.219321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.433 [2024-04-16 12:49:55.219371] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.433 qpair failed and we were unable to recover it. 
00:21:56.433 [2024-04-16 12:49:55.219579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.433 [2024-04-16 12:49:55.219747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.433 [2024-04-16 12:49:55.219775] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.433 qpair failed and we were unable to recover it. 00:21:56.433 [2024-04-16 12:49:55.219935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.433 [2024-04-16 12:49:55.220138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.433 [2024-04-16 12:49:55.220188] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.433 qpair failed and we were unable to recover it. 00:21:56.433 [2024-04-16 12:49:55.220412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.433 [2024-04-16 12:49:55.220608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.433 [2024-04-16 12:49:55.220637] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.433 qpair failed and we were unable to recover it. 00:21:56.433 [2024-04-16 12:49:55.220831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.433 [2024-04-16 12:49:55.221011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.433 [2024-04-16 12:49:55.221064] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.433 qpair failed and we were unable to recover it. 00:21:56.433 [2024-04-16 12:49:55.221242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.433 [2024-04-16 12:49:55.221407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.433 [2024-04-16 12:49:55.221440] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.433 qpair failed and we were unable to recover it. 00:21:56.433 [2024-04-16 12:49:55.221630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.433 [2024-04-16 12:49:55.221763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.433 [2024-04-16 12:49:55.221791] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.433 qpair failed and we were unable to recover it. 00:21:56.433 [2024-04-16 12:49:55.221949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.433 [2024-04-16 12:49:55.222145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.433 [2024-04-16 12:49:55.222192] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.433 qpair failed and we were unable to recover it. 
00:21:56.433 [2024-04-16 12:49:55.222391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.433 [2024-04-16 12:49:55.222577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.433 [2024-04-16 12:49:55.222606] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.433 qpair failed and we were unable to recover it. 00:21:56.433 [2024-04-16 12:49:55.222850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.433 [2024-04-16 12:49:55.223064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.434 [2024-04-16 12:49:55.223115] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.434 qpair failed and we were unable to recover it. 00:21:56.434 [2024-04-16 12:49:55.223332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.434 [2024-04-16 12:49:55.223531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.434 [2024-04-16 12:49:55.223559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.434 qpair failed and we were unable to recover it. 00:21:56.434 [2024-04-16 12:49:55.223781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.434 [2024-04-16 12:49:55.223990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.434 [2024-04-16 12:49:55.224040] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.434 qpair failed and we were unable to recover it. 00:21:56.434 [2024-04-16 12:49:55.224266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.434 [2024-04-16 12:49:55.224418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.434 [2024-04-16 12:49:55.224455] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.434 qpair failed and we were unable to recover it. 00:21:56.434 [2024-04-16 12:49:55.224593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.434 [2024-04-16 12:49:55.224903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.434 [2024-04-16 12:49:55.224953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.434 qpair failed and we were unable to recover it. 00:21:56.434 [2024-04-16 12:49:55.225220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.434 [2024-04-16 12:49:55.225501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.434 [2024-04-16 12:49:55.225551] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.434 qpair failed and we were unable to recover it. 
00:21:56.434 [2024-04-16 12:49:55.225806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.434 [2024-04-16 12:49:55.226113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.434 [2024-04-16 12:49:55.226163] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.434 qpair failed and we were unable to recover it. 00:21:56.434 [2024-04-16 12:49:55.226433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.434 [2024-04-16 12:49:55.226611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.434 [2024-04-16 12:49:55.226664] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.434 qpair failed and we were unable to recover it. 00:21:56.434 [2024-04-16 12:49:55.226832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.434 [2024-04-16 12:49:55.227049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.434 [2024-04-16 12:49:55.227095] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.434 qpair failed and we were unable to recover it. 00:21:56.434 [2024-04-16 12:49:55.227405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.434 [2024-04-16 12:49:55.227656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.434 [2024-04-16 12:49:55.227685] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.434 qpair failed and we were unable to recover it. 00:21:56.434 [2024-04-16 12:49:55.227959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.434 [2024-04-16 12:49:55.228225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.434 [2024-04-16 12:49:55.228276] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.434 qpair failed and we were unable to recover it. 00:21:56.434 [2024-04-16 12:49:55.228547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.434 [2024-04-16 12:49:55.228741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.434 [2024-04-16 12:49:55.228770] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.434 qpair failed and we were unable to recover it. 00:21:56.434 [2024-04-16 12:49:55.228931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.434 [2024-04-16 12:49:55.229159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.434 [2024-04-16 12:49:55.229208] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.434 qpair failed and we were unable to recover it. 
00:21:56.434 [2024-04-16 12:49:55.229445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.434 [2024-04-16 12:49:55.229675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.434 [2024-04-16 12:49:55.229713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.434 qpair failed and we were unable to recover it. 00:21:56.434 [2024-04-16 12:49:55.229993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.434 [2024-04-16 12:49:55.230292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.434 [2024-04-16 12:49:55.230343] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.434 qpair failed and we were unable to recover it. 00:21:56.434 [2024-04-16 12:49:55.230602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.434 [2024-04-16 12:49:55.230804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.434 [2024-04-16 12:49:55.230833] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.434 qpair failed and we were unable to recover it. 00:21:56.434 [2024-04-16 12:49:55.231068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.434 [2024-04-16 12:49:55.231323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.434 [2024-04-16 12:49:55.231374] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.434 qpair failed and we were unable to recover it. 00:21:56.434 [2024-04-16 12:49:55.231638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.434 [2024-04-16 12:49:55.231934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.434 [2024-04-16 12:49:55.231962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.434 qpair failed and we were unable to recover it. 00:21:56.434 [2024-04-16 12:49:55.232264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.434 [2024-04-16 12:49:55.232650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.434 [2024-04-16 12:49:55.232680] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.434 qpair failed and we were unable to recover it. 00:21:56.434 [2024-04-16 12:49:55.232978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.434 [2024-04-16 12:49:55.233286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.434 [2024-04-16 12:49:55.233338] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.434 qpair failed and we were unable to recover it. 
00:21:56.434 [2024-04-16 12:49:55.233636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.434 [2024-04-16 12:49:55.233911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.434 [2024-04-16 12:49:55.233940] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.434 qpair failed and we were unable to recover it.
[... the same failure cycle repeats verbatim for every subsequent reconnect attempt from 12:49:55.234 through 12:49:55.325: two posix.c:1037:posix_sock_create connect() failures with errno = 111, one nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it."; only the timestamps differ between attempts ...]
00:21:56.440 [2024-04-16 12:49:55.325244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.440 [2024-04-16 12:49:55.325490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.440 [2024-04-16 12:49:55.325540] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.440 qpair failed and we were unable to recover it.
00:21:56.440 [2024-04-16 12:49:55.325765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.440 [2024-04-16 12:49:55.326027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.440 [2024-04-16 12:49:55.326077] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.440 qpair failed and we were unable to recover it. 00:21:56.440 [2024-04-16 12:49:55.326287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.440 [2024-04-16 12:49:55.326485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.440 [2024-04-16 12:49:55.326513] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.440 qpair failed and we were unable to recover it. 00:21:56.440 [2024-04-16 12:49:55.326699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.440 [2024-04-16 12:49:55.326921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.440 [2024-04-16 12:49:55.326972] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.440 qpair failed and we were unable to recover it. 00:21:56.440 [2024-04-16 12:49:55.327171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.440 [2024-04-16 12:49:55.327379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.440 [2024-04-16 12:49:55.327429] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.440 qpair failed and we were unable to recover it. 00:21:56.440 [2024-04-16 12:49:55.327641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.440 [2024-04-16 12:49:55.327853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.440 [2024-04-16 12:49:55.327880] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.440 qpair failed and we were unable to recover it. 00:21:56.440 [2024-04-16 12:49:55.328078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.440 [2024-04-16 12:49:55.328321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.440 [2024-04-16 12:49:55.328372] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.440 qpair failed and we were unable to recover it. 00:21:56.440 [2024-04-16 12:49:55.328616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.440 [2024-04-16 12:49:55.328822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.440 [2024-04-16 12:49:55.328851] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.440 qpair failed and we were unable to recover it. 
00:21:56.440 [2024-04-16 12:49:55.329062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.440 [2024-04-16 12:49:55.329326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.440 [2024-04-16 12:49:55.329355] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.440 qpair failed and we were unable to recover it. 00:21:56.440 [2024-04-16 12:49:55.329569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.440 [2024-04-16 12:49:55.329783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.440 [2024-04-16 12:49:55.329812] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.440 qpair failed and we were unable to recover it. 00:21:56.440 [2024-04-16 12:49:55.330068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.440 [2024-04-16 12:49:55.330273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.440 [2024-04-16 12:49:55.330322] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.440 qpair failed and we were unable to recover it. 00:21:56.440 [2024-04-16 12:49:55.330531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.440 [2024-04-16 12:49:55.330712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.440 [2024-04-16 12:49:55.330740] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.440 qpair failed and we were unable to recover it. 00:21:56.440 [2024-04-16 12:49:55.330986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.440 [2024-04-16 12:49:55.331219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.440 [2024-04-16 12:49:55.331268] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.440 qpair failed and we were unable to recover it. 00:21:56.440 [2024-04-16 12:49:55.331474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.440 [2024-04-16 12:49:55.331682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.440 [2024-04-16 12:49:55.331712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.440 qpair failed and we were unable to recover it. 00:21:56.440 [2024-04-16 12:49:55.331964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.440 [2024-04-16 12:49:55.332174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.440 [2024-04-16 12:49:55.332222] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.440 qpair failed and we were unable to recover it. 
00:21:56.440 [2024-04-16 12:49:55.332462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.440 [2024-04-16 12:49:55.332703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.440 [2024-04-16 12:49:55.332733] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.440 qpair failed and we were unable to recover it. 00:21:56.440 [2024-04-16 12:49:55.332900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.440 [2024-04-16 12:49:55.333066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.440 [2024-04-16 12:49:55.333098] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.440 qpair failed and we were unable to recover it. 00:21:56.440 [2024-04-16 12:49:55.333293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.440 [2024-04-16 12:49:55.333497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.440 [2024-04-16 12:49:55.333525] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.440 qpair failed and we were unable to recover it. 00:21:56.440 [2024-04-16 12:49:55.333709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.440 [2024-04-16 12:49:55.333885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.440 [2024-04-16 12:49:55.333923] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.440 qpair failed and we were unable to recover it. 00:21:56.440 [2024-04-16 12:49:55.334077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.440 [2024-04-16 12:49:55.334205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.440 [2024-04-16 12:49:55.334232] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.440 qpair failed and we were unable to recover it. 00:21:56.440 [2024-04-16 12:49:55.334426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.440 [2024-04-16 12:49:55.334589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.440 [2024-04-16 12:49:55.334617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.440 qpair failed and we were unable to recover it. 00:21:56.440 [2024-04-16 12:49:55.334807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.440 [2024-04-16 12:49:55.334986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.440 [2024-04-16 12:49:55.335044] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.440 qpair failed and we were unable to recover it. 
00:21:56.440 [2024-04-16 12:49:55.335219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.440 [2024-04-16 12:49:55.335387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.440 [2024-04-16 12:49:55.335414] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.440 qpair failed and we were unable to recover it. 00:21:56.440 [2024-04-16 12:49:55.335577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.440 [2024-04-16 12:49:55.335766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.440 [2024-04-16 12:49:55.335794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.440 qpair failed and we were unable to recover it. 00:21:56.440 [2024-04-16 12:49:55.339594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.441 [2024-04-16 12:49:55.339767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.441 [2024-04-16 12:49:55.339798] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.441 qpair failed and we were unable to recover it. 00:21:56.441 [2024-04-16 12:49:55.339976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.441 [2024-04-16 12:49:55.340188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.441 [2024-04-16 12:49:55.340219] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.441 qpair failed and we were unable to recover it. 00:21:56.441 [2024-04-16 12:49:55.340428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.441 [2024-04-16 12:49:55.340620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.441 [2024-04-16 12:49:55.340650] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.441 qpair failed and we were unable to recover it. 00:21:56.441 [2024-04-16 12:49:55.340826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.441 [2024-04-16 12:49:55.341049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.441 [2024-04-16 12:49:55.341081] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.441 qpair failed and we were unable to recover it. 00:21:56.441 [2024-04-16 12:49:55.341274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.441 [2024-04-16 12:49:55.341508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.441 [2024-04-16 12:49:55.341536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.441 qpair failed and we were unable to recover it. 
00:21:56.441 [2024-04-16 12:49:55.341729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.441 [2024-04-16 12:49:55.341916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.441 [2024-04-16 12:49:55.341944] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.441 qpair failed and we were unable to recover it. 00:21:56.441 [2024-04-16 12:49:55.342182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.441 [2024-04-16 12:49:55.342431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.441 [2024-04-16 12:49:55.342461] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.441 qpair failed and we were unable to recover it. 00:21:56.441 [2024-04-16 12:49:55.342685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.441 [2024-04-16 12:49:55.342832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.441 [2024-04-16 12:49:55.342861] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.441 qpair failed and we were unable to recover it. 00:21:56.441 [2024-04-16 12:49:55.342996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.441 [2024-04-16 12:49:55.343188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.441 [2024-04-16 12:49:55.343216] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.441 qpair failed and we were unable to recover it. 00:21:56.441 [2024-04-16 12:49:55.343412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.441 [2024-04-16 12:49:55.343606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.441 [2024-04-16 12:49:55.343635] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.441 qpair failed and we were unable to recover it. 00:21:56.441 [2024-04-16 12:49:55.343801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.441 [2024-04-16 12:49:55.343934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.441 [2024-04-16 12:49:55.343957] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.441 qpair failed and we were unable to recover it. 00:21:56.441 [2024-04-16 12:49:55.344146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.441 [2024-04-16 12:49:55.344306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.441 [2024-04-16 12:49:55.344335] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.441 qpair failed and we were unable to recover it. 
00:21:56.441 [2024-04-16 12:49:55.344525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.441 [2024-04-16 12:49:55.344695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.441 [2024-04-16 12:49:55.344722] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.441 qpair failed and we were unable to recover it. 00:21:56.441 [2024-04-16 12:49:55.344975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.441 [2024-04-16 12:49:55.345213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.441 [2024-04-16 12:49:55.345246] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.441 qpair failed and we were unable to recover it. 00:21:56.441 [2024-04-16 12:49:55.345461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.441 [2024-04-16 12:49:55.345696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.441 [2024-04-16 12:49:55.345729] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.441 qpair failed and we were unable to recover it. 00:21:56.441 [2024-04-16 12:49:55.345930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.441 [2024-04-16 12:49:55.346073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.441 [2024-04-16 12:49:55.346101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.441 qpair failed and we were unable to recover it. 00:21:56.441 [2024-04-16 12:49:55.346293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.441 [2024-04-16 12:49:55.346459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.441 [2024-04-16 12:49:55.346487] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.441 qpair failed and we were unable to recover it. 00:21:56.441 [2024-04-16 12:49:55.346654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.441 [2024-04-16 12:49:55.346819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.441 [2024-04-16 12:49:55.346847] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.441 qpair failed and we were unable to recover it. 00:21:56.441 [2024-04-16 12:49:55.347015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.441 [2024-04-16 12:49:55.347171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.441 [2024-04-16 12:49:55.347198] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.441 qpair failed and we were unable to recover it. 
00:21:56.441 [2024-04-16 12:49:55.347373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.441 [2024-04-16 12:49:55.347523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.441 [2024-04-16 12:49:55.347546] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.441 qpair failed and we were unable to recover it. 00:21:56.441 [2024-04-16 12:49:55.347769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.441 [2024-04-16 12:49:55.347926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.441 [2024-04-16 12:49:55.347964] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.441 qpair failed and we were unable to recover it. 00:21:56.441 [2024-04-16 12:49:55.348151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.441 [2024-04-16 12:49:55.348278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.441 [2024-04-16 12:49:55.348301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.441 qpair failed and we were unable to recover it. 00:21:56.441 [2024-04-16 12:49:55.348456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.441 [2024-04-16 12:49:55.348658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.441 [2024-04-16 12:49:55.348684] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.441 qpair failed and we were unable to recover it. 00:21:56.441 [2024-04-16 12:49:55.348918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.441 [2024-04-16 12:49:55.349149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.441 [2024-04-16 12:49:55.349176] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.441 qpair failed and we were unable to recover it. 00:21:56.441 [2024-04-16 12:49:55.349441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.441 [2024-04-16 12:49:55.349642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.442 [2024-04-16 12:49:55.349667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.442 qpair failed and we were unable to recover it. 00:21:56.442 [2024-04-16 12:49:55.349886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.442 [2024-04-16 12:49:55.350070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.442 [2024-04-16 12:49:55.350114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.442 qpair failed and we were unable to recover it. 
00:21:56.442 [2024-04-16 12:49:55.350347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.442 [2024-04-16 12:49:55.350579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.442 [2024-04-16 12:49:55.350632] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.442 qpair failed and we were unable to recover it. 00:21:56.442 [2024-04-16 12:49:55.350787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.442 [2024-04-16 12:49:55.350944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.442 [2024-04-16 12:49:55.350973] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.442 qpair failed and we were unable to recover it. 00:21:56.442 [2024-04-16 12:49:55.351136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.442 [2024-04-16 12:49:55.351292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.442 [2024-04-16 12:49:55.351320] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.442 qpair failed and we were unable to recover it. 00:21:56.442 [2024-04-16 12:49:55.351478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.442 [2024-04-16 12:49:55.351641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.442 [2024-04-16 12:49:55.351667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.442 qpair failed and we were unable to recover it. 00:21:56.442 [2024-04-16 12:49:55.351818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.442 [2024-04-16 12:49:55.352008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.442 [2024-04-16 12:49:55.352043] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.442 qpair failed and we were unable to recover it. 00:21:56.442 [2024-04-16 12:49:55.352234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.442 [2024-04-16 12:49:55.352457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.442 [2024-04-16 12:49:55.352485] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.442 qpair failed and we were unable to recover it. 00:21:56.442 [2024-04-16 12:49:55.352712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.442 [2024-04-16 12:49:55.352858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.442 [2024-04-16 12:49:55.352884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.442 qpair failed and we were unable to recover it. 
00:21:56.442 [2024-04-16 12:49:55.353034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.442 [2024-04-16 12:49:55.353237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.442 [2024-04-16 12:49:55.353283] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.442 qpair failed and we were unable to recover it. 00:21:56.442 [2024-04-16 12:49:55.353469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.442 [2024-04-16 12:49:55.353686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.442 [2024-04-16 12:49:55.353712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.442 qpair failed and we were unable to recover it. 00:21:56.442 [2024-04-16 12:49:55.353917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.442 [2024-04-16 12:49:55.354162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.442 [2024-04-16 12:49:55.354206] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.442 qpair failed and we were unable to recover it. 00:21:56.442 [2024-04-16 12:49:55.354395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.442 [2024-04-16 12:49:55.354576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.442 [2024-04-16 12:49:55.354619] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.442 qpair failed and we were unable to recover it. 00:21:56.442 [2024-04-16 12:49:55.354754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.442 [2024-04-16 12:49:55.354954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.442 [2024-04-16 12:49:55.354998] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.442 qpair failed and we were unable to recover it. 00:21:56.442 [2024-04-16 12:49:55.355246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.442 [2024-04-16 12:49:55.355492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.442 [2024-04-16 12:49:55.355539] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.442 qpair failed and we were unable to recover it. 00:21:56.442 [2024-04-16 12:49:55.355710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.442 [2024-04-16 12:49:55.355892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.442 [2024-04-16 12:49:55.355932] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.442 qpair failed and we were unable to recover it. 
00:21:56.442 [2024-04-16 12:49:55.356140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.442 [2024-04-16 12:49:55.356386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.442 [2024-04-16 12:49:55.356432] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.442 qpair failed and we were unable to recover it. 00:21:56.442 [2024-04-16 12:49:55.356654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.442 [2024-04-16 12:49:55.356862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.442 [2024-04-16 12:49:55.356886] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.442 qpair failed and we were unable to recover it. 00:21:56.442 [2024-04-16 12:49:55.357128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.442 [2024-04-16 12:49:55.357340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.442 [2024-04-16 12:49:55.357362] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.442 qpair failed and we were unable to recover it. 00:21:56.442 [2024-04-16 12:49:55.357632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.442 [2024-04-16 12:49:55.357787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.442 [2024-04-16 12:49:55.357811] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.442 qpair failed and we were unable to recover it. 00:21:56.442 [2024-04-16 12:49:55.358110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.442 [2024-04-16 12:49:55.358345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.442 [2024-04-16 12:49:55.358369] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.442 qpair failed and we were unable to recover it. 00:21:56.442 [2024-04-16 12:49:55.358587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.442 [2024-04-16 12:49:55.358778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.442 [2024-04-16 12:49:55.358803] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.442 qpair failed and we were unable to recover it. 00:21:56.442 [2024-04-16 12:49:55.358993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.442 [2024-04-16 12:49:55.359143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.442 [2024-04-16 12:49:55.359166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.442 qpair failed and we were unable to recover it. 
00:21:56.442 [2024-04-16 12:49:55.359380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.442 [2024-04-16 12:49:55.359617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.442 [2024-04-16 12:49:55.359658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.442 qpair failed and we were unable to recover it. 00:21:56.442 [2024-04-16 12:49:55.359815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.442 [2024-04-16 12:49:55.359995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.442 [2024-04-16 12:49:55.360018] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.442 qpair failed and we were unable to recover it. 00:21:56.442 [2024-04-16 12:49:55.360185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.442 [2024-04-16 12:49:55.360331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.442 [2024-04-16 12:49:55.360355] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.442 qpair failed and we were unable to recover it. 00:21:56.442 [2024-04-16 12:49:55.360520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.442 [2024-04-16 12:49:55.360668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.442 [2024-04-16 12:49:55.360694] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.442 qpair failed and we were unable to recover it. 00:21:56.442 [2024-04-16 12:49:55.360892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.442 [2024-04-16 12:49:55.361112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.442 [2024-04-16 12:49:55.361135] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.442 qpair failed and we were unable to recover it. 00:21:56.442 [2024-04-16 12:49:55.361371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.442 [2024-04-16 12:49:55.361549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.442 [2024-04-16 12:49:55.361580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.442 qpair failed and we were unable to recover it. 00:21:56.442 [2024-04-16 12:49:55.361792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.442 [2024-04-16 12:49:55.361970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.443 [2024-04-16 12:49:55.361994] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.443 qpair failed and we were unable to recover it. 
00:21:56.443 [2024-04-16 12:49:55.362234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.443 [2024-04-16 12:49:55.362433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.443 [2024-04-16 12:49:55.362456] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.443 qpair failed and we were unable to recover it. 00:21:56.443 [2024-04-16 12:49:55.362683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.443 [2024-04-16 12:49:55.362844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.443 [2024-04-16 12:49:55.362882] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.443 qpair failed and we were unable to recover it. 00:21:56.443 [2024-04-16 12:49:55.363088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.443 [2024-04-16 12:49:55.363297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.443 [2024-04-16 12:49:55.363320] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.443 qpair failed and we were unable to recover it. 00:21:56.443 [2024-04-16 12:49:55.363505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.443 [2024-04-16 12:49:55.363734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.443 [2024-04-16 12:49:55.363759] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.443 qpair failed and we were unable to recover it. 00:21:56.443 [2024-04-16 12:49:55.363932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.443 [2024-04-16 12:49:55.364091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.443 [2024-04-16 12:49:55.364114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.443 qpair failed and we were unable to recover it. 00:21:56.443 [2024-04-16 12:49:55.364316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.443 [2024-04-16 12:49:55.364462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.443 [2024-04-16 12:49:55.364484] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.443 qpair failed and we were unable to recover it. 00:21:56.443 [2024-04-16 12:49:55.364663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.443 [2024-04-16 12:49:55.364837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.443 [2024-04-16 12:49:55.364862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.443 qpair failed and we were unable to recover it. 
00:21:56.443 [2024-04-16 12:49:55.365034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.443 [2024-04-16 12:49:55.365184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.443 [2024-04-16 12:49:55.365206] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.443 qpair failed and we were unable to recover it. 00:21:56.443 [2024-04-16 12:49:55.365388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.443 [2024-04-16 12:49:55.365576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.443 [2024-04-16 12:49:55.365600] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.443 qpair failed and we were unable to recover it. 00:21:56.443 [2024-04-16 12:49:55.365793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.443 [2024-04-16 12:49:55.366006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.443 [2024-04-16 12:49:55.366028] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.443 qpair failed and we were unable to recover it. 00:21:56.443 [2024-04-16 12:49:55.366239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.443 [2024-04-16 12:49:55.366422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.443 [2024-04-16 12:49:55.366446] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.443 qpair failed and we were unable to recover it. 00:21:56.443 [2024-04-16 12:49:55.366661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.443 [2024-04-16 12:49:55.366791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.443 [2024-04-16 12:49:55.366815] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.443 qpair failed and we were unable to recover it. 00:21:56.443 [2024-04-16 12:49:55.366989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.443 [2024-04-16 12:49:55.367110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.443 [2024-04-16 12:49:55.367133] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.443 qpair failed and we were unable to recover it. 00:21:56.443 [2024-04-16 12:49:55.367310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.443 [2024-04-16 12:49:55.367459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.443 [2024-04-16 12:49:55.367485] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.443 qpair failed and we were unable to recover it. 
00:21:56.443 [2024-04-16 12:49:55.367671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.443 [2024-04-16 12:49:55.367865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.443 [2024-04-16 12:49:55.367888] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.443 qpair failed and we were unable to recover it. 00:21:56.443 [2024-04-16 12:49:55.368066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.443 [2024-04-16 12:49:55.368246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.443 [2024-04-16 12:49:55.368269] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.443 qpair failed and we were unable to recover it. 00:21:56.443 [2024-04-16 12:49:55.368468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.443 [2024-04-16 12:49:55.368619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.443 [2024-04-16 12:49:55.368644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.443 qpair failed and we were unable to recover it. 00:21:56.443 [2024-04-16 12:49:55.368843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.443 [2024-04-16 12:49:55.369072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.443 [2024-04-16 12:49:55.369096] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.443 qpair failed and we were unable to recover it. 00:21:56.443 [2024-04-16 12:49:55.369308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.443 [2024-04-16 12:49:55.369507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.443 [2024-04-16 12:49:55.369531] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.443 qpair failed and we were unable to recover it. 00:21:56.443 [2024-04-16 12:49:55.369759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.443 [2024-04-16 12:49:55.370010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.443 [2024-04-16 12:49:55.370033] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.443 qpair failed and we were unable to recover it. 00:21:56.443 [2024-04-16 12:49:55.370217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.443 [2024-04-16 12:49:55.370454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.443 [2024-04-16 12:49:55.370481] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.443 qpair failed and we were unable to recover it. 
00:21:56.443 [2024-04-16 12:49:55.370739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.443 [2024-04-16 12:49:55.370971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.443 [2024-04-16 12:49:55.370994] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.443 qpair failed and we were unable to recover it.
00:21:56.443 [2024-04-16 12:49:55.371229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.443 [2024-04-16 12:49:55.371430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.443 [2024-04-16 12:49:55.371453] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.443 qpair failed and we were unable to recover it.
00:21:56.443 [2024-04-16 12:49:55.371696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.443 [2024-04-16 12:49:55.371869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.443 [2024-04-16 12:49:55.371893] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.443 qpair failed and we were unable to recover it.
00:21:56.443 [2024-04-16 12:49:55.372062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.443 [2024-04-16 12:49:55.372224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.443 [2024-04-16 12:49:55.372248] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.443 qpair failed and we were unable to recover it.
00:21:56.443 [2024-04-16 12:49:55.372457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.443 [2024-04-16 12:49:55.372662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.443 [2024-04-16 12:49:55.372688] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.443 qpair failed and we were unable to recover it.
00:21:56.443 [2024-04-16 12:49:55.372865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.443 [2024-04-16 12:49:55.373053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.443 [2024-04-16 12:49:55.373075] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.443 qpair failed and we were unable to recover it.
00:21:56.443 [2024-04-16 12:49:55.373267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.443 [2024-04-16 12:49:55.373447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.444 [2024-04-16 12:49:55.373470] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.444 qpair failed and we were unable to recover it.
00:21:56.444 [2024-04-16 12:49:55.373668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.444 [2024-04-16 12:49:55.373901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.444 [2024-04-16 12:49:55.373924] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.444 qpair failed and we were unable to recover it.
00:21:56.444 [2024-04-16 12:49:55.374171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.444 [2024-04-16 12:49:55.374361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.444 [2024-04-16 12:49:55.374383] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.444 qpair failed and we were unable to recover it.
00:21:56.444 [2024-04-16 12:49:55.374578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.444 [2024-04-16 12:49:55.374744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.444 [2024-04-16 12:49:55.374770] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.444 qpair failed and we were unable to recover it.
00:21:56.444 [2024-04-16 12:49:55.374981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.444 [2024-04-16 12:49:55.375182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.444 [2024-04-16 12:49:55.375204] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.444 qpair failed and we were unable to recover it.
00:21:56.444 [2024-04-16 12:49:55.375411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.444 [2024-04-16 12:49:55.375625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.444 [2024-04-16 12:49:55.375651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.444 qpair failed and we were unable to recover it.
00:21:56.444 [2024-04-16 12:49:55.375842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.444 [2024-04-16 12:49:55.376005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.444 [2024-04-16 12:49:55.376028] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.444 qpair failed and we were unable to recover it.
00:21:56.444 [2024-04-16 12:49:55.376202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.444 [2024-04-16 12:49:55.376327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.444 [2024-04-16 12:49:55.376351] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.444 qpair failed and we were unable to recover it.
00:21:56.444 [2024-04-16 12:49:55.376503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.444 [2024-04-16 12:49:55.376661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.444 [2024-04-16 12:49:55.376687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.444 qpair failed and we were unable to recover it.
00:21:56.444 [2024-04-16 12:49:55.376859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.444 [2024-04-16 12:49:55.377037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.444 [2024-04-16 12:49:55.377060] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.444 qpair failed and we were unable to recover it.
00:21:56.444 [2024-04-16 12:49:55.377218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.444 [2024-04-16 12:49:55.377370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.444 [2024-04-16 12:49:55.377408] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.444 qpair failed and we were unable to recover it.
00:21:56.444 [2024-04-16 12:49:55.377576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.444 [2024-04-16 12:49:55.377742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.444 [2024-04-16 12:49:55.377768] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.444 qpair failed and we were unable to recover it.
00:21:56.444 [2024-04-16 12:49:55.377951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.444 [2024-04-16 12:49:55.378180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.444 [2024-04-16 12:49:55.378218] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.444 qpair failed and we were unable to recover it.
00:21:56.444 [2024-04-16 12:49:55.378425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.444 [2024-04-16 12:49:55.378625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.444 [2024-04-16 12:49:55.378651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.444 qpair failed and we were unable to recover it.
00:21:56.444 [2024-04-16 12:49:55.378817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.444 [2024-04-16 12:49:55.379080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.444 [2024-04-16 12:49:55.379103] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.444 qpair failed and we were unable to recover it.
00:21:56.444 [2024-04-16 12:49:55.379330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.444 [2024-04-16 12:49:55.379530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.444 [2024-04-16 12:49:55.379554] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.444 qpair failed and we were unable to recover it.
00:21:56.444 [2024-04-16 12:49:55.379777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.444 [2024-04-16 12:49:55.379988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.444 [2024-04-16 12:49:55.380012] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.444 qpair failed and we were unable to recover it.
00:21:56.444 [2024-04-16 12:49:55.380287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.444 [2024-04-16 12:49:55.380499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.444 [2024-04-16 12:49:55.380524] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.444 qpair failed and we were unable to recover it.
00:21:56.444 [2024-04-16 12:49:55.380731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.444 [2024-04-16 12:49:55.380936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.444 [2024-04-16 12:49:55.380959] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.444 qpair failed and we were unable to recover it.
00:21:56.444 [2024-04-16 12:49:55.381202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.444 [2024-04-16 12:49:55.381358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.444 [2024-04-16 12:49:55.381381] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.444 qpair failed and we were unable to recover it.
00:21:56.444 [2024-04-16 12:49:55.381633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.444 [2024-04-16 12:49:55.381870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.444 [2024-04-16 12:49:55.381894] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.444 qpair failed and we were unable to recover it.
00:21:56.444 [2024-04-16 12:49:55.382138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.444 [2024-04-16 12:49:55.382300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.444 [2024-04-16 12:49:55.382323] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.444 qpair failed and we were unable to recover it.
00:21:56.444 [2024-04-16 12:49:55.382578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.444 [2024-04-16 12:49:55.382800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.444 [2024-04-16 12:49:55.382825] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.445 qpair failed and we were unable to recover it.
00:21:56.445 [2024-04-16 12:49:55.383053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.445 [2024-04-16 12:49:55.383266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.445 [2024-04-16 12:49:55.383289] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.445 qpair failed and we were unable to recover it.
00:21:56.445 [2024-04-16 12:49:55.383520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.445 [2024-04-16 12:49:55.383761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.445 [2024-04-16 12:49:55.383787] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.445 qpair failed and we were unable to recover it.
00:21:56.445 [2024-04-16 12:49:55.384009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.445 [2024-04-16 12:49:55.384220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.445 [2024-04-16 12:49:55.384241] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.445 qpair failed and we were unable to recover it.
00:21:56.445 [2024-04-16 12:49:55.384447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.445 [2024-04-16 12:49:55.384677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.445 [2024-04-16 12:49:55.384702] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.445 qpair failed and we were unable to recover it.
00:21:56.445 [2024-04-16 12:49:55.384900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.445 [2024-04-16 12:49:55.385050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.445 [2024-04-16 12:49:55.385073] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.445 qpair failed and we were unable to recover it.
00:21:56.445 [2024-04-16 12:49:55.385195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.445 [2024-04-16 12:49:55.385350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.445 [2024-04-16 12:49:55.385374] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.445 qpair failed and we were unable to recover it.
00:21:56.445 [2024-04-16 12:49:55.385572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.445 [2024-04-16 12:49:55.385763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.445 [2024-04-16 12:49:55.385787] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.445 qpair failed and we were unable to recover it.
00:21:56.445 [2024-04-16 12:49:55.385930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.445 [2024-04-16 12:49:55.386066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.445 [2024-04-16 12:49:55.386091] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.445 qpair failed and we were unable to recover it.
00:21:56.445 [2024-04-16 12:49:55.386242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.445 [2024-04-16 12:49:55.386395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.445 [2024-04-16 12:49:55.386433] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.445 qpair failed and we were unable to recover it.
00:21:56.445 [2024-04-16 12:49:55.386594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.445 [2024-04-16 12:49:55.386722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.445 [2024-04-16 12:49:55.386747] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.445 qpair failed and we were unable to recover it.
00:21:56.445 [2024-04-16 12:49:55.386950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.445 [2024-04-16 12:49:55.387112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.445 [2024-04-16 12:49:55.387135] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.445 qpair failed and we were unable to recover it.
00:21:56.445 [2024-04-16 12:49:55.387323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.445 [2024-04-16 12:49:55.387490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.445 [2024-04-16 12:49:55.387512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.445 qpair failed and we were unable to recover it.
00:21:56.445 [2024-04-16 12:49:55.387689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.445 [2024-04-16 12:49:55.387811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.445 [2024-04-16 12:49:55.387836] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.445 qpair failed and we were unable to recover it.
00:21:56.445 [2024-04-16 12:49:55.388011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.445 [2024-04-16 12:49:55.388186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.445 [2024-04-16 12:49:55.388208] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.445 qpair failed and we were unable to recover it.
00:21:56.445 [2024-04-16 12:49:55.388411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.445 [2024-04-16 12:49:55.388552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.445 [2024-04-16 12:49:55.388600] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.445 qpair failed and we were unable to recover it.
00:21:56.445 [2024-04-16 12:49:55.388781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.445 [2024-04-16 12:49:55.389008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.445 [2024-04-16 12:49:55.389031] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.445 qpair failed and we were unable to recover it.
00:21:56.445 [2024-04-16 12:49:55.389245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.445 [2024-04-16 12:49:55.389438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.445 [2024-04-16 12:49:55.389460] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.445 qpair failed and we were unable to recover it.
00:21:56.445 [2024-04-16 12:49:55.389719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.445 [2024-04-16 12:49:55.389930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.445 [2024-04-16 12:49:55.389953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.445 qpair failed and we were unable to recover it.
00:21:56.445 [2024-04-16 12:49:55.390147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.445 [2024-04-16 12:49:55.390412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.445 [2024-04-16 12:49:55.390436] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.445 qpair failed and we were unable to recover it.
00:21:56.445 [2024-04-16 12:49:55.390659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.445 [2024-04-16 12:49:55.390860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.445 [2024-04-16 12:49:55.390884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.445 qpair failed and we were unable to recover it.
00:21:56.445 [2024-04-16 12:49:55.391029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.445 [2024-04-16 12:49:55.391205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.445 [2024-04-16 12:49:55.391229] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.445 qpair failed and we were unable to recover it.
00:21:56.445 [2024-04-16 12:49:55.391358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.445 [2024-04-16 12:49:55.391488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.445 [2024-04-16 12:49:55.391515] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.445 qpair failed and we were unable to recover it.
00:21:56.446 [2024-04-16 12:49:55.391682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.446 [2024-04-16 12:49:55.391813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.446 [2024-04-16 12:49:55.391851] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.446 qpair failed and we were unable to recover it.
00:21:56.446 [2024-04-16 12:49:55.392003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.446 [2024-04-16 12:49:55.392128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.446 [2024-04-16 12:49:55.392151] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.446 qpair failed and we were unable to recover it.
00:21:56.446 [2024-04-16 12:49:55.392367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.446 [2024-04-16 12:49:55.392572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.446 [2024-04-16 12:49:55.392613] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.446 qpair failed and we were unable to recover it.
00:21:56.446 [2024-04-16 12:49:55.392851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.446 [2024-04-16 12:49:55.393075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.446 [2024-04-16 12:49:55.393098] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.446 qpair failed and we were unable to recover it.
00:21:56.446 [2024-04-16 12:49:55.393344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.446 [2024-04-16 12:49:55.393608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.446 [2024-04-16 12:49:55.393634] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.446 qpair failed and we were unable to recover it.
00:21:56.446 [2024-04-16 12:49:55.393889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.446 [2024-04-16 12:49:55.394152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.446 [2024-04-16 12:49:55.394175] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.446 qpair failed and we were unable to recover it.
00:21:56.446 [2024-04-16 12:49:55.394396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.446 [2024-04-16 12:49:55.394602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.446 [2024-04-16 12:49:55.394627] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.446 qpair failed and we were unable to recover it.
00:21:56.446 [2024-04-16 12:49:55.394846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.446 [2024-04-16 12:49:55.395060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.446 [2024-04-16 12:49:55.395083] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.446 qpair failed and we were unable to recover it.
00:21:56.446 [2024-04-16 12:49:55.395295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.446 [2024-04-16 12:49:55.395484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.446 [2024-04-16 12:49:55.395506] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.446 qpair failed and we were unable to recover it.
00:21:56.446 [2024-04-16 12:49:55.395701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.446 [2024-04-16 12:49:55.395935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.446 [2024-04-16 12:49:55.395977] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.446 qpair failed and we were unable to recover it.
00:21:56.446 [2024-04-16 12:49:55.396248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.446 [2024-04-16 12:49:55.396509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.446 [2024-04-16 12:49:55.396532] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.446 qpair failed and we were unable to recover it.
00:21:56.446 [2024-04-16 12:49:55.396742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.446 [2024-04-16 12:49:55.396976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.446 [2024-04-16 12:49:55.396999] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.446 qpair failed and we were unable to recover it.
00:21:56.446 [2024-04-16 12:49:55.397218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.446 [2024-04-16 12:49:55.397469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.446 [2024-04-16 12:49:55.397504] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.446 qpair failed and we were unable to recover it.
00:21:56.446 [2024-04-16 12:49:55.397737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.446 [2024-04-16 12:49:55.397945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.446 [2024-04-16 12:49:55.397969] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.446 qpair failed and we were unable to recover it.
00:21:56.446 [2024-04-16 12:49:55.398206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.446 [2024-04-16 12:49:55.398387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.446 [2024-04-16 12:49:55.398410] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.446 qpair failed and we were unable to recover it.
00:21:56.446 [2024-04-16 12:49:55.398619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.446 [2024-04-16 12:49:55.398768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.446 [2024-04-16 12:49:55.398792] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.446 qpair failed and we were unable to recover it.
00:21:56.446 [2024-04-16 12:49:55.399063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.446 [2024-04-16 12:49:55.399266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.447 [2024-04-16 12:49:55.399291] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.447 qpair failed and we were unable to recover it.
00:21:56.447 [2024-04-16 12:49:55.399528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.447 [2024-04-16 12:49:55.399781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.447 [2024-04-16 12:49:55.399806] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.447 qpair failed and we were unable to recover it.
00:21:56.447 [2024-04-16 12:49:55.399973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.447 [2024-04-16 12:49:55.400183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.447 [2024-04-16 12:49:55.400206] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.447 qpair failed and we were unable to recover it.
00:21:56.447 [2024-04-16 12:49:55.400418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.447 [2024-04-16 12:49:55.400618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.447 [2024-04-16 12:49:55.400658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.447 qpair failed and we were unable to recover it.
00:21:56.447 [2024-04-16 12:49:55.400906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.447 [2024-04-16 12:49:55.401109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.447 [2024-04-16 12:49:55.401131] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.447 qpair failed and we were unable to recover it.
00:21:56.447 [2024-04-16 12:49:55.401297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.447 [2024-04-16 12:49:55.401496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.447 [2024-04-16 12:49:55.401518] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.447 qpair failed and we were unable to recover it.
00:21:56.447 [2024-04-16 12:49:55.401696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.447 [2024-04-16 12:49:55.401864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.447 [2024-04-16 12:49:55.401894] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.447 qpair failed and we were unable to recover it.
00:21:56.447 [2024-04-16 12:49:55.402114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.447 [2024-04-16 12:49:55.402306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.447 [2024-04-16 12:49:55.402329] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.447 qpair failed and we were unable to recover it.
00:21:56.447 [2024-04-16 12:49:55.402540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.447 [2024-04-16 12:49:55.402721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.447 [2024-04-16 12:49:55.402745] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.447 qpair failed and we were unable to recover it.
00:21:56.447 [2024-04-16 12:49:55.402982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.447 [2024-04-16 12:49:55.403242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.447 [2024-04-16 12:49:55.403266] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.447 qpair failed and we were unable to recover it.
00:21:56.447 [2024-04-16 12:49:55.403578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.447 [2024-04-16 12:49:55.403796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.447 [2024-04-16 12:49:55.403820] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.447 qpair failed and we were unable to recover it.
00:21:56.447 [2024-04-16 12:49:55.404044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.447 [2024-04-16 12:49:55.404198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.447 [2024-04-16 12:49:55.404234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.447 qpair failed and we were unable to recover it.
00:21:56.447 [2024-04-16 12:49:55.404454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.447 [2024-04-16 12:49:55.404701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.447 [2024-04-16 12:49:55.404726] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.447 qpair failed and we were unable to recover it.
00:21:56.447 [2024-04-16 12:49:55.404904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.447 [2024-04-16 12:49:55.405089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.447 [2024-04-16 12:49:55.405112] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.447 qpair failed and we were unable to recover it.
00:21:56.447 [2024-04-16 12:49:55.405335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.447 [2024-04-16 12:49:55.405570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.447 [2024-04-16 12:49:55.405609] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.447 qpair failed and we were unable to recover it.
00:21:56.447 [2024-04-16 12:49:55.405844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.447 [2024-04-16 12:49:55.406059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.447 [2024-04-16 12:49:55.406082] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.447 qpair failed and we were unable to recover it.
00:21:56.447 [2024-04-16 12:49:55.406341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.447 [2024-04-16 12:49:55.406555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.447 [2024-04-16 12:49:55.406587] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.447 qpair failed and we were unable to recover it.
00:21:56.447 [2024-04-16 12:49:55.406785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.447 [2024-04-16 12:49:55.406978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.447 [2024-04-16 12:49:55.407017] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.447 qpair failed and we were unable to recover it.
00:21:56.447 [2024-04-16 12:49:55.407257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.447 [2024-04-16 12:49:55.407426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.447 [2024-04-16 12:49:55.407448] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.447 qpair failed and we were unable to recover it.
00:21:56.447 [2024-04-16 12:49:55.407667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.447 [2024-04-16 12:49:55.407909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.447 [2024-04-16 12:49:55.407947] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.447 qpair failed and we were unable to recover it.
00:21:56.447 [2024-04-16 12:49:55.408152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.447 [2024-04-16 12:49:55.408340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.447 [2024-04-16 12:49:55.408363] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.447 qpair failed and we were unable to recover it.
00:21:56.447 [2024-04-16 12:49:55.408522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.447 [2024-04-16 12:49:55.408764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.447 [2024-04-16 12:49:55.408789] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.447 qpair failed and we were unable to recover it.
00:21:56.447 [2024-04-16 12:49:55.408997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.447 [2024-04-16 12:49:55.409205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.447 [2024-04-16 12:49:55.409228] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.447 qpair failed and we were unable to recover it.
00:21:56.447 [2024-04-16 12:49:55.409448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.447 [2024-04-16 12:49:55.409616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.447 [2024-04-16 12:49:55.409640] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.447 qpair failed and we were unable to recover it.
00:21:56.447 [2024-04-16 12:49:55.409838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.447 [2024-04-16 12:49:55.410026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.447 [2024-04-16 12:49:55.410049] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.447 qpair failed and we were unable to recover it.
00:21:56.447 [2024-04-16 12:49:55.410251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.447 [2024-04-16 12:49:55.410481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.447 [2024-04-16 12:49:55.410504] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.447 qpair failed and we were unable to recover it.
00:21:56.447 [2024-04-16 12:49:55.410738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.447 [2024-04-16 12:49:55.410903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.447 [2024-04-16 12:49:55.410926] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.447 qpair failed and we were unable to recover it.
00:21:56.447 [2024-04-16 12:49:55.411109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.447 [2024-04-16 12:49:55.411307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.447 [2024-04-16 12:49:55.411331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.447 qpair failed and we were unable to recover it.
00:21:56.447 [2024-04-16 12:49:55.411523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.447 [2024-04-16 12:49:55.411763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.447 [2024-04-16 12:49:55.411787] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.447 qpair failed and we were unable to recover it.
00:21:56.447 [2024-04-16 12:49:55.412004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.447 [2024-04-16 12:49:55.412174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.448 [2024-04-16 12:49:55.412197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.448 qpair failed and we were unable to recover it.
00:21:56.448 [2024-04-16 12:49:55.412435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.448 [2024-04-16 12:49:55.412638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.448 [2024-04-16 12:49:55.412663] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.448 qpair failed and we were unable to recover it.
00:21:56.448 [2024-04-16 12:49:55.412866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.448 [2024-04-16 12:49:55.413124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.448 [2024-04-16 12:49:55.413147] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.448 qpair failed and we were unable to recover it.
00:21:56.448 [2024-04-16 12:49:55.413355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.448 [2024-04-16 12:49:55.413570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.448 [2024-04-16 12:49:55.413594] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.448 qpair failed and we were unable to recover it.
00:21:56.448 [2024-04-16 12:49:55.413794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.448 [2024-04-16 12:49:55.413967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.448 [2024-04-16 12:49:55.414005] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.448 qpair failed and we were unable to recover it.
00:21:56.448 [2024-04-16 12:49:55.414248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.448 [2024-04-16 12:49:55.414416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.448 [2024-04-16 12:49:55.414458] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.448 qpair failed and we were unable to recover it.
00:21:56.448 [2024-04-16 12:49:55.414675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.448 [2024-04-16 12:49:55.414890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.448 [2024-04-16 12:49:55.414928] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.448 qpair failed and we were unable to recover it.
00:21:56.448 [2024-04-16 12:49:55.415108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.448 [2024-04-16 12:49:55.415278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.448 [2024-04-16 12:49:55.415315] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.448 qpair failed and we were unable to recover it.
00:21:56.448 [2024-04-16 12:49:55.415570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.448 [2024-04-16 12:49:55.415757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.448 [2024-04-16 12:49:55.415781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.448 qpair failed and we were unable to recover it.
00:21:56.448 [2024-04-16 12:49:55.415986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.448 [2024-04-16 12:49:55.416141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.448 [2024-04-16 12:49:55.416163] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.448 qpair failed and we were unable to recover it.
00:21:56.448 [2024-04-16 12:49:55.416406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.448 [2024-04-16 12:49:55.416600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.448 [2024-04-16 12:49:55.416624] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.448 qpair failed and we were unable to recover it.
00:21:56.448 [2024-04-16 12:49:55.416927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.448 [2024-04-16 12:49:55.417206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.448 [2024-04-16 12:49:55.417229] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.448 qpair failed and we were unable to recover it.
00:21:56.448 [2024-04-16 12:49:55.417478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.448 [2024-04-16 12:49:55.417705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.448 [2024-04-16 12:49:55.417744] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.448 qpair failed and we were unable to recover it.
00:21:56.448 [2024-04-16 12:49:55.417935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.448 [2024-04-16 12:49:55.418161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.448 [2024-04-16 12:49:55.418184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.448 qpair failed and we were unable to recover it.
00:21:56.448 [2024-04-16 12:49:55.418390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.448 [2024-04-16 12:49:55.418597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.448 [2024-04-16 12:49:55.418632] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.448 qpair failed and we were unable to recover it.
00:21:56.448 [2024-04-16 12:49:55.418896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.448 [2024-04-16 12:49:55.419095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.448 [2024-04-16 12:49:55.419123] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.448 qpair failed and we were unable to recover it.
00:21:56.448 [2024-04-16 12:49:55.419368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.448 [2024-04-16 12:49:55.419625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.448 [2024-04-16 12:49:55.419649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.448 qpair failed and we were unable to recover it.
00:21:56.448 [2024-04-16 12:49:55.419888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.448 [2024-04-16 12:49:55.420051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.448 [2024-04-16 12:49:55.420073] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.448 qpair failed and we were unable to recover it.
00:21:56.448 [2024-04-16 12:49:55.420287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.448 [2024-04-16 12:49:55.420456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.448 [2024-04-16 12:49:55.420479] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.448 qpair failed and we were unable to recover it.
00:21:56.448 [2024-04-16 12:49:55.420703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.448 [2024-04-16 12:49:55.420872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.448 [2024-04-16 12:49:55.420909] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.448 qpair failed and we were unable to recover it.
00:21:56.448 [2024-04-16 12:49:55.421120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.448 [2024-04-16 12:49:55.421311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.448 [2024-04-16 12:49:55.421348] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.448 qpair failed and we were unable to recover it.
00:21:56.448 [2024-04-16 12:49:55.421589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.448 [2024-04-16 12:49:55.421764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.448 [2024-04-16 12:49:55.421789] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.448 qpair failed and we were unable to recover it.
00:21:56.448 [2024-04-16 12:49:55.421960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.448 [2024-04-16 12:49:55.422123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.448 [2024-04-16 12:49:55.422146] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.448 qpair failed and we were unable to recover it.
00:21:56.448 [2024-04-16 12:49:55.422399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.448 [2024-04-16 12:49:55.422613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.448 [2024-04-16 12:49:55.422638] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.448 qpair failed and we were unable to recover it.
00:21:56.448 [2024-04-16 12:49:55.422880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.448 [2024-04-16 12:49:55.423168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.448 [2024-04-16 12:49:55.423192] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.448 qpair failed and we were unable to recover it.
00:21:56.448 [2024-04-16 12:49:55.423436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.448 [2024-04-16 12:49:55.423671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.448 [2024-04-16 12:49:55.423695] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.448 qpair failed and we were unable to recover it.
00:21:56.448 [2024-04-16 12:49:55.423922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.448 [2024-04-16 12:49:55.424153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.448 [2024-04-16 12:49:55.424176] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.448 qpair failed and we were unable to recover it.
00:21:56.448 [2024-04-16 12:49:55.424371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.448 [2024-04-16 12:49:55.424612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.448 [2024-04-16 12:49:55.424638] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.448 qpair failed and we were unable to recover it.
00:21:56.448 [2024-04-16 12:49:55.424828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.448 [2024-04-16 12:49:55.425066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.448 [2024-04-16 12:49:55.425089] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.448 qpair failed and we were unable to recover it.
00:21:56.448 [2024-04-16 12:49:55.425242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.448 [2024-04-16 12:49:55.425394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.449 [2024-04-16 12:49:55.425442] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.449 qpair failed and we were unable to recover it.
00:21:56.449 [2024-04-16 12:49:55.425630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.449 [2024-04-16 12:49:55.425832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.449 [2024-04-16 12:49:55.425856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.449 qpair failed and we were unable to recover it.
00:21:56.449 [2024-04-16 12:49:55.426099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.449 [2024-04-16 12:49:55.426259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.449 [2024-04-16 12:49:55.426281] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.449 qpair failed and we were unable to recover it.
00:21:56.449 [2024-04-16 12:49:55.426525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.449 [2024-04-16 12:49:55.426794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.449 [2024-04-16 12:49:55.426820] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.449 qpair failed and we were unable to recover it.
00:21:56.449 [2024-04-16 12:49:55.427071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.449 [2024-04-16 12:49:55.427357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.449 [2024-04-16 12:49:55.427380] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.449 qpair failed and we were unable to recover it.
00:21:56.449 [2024-04-16 12:49:55.427655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.449 [2024-04-16 12:49:55.427869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.449 [2024-04-16 12:49:55.427892] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.449 qpair failed and we were unable to recover it.
00:21:56.449 [2024-04-16 12:49:55.428110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.449 [2024-04-16 12:49:55.428375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.449 [2024-04-16 12:49:55.428398] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.449 qpair failed and we were unable to recover it.
00:21:56.449 [2024-04-16 12:49:55.428660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.449 [2024-04-16 12:49:55.428867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.449 [2024-04-16 12:49:55.428904] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.449 qpair failed and we were unable to recover it.
00:21:56.449 [2024-04-16 12:49:55.429077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.449 [2024-04-16 12:49:55.429304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.449 [2024-04-16 12:49:55.429343] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.449 qpair failed and we were unable to recover it.
00:21:56.449 [2024-04-16 12:49:55.429540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.449 [2024-04-16 12:49:55.429755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.449 [2024-04-16 12:49:55.429786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.449 qpair failed and we were unable to recover it.
00:21:56.449 [2024-04-16 12:49:55.429966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.449 [2024-04-16 12:49:55.430175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.449 [2024-04-16 12:49:55.430198] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.449 qpair failed and we were unable to recover it.
00:21:56.449 [2024-04-16 12:49:55.430419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.449 [2024-04-16 12:49:55.430606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.449 [2024-04-16 12:49:55.430631] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.449 qpair failed and we were unable to recover it.
00:21:56.449 [2024-04-16 12:49:55.430859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.449 [2024-04-16 12:49:55.431078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.449 [2024-04-16 12:49:55.431101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.449 qpair failed and we were unable to recover it.
00:21:56.449 [2024-04-16 12:49:55.431308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.449 [2024-04-16 12:49:55.431533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.449 [2024-04-16 12:49:55.431579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.449 qpair failed and we were unable to recover it.
00:21:56.449 [2024-04-16 12:49:55.431766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.449 [2024-04-16 12:49:55.431976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.449 [2024-04-16 12:49:55.432001] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.449 qpair failed and we were unable to recover it.
00:21:56.449 [2024-04-16 12:49:55.432242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.449 [2024-04-16 12:49:55.432472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.449 [2024-04-16 12:49:55.432495] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.449 qpair failed and we were unable to recover it.
00:21:56.449 [2024-04-16 12:49:55.432640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.449 [2024-04-16 12:49:55.432763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.449 [2024-04-16 12:49:55.432787] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.449 qpair failed and we were unable to recover it.
00:21:56.449 [2024-04-16 12:49:55.432985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.449 [2024-04-16 12:49:55.433179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.449 [2024-04-16 12:49:55.433203] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.449 qpair failed and we were unable to recover it.
00:21:56.449 [2024-04-16 12:49:55.433416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.449 [2024-04-16 12:49:55.433642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.449 [2024-04-16 12:49:55.433666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.449 qpair failed and we were unable to recover it.
00:21:56.449 [2024-04-16 12:49:55.433880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.449 [2024-04-16 12:49:55.434065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.449 [2024-04-16 12:49:55.434089] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.449 qpair failed and we were unable to recover it.
00:21:56.449 [2024-04-16 12:49:55.434371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.449 [2024-04-16 12:49:55.434628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.449 [2024-04-16 12:49:55.434668] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.449 qpair failed and we were unable to recover it.
00:21:56.449 [2024-04-16 12:49:55.434902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.449 [2024-04-16 12:49:55.435145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.449 [2024-04-16 12:49:55.435169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.449 qpair failed and we were unable to recover it.
00:21:56.449 [2024-04-16 12:49:55.435410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.449 [2024-04-16 12:49:55.435609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.449 [2024-04-16 12:49:55.435633] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.449 qpair failed and we were unable to recover it.
00:21:56.449 [2024-04-16 12:49:55.435816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.449 [2024-04-16 12:49:55.436065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.449 [2024-04-16 12:49:55.436089] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.449 qpair failed and we were unable to recover it.
00:21:56.449 [2024-04-16 12:49:55.436378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.449 [2024-04-16 12:49:55.436660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.449 [2024-04-16 12:49:55.436684] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.449 qpair failed and we were unable to recover it.
00:21:56.449 [2024-04-16 12:49:55.436874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.449 [2024-04-16 12:49:55.437096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.449 [2024-04-16 12:49:55.437120] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.449 qpair failed and we were unable to recover it.
00:21:56.449 [2024-04-16 12:49:55.437341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.449 [2024-04-16 12:49:55.437554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.449 [2024-04-16 12:49:55.437583] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.449 qpair failed and we were unable to recover it.
00:21:56.449 [2024-04-16 12:49:55.437797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.449 [2024-04-16 12:49:55.438030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.449 [2024-04-16 12:49:55.438056] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.449 qpair failed and we were unable to recover it.
00:21:56.449 [2024-04-16 12:49:55.438293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.449 [2024-04-16 12:49:55.438487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.449 [2024-04-16 12:49:55.438511] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.449 qpair failed and we were unable to recover it.
00:21:56.449 [2024-04-16 12:49:55.438769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.449 [2024-04-16 12:49:55.438964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.450 [2024-04-16 12:49:55.438987] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.450 qpair failed and we were unable to recover it. 00:21:56.450 [2024-04-16 12:49:55.439209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.450 [2024-04-16 12:49:55.439407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.450 [2024-04-16 12:49:55.439431] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.450 qpair failed and we were unable to recover it. 00:21:56.450 [2024-04-16 12:49:55.439638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.450 [2024-04-16 12:49:55.439879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.450 [2024-04-16 12:49:55.439903] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.450 qpair failed and we were unable to recover it. 00:21:56.450 [2024-04-16 12:49:55.440096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.450 [2024-04-16 12:49:55.440263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.450 [2024-04-16 12:49:55.440286] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.450 qpair failed and we were unable to recover it. 00:21:56.450 [2024-04-16 12:49:55.440499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.450 [2024-04-16 12:49:55.440729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.450 [2024-04-16 12:49:55.440753] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.450 qpair failed and we were unable to recover it. 00:21:56.450 [2024-04-16 12:49:55.440979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.450 [2024-04-16 12:49:55.441166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.450 [2024-04-16 12:49:55.441191] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.450 qpair failed and we were unable to recover it. 00:21:56.450 [2024-04-16 12:49:55.441436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.450 [2024-04-16 12:49:55.441631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.450 [2024-04-16 12:49:55.441654] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.450 qpair failed and we were unable to recover it. 
00:21:56.450 [2024-04-16 12:49:55.441861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.450 [2024-04-16 12:49:55.442071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.450 [2024-04-16 12:49:55.442095] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.450 qpair failed and we were unable to recover it. 00:21:56.450 [2024-04-16 12:49:55.442344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.450 [2024-04-16 12:49:55.442618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.450 [2024-04-16 12:49:55.442664] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.450 qpair failed and we were unable to recover it. 00:21:56.450 [2024-04-16 12:49:55.442907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.450 [2024-04-16 12:49:55.443094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.450 [2024-04-16 12:49:55.443117] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.450 qpair failed and we were unable to recover it. 00:21:56.450 [2024-04-16 12:49:55.443320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.450 [2024-04-16 12:49:55.443515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.450 [2024-04-16 12:49:55.443538] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.450 qpair failed and we were unable to recover it. 00:21:56.450 [2024-04-16 12:49:55.443761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.450 [2024-04-16 12:49:55.443998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.450 [2024-04-16 12:49:55.444021] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.450 qpair failed and we were unable to recover it. 00:21:56.450 [2024-04-16 12:49:55.444235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.450 [2024-04-16 12:49:55.444480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.450 [2024-04-16 12:49:55.444504] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.450 qpair failed and we were unable to recover it. 00:21:56.450 [2024-04-16 12:49:55.444789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.450 [2024-04-16 12:49:55.445027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.450 [2024-04-16 12:49:55.445052] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.450 qpair failed and we were unable to recover it. 
00:21:56.450 [2024-04-16 12:49:55.445278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.450 [2024-04-16 12:49:55.445499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.450 [2024-04-16 12:49:55.445536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.450 qpair failed and we were unable to recover it. 00:21:56.450 [2024-04-16 12:49:55.445799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.450 [2024-04-16 12:49:55.445976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.450 [2024-04-16 12:49:55.446016] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.450 qpair failed and we were unable to recover it. 00:21:56.450 [2024-04-16 12:49:55.446220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.450 [2024-04-16 12:49:55.446412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.450 [2024-04-16 12:49:55.446436] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.450 qpair failed and we were unable to recover it. 00:21:56.450 [2024-04-16 12:49:55.446684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.450 [2024-04-16 12:49:55.446899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.450 [2024-04-16 12:49:55.446922] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.450 qpair failed and we were unable to recover it. 00:21:56.450 [2024-04-16 12:49:55.447161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.450 [2024-04-16 12:49:55.447426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.450 [2024-04-16 12:49:55.447450] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.450 qpair failed and we were unable to recover it. 00:21:56.450 [2024-04-16 12:49:55.447662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.450 [2024-04-16 12:49:55.447868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.450 [2024-04-16 12:49:55.447891] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.450 qpair failed and we were unable to recover it. 00:21:56.450 [2024-04-16 12:49:55.448156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.450 [2024-04-16 12:49:55.448428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.450 [2024-04-16 12:49:55.448452] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.450 qpair failed and we were unable to recover it. 
00:21:56.450 [2024-04-16 12:49:55.448723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.450 [2024-04-16 12:49:55.448965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.450 [2024-04-16 12:49:55.448990] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.450 qpair failed and we were unable to recover it. 00:21:56.450 [2024-04-16 12:49:55.449242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.450 [2024-04-16 12:49:55.449448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.450 [2024-04-16 12:49:55.449470] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.450 qpair failed and we were unable to recover it. 00:21:56.450 [2024-04-16 12:49:55.449715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.450 [2024-04-16 12:49:55.449952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.450 [2024-04-16 12:49:55.449975] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.450 qpair failed and we were unable to recover it. 00:21:56.450 [2024-04-16 12:49:55.450189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.450 [2024-04-16 12:49:55.450402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.450 [2024-04-16 12:49:55.450426] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.450 qpair failed and we were unable to recover it. 00:21:56.450 [2024-04-16 12:49:55.450642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.450 [2024-04-16 12:49:55.450895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.450 [2024-04-16 12:49:55.450918] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.450 qpair failed and we were unable to recover it. 00:21:56.450 [2024-04-16 12:49:55.451126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.450 [2024-04-16 12:49:55.451335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.450 [2024-04-16 12:49:55.451359] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.450 qpair failed and we were unable to recover it. 00:21:56.450 [2024-04-16 12:49:55.451587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.450 [2024-04-16 12:49:55.451818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.450 [2024-04-16 12:49:55.451842] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.450 qpair failed and we were unable to recover it. 
00:21:56.450 [2024-04-16 12:49:55.452019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.450 [2024-04-16 12:49:55.452212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.450 [2024-04-16 12:49:55.452234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.451 qpair failed and we were unable to recover it. 00:21:56.451 [2024-04-16 12:49:55.452475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.451 [2024-04-16 12:49:55.452686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.451 [2024-04-16 12:49:55.452710] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.451 qpair failed and we were unable to recover it. 00:21:56.451 [2024-04-16 12:49:55.452897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.451 [2024-04-16 12:49:55.453145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.451 [2024-04-16 12:49:55.453169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.451 qpair failed and we were unable to recover it. 00:21:56.451 [2024-04-16 12:49:55.453424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.451 [2024-04-16 12:49:55.453632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.451 [2024-04-16 12:49:55.453657] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.451 qpair failed and we were unable to recover it. 00:21:56.451 [2024-04-16 12:49:55.453860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.451 [2024-04-16 12:49:55.454045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.451 [2024-04-16 12:49:55.454069] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.451 qpair failed and we were unable to recover it. 00:21:56.451 [2024-04-16 12:49:55.454322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.451 [2024-04-16 12:49:55.454560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.451 [2024-04-16 12:49:55.454591] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.451 qpair failed and we were unable to recover it. 00:21:56.451 [2024-04-16 12:49:55.454836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.451 [2024-04-16 12:49:55.455070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.451 [2024-04-16 12:49:55.455093] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.451 qpair failed and we were unable to recover it. 
00:21:56.451 [2024-04-16 12:49:55.455267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.451 [2024-04-16 12:49:55.455460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.451 [2024-04-16 12:49:55.455483] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.451 qpair failed and we were unable to recover it. 00:21:56.451 [2024-04-16 12:49:55.455652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.451 [2024-04-16 12:49:55.455852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.451 [2024-04-16 12:49:55.455875] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.451 qpair failed and we were unable to recover it. 00:21:56.451 [2024-04-16 12:49:55.456040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.451 [2024-04-16 12:49:55.456278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.451 [2024-04-16 12:49:55.456302] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.451 qpair failed and we were unable to recover it. 00:21:56.451 [2024-04-16 12:49:55.456546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.451 [2024-04-16 12:49:55.456760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.451 [2024-04-16 12:49:55.456785] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.451 qpair failed and we were unable to recover it. 00:21:56.451 [2024-04-16 12:49:55.456947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.451 [2024-04-16 12:49:55.457143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.451 [2024-04-16 12:49:55.457166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.451 qpair failed and we were unable to recover it. 00:21:56.451 [2024-04-16 12:49:55.457380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.451 [2024-04-16 12:49:55.457537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.451 [2024-04-16 12:49:55.457560] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.451 qpair failed and we were unable to recover it. 00:21:56.451 [2024-04-16 12:49:55.457794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.451 [2024-04-16 12:49:55.458023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.451 [2024-04-16 12:49:55.458047] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.451 qpair failed and we were unable to recover it. 
00:21:56.451 [2024-04-16 12:49:55.458275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.451 [2024-04-16 12:49:55.458479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.451 [2024-04-16 12:49:55.458516] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.451 qpair failed and we were unable to recover it. 00:21:56.451 [2024-04-16 12:49:55.458713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.451 [2024-04-16 12:49:55.458955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.451 [2024-04-16 12:49:55.458978] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.451 qpair failed and we were unable to recover it. 00:21:56.451 [2024-04-16 12:49:55.459221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.451 [2024-04-16 12:49:55.459414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.451 [2024-04-16 12:49:55.459437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.451 qpair failed and we were unable to recover it. 00:21:56.451 [2024-04-16 12:49:55.459687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.451 [2024-04-16 12:49:55.459887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.451 [2024-04-16 12:49:55.459910] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.451 qpair failed and we were unable to recover it. 00:21:56.451 [2024-04-16 12:49:55.460150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.451 [2024-04-16 12:49:55.460412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.451 [2024-04-16 12:49:55.460437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.451 qpair failed and we were unable to recover it. 00:21:56.451 [2024-04-16 12:49:55.460721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.451 [2024-04-16 12:49:55.460935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.451 [2024-04-16 12:49:55.460958] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.451 qpair failed and we were unable to recover it. 00:21:56.451 [2024-04-16 12:49:55.461201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.451 [2024-04-16 12:49:55.461399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.451 [2024-04-16 12:49:55.461422] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.451 qpair failed and we were unable to recover it. 
00:21:56.451 [2024-04-16 12:49:55.461643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.451 [2024-04-16 12:49:55.461843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.451 [2024-04-16 12:49:55.461881] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.451 qpair failed and we were unable to recover it. 00:21:56.451 [2024-04-16 12:49:55.462122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.451 [2024-04-16 12:49:55.462291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.451 [2024-04-16 12:49:55.462313] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.451 qpair failed and we were unable to recover it. 00:21:56.451 [2024-04-16 12:49:55.462555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.451 [2024-04-16 12:49:55.462802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.451 [2024-04-16 12:49:55.462827] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.451 qpair failed and we were unable to recover it. 00:21:56.451 [2024-04-16 12:49:55.463031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.451 [2024-04-16 12:49:55.463235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.451 [2024-04-16 12:49:55.463259] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.452 qpair failed and we were unable to recover it. 00:21:56.452 [2024-04-16 12:49:55.463505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.452 [2024-04-16 12:49:55.463756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.452 [2024-04-16 12:49:55.463781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.452 qpair failed and we were unable to recover it. 00:21:56.452 [2024-04-16 12:49:55.464035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.452 [2024-04-16 12:49:55.464210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.452 [2024-04-16 12:49:55.464234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.452 qpair failed and we were unable to recover it. 00:21:56.452 [2024-04-16 12:49:55.464447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.452 [2024-04-16 12:49:55.464681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.452 [2024-04-16 12:49:55.464706] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.452 qpair failed and we were unable to recover it. 
00:21:56.452 [2024-04-16 12:49:55.464920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.452 [2024-04-16 12:49:55.465160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.452 [2024-04-16 12:49:55.465182] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.452 qpair failed and we were unable to recover it. 00:21:56.452 [2024-04-16 12:49:55.465434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.452 [2024-04-16 12:49:55.465635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.452 [2024-04-16 12:49:55.465660] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.452 qpair failed and we were unable to recover it. 00:21:56.452 [2024-04-16 12:49:55.465875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.452 [2024-04-16 12:49:55.466147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.452 [2024-04-16 12:49:55.466170] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.452 qpair failed and we were unable to recover it. 00:21:56.452 [2024-04-16 12:49:55.466384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.452 [2024-04-16 12:49:55.466592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.452 [2024-04-16 12:49:55.466620] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.452 qpair failed and we were unable to recover it. 00:21:56.452 [2024-04-16 12:49:55.466877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.452 [2024-04-16 12:49:55.467041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.452 [2024-04-16 12:49:55.467065] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.452 qpair failed and we were unable to recover it. 00:21:56.452 [2024-04-16 12:49:55.467282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.452 [2024-04-16 12:49:55.467539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.452 [2024-04-16 12:49:55.467585] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.452 qpair failed and we were unable to recover it. 00:21:56.452 [2024-04-16 12:49:55.467789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.452 [2024-04-16 12:49:55.468010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.452 [2024-04-16 12:49:55.468033] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.452 qpair failed and we were unable to recover it. 
00:21:56.452 [2024-04-16 12:49:55.468249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.452 [2024-04-16 12:49:55.468462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.452 [2024-04-16 12:49:55.468486] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.452 qpair failed and we were unable to recover it. 00:21:56.452 [2024-04-16 12:49:55.468697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.452 [2024-04-16 12:49:55.468896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.452 [2024-04-16 12:49:55.468918] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.452 qpair failed and we were unable to recover it. 00:21:56.452 [2024-04-16 12:49:55.469168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.452 [2024-04-16 12:49:55.469374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.452 [2024-04-16 12:49:55.469397] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.452 qpair failed and we were unable to recover it. 00:21:56.452 [2024-04-16 12:49:55.469634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.452 [2024-04-16 12:49:55.469835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.452 [2024-04-16 12:49:55.469857] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.452 qpair failed and we were unable to recover it. 00:21:56.452 [2024-04-16 12:49:55.470110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.452 [2024-04-16 12:49:55.470277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.452 [2024-04-16 12:49:55.470309] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.452 qpair failed and we were unable to recover it. 00:21:56.452 [2024-04-16 12:49:55.470521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.452 [2024-04-16 12:49:55.470730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.452 [2024-04-16 12:49:55.470755] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.452 qpair failed and we were unable to recover it. 00:21:56.452 [2024-04-16 12:49:55.470981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.452 [2024-04-16 12:49:55.471176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.452 [2024-04-16 12:49:55.471200] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.452 qpair failed and we were unable to recover it. 
00:21:56.452 [2024-04-16 12:49:55.471452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.452 [2024-04-16 12:49:55.471641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.452 [2024-04-16 12:49:55.471665] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.452 qpair failed and we were unable to recover it. 00:21:56.452 [2024-04-16 12:49:55.471864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.452 [2024-04-16 12:49:55.472098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.452 [2024-04-16 12:49:55.472122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.452 qpair failed and we were unable to recover it. 00:21:56.452 [2024-04-16 12:49:55.472366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.452 [2024-04-16 12:49:55.472553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.452 [2024-04-16 12:49:55.472598] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.452 qpair failed and we were unable to recover it. 00:21:56.452 [2024-04-16 12:49:55.472820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.452 [2024-04-16 12:49:55.472993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.452 [2024-04-16 12:49:55.473016] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.452 qpair failed and we were unable to recover it. 00:21:56.452 [2024-04-16 12:49:55.473302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.452 [2024-04-16 12:49:55.473542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.452 [2024-04-16 12:49:55.473589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.452 qpair failed and we were unable to recover it. 00:21:56.452 [2024-04-16 12:49:55.473788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.452 [2024-04-16 12:49:55.473976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.452 [2024-04-16 12:49:55.474000] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.452 qpair failed and we were unable to recover it. 00:21:56.452 [2024-04-16 12:49:55.474284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.452 [2024-04-16 12:49:55.474580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.452 [2024-04-16 12:49:55.474617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.452 qpair failed and we were unable to recover it. 
00:21:56.452 [2024-04-16 12:49:55.474825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.452 [2024-04-16 12:49:55.475034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.452 [2024-04-16 12:49:55.475058] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.452 qpair failed and we were unable to recover it. 00:21:56.452 [2024-04-16 12:49:55.475281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.452 [2024-04-16 12:49:55.475470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.452 [2024-04-16 12:49:55.475493] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.452 qpair failed and we were unable to recover it. 00:21:56.452 [2024-04-16 12:49:55.475725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.452 [2024-04-16 12:49:55.475955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.452 [2024-04-16 12:49:55.475978] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.452 qpair failed and we were unable to recover it. 00:21:56.452 [2024-04-16 12:49:55.476158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.452 [2024-04-16 12:49:55.476354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.452 [2024-04-16 12:49:55.476377] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.453 qpair failed and we were unable to recover it. 00:21:56.453 [2024-04-16 12:49:55.476601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.453 [2024-04-16 12:49:55.476802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.453 [2024-04-16 12:49:55.476826] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.453 qpair failed and we were unable to recover it. 00:21:56.453 [2024-04-16 12:49:55.477051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.453 [2024-04-16 12:49:55.477291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.453 [2024-04-16 12:49:55.477315] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.453 qpair failed and we were unable to recover it. 00:21:56.453 [2024-04-16 12:49:55.477557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.453 [2024-04-16 12:49:55.477765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.453 [2024-04-16 12:49:55.477788] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.453 qpair failed and we were unable to recover it. 
00:21:56.453 [2024-04-16 12:49:55.478039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.453 [2024-04-16 12:49:55.478229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.453 [2024-04-16 12:49:55.478253] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.453 qpair failed and we were unable to recover it. 00:21:56.453 [2024-04-16 12:49:55.478550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.453 [2024-04-16 12:49:55.478760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.453 [2024-04-16 12:49:55.478784] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.453 qpair failed and we were unable to recover it. 00:21:56.453 [2024-04-16 12:49:55.478998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.453 [2024-04-16 12:49:55.479225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.453 [2024-04-16 12:49:55.479247] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.453 qpair failed and we were unable to recover it. 00:21:56.453 [2024-04-16 12:49:55.479418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.453 [2024-04-16 12:49:55.479641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.453 [2024-04-16 12:49:55.479666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.453 qpair failed and we were unable to recover it. 00:21:56.453 [2024-04-16 12:49:55.479904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.453 [2024-04-16 12:49:55.480055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.453 [2024-04-16 12:49:55.480078] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.453 qpair failed and we were unable to recover it. 00:21:56.453 [2024-04-16 12:49:55.480297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.453 [2024-04-16 12:49:55.480544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.453 [2024-04-16 12:49:55.480573] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.453 qpair failed and we were unable to recover it. 00:21:56.453 [2024-04-16 12:49:55.480793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.453 [2024-04-16 12:49:55.480989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.453 [2024-04-16 12:49:55.481012] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.453 qpair failed and we were unable to recover it. 
00:21:56.453 [2024-04-16 12:49:55.481250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.453 [2024-04-16 12:49:55.481474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.453 [2024-04-16 12:49:55.481498] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.453 qpair failed and we were unable to recover it. 00:21:56.453 [2024-04-16 12:49:55.481737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.453 [2024-04-16 12:49:55.481935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.453 [2024-04-16 12:49:55.481958] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.453 qpair failed and we were unable to recover it. 00:21:56.453 [2024-04-16 12:49:55.482180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.453 [2024-04-16 12:49:55.482404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.453 [2024-04-16 12:49:55.482427] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.453 qpair failed and we were unable to recover it. 00:21:56.453 [2024-04-16 12:49:55.482652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.453 [2024-04-16 12:49:55.482852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.453 [2024-04-16 12:49:55.482877] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.453 qpair failed and we were unable to recover it. 00:21:56.453 [2024-04-16 12:49:55.483070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.453 [2024-04-16 12:49:55.483311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.453 [2024-04-16 12:49:55.483349] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.453 qpair failed and we were unable to recover it. 00:21:56.453 [2024-04-16 12:49:55.483576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.453 [2024-04-16 12:49:55.483805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.453 [2024-04-16 12:49:55.483830] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.453 qpair failed and we were unable to recover it. 00:21:56.453 [2024-04-16 12:49:55.484073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.453 [2024-04-16 12:49:55.484264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.453 [2024-04-16 12:49:55.484287] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.453 qpair failed and we were unable to recover it. 
00:21:56.453 [2024-04-16 12:49:55.484529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.453 [2024-04-16 12:49:55.484752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.453 [2024-04-16 12:49:55.484777] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.453 qpair failed and we were unable to recover it.
[... the same failure group repeats without interruption from [2024-04-16 12:49:55.484529] through [2024-04-16 12:49:55.547339]: each connect attempt logs one or two posix.c:1037:posix_sock_create connect() failures with errno = 111, then the nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock sock connection error for tqpair=0x1ff5050 with addr=10.0.0.2, port=4420, and every attempt ends with "qpair failed and we were unable to recover it." ...]
00:21:56.735 [2024-04-16 12:49:55.547501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.735 [2024-04-16 12:49:55.547671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.735 [2024-04-16 12:49:55.547697] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.735 qpair failed and we were unable to recover it. 00:21:56.735 [2024-04-16 12:49:55.547874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.735 [2024-04-16 12:49:55.548063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.735 [2024-04-16 12:49:55.548085] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.735 qpair failed and we were unable to recover it. 00:21:56.735 [2024-04-16 12:49:55.548273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.735 [2024-04-16 12:49:55.548449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.735 [2024-04-16 12:49:55.548471] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.735 qpair failed and we were unable to recover it. 00:21:56.735 [2024-04-16 12:49:55.548663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.735 [2024-04-16 12:49:55.548845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.735 [2024-04-16 12:49:55.548883] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.735 qpair failed and we were unable to recover it. 00:21:56.735 [2024-04-16 12:49:55.549052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.735 [2024-04-16 12:49:55.549261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.735 [2024-04-16 12:49:55.549284] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.735 qpair failed and we were unable to recover it. 00:21:56.735 [2024-04-16 12:49:55.549454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.735 [2024-04-16 12:49:55.549620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.735 [2024-04-16 12:49:55.549645] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.735 qpair failed and we were unable to recover it. 00:21:56.735 [2024-04-16 12:49:55.549801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.735 [2024-04-16 12:49:55.549950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.735 [2024-04-16 12:49:55.549987] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.735 qpair failed and we were unable to recover it. 
00:21:56.735 [2024-04-16 12:49:55.550161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.735 [2024-04-16 12:49:55.550331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.735 [2024-04-16 12:49:55.550368] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.735 qpair failed and we were unable to recover it. 00:21:56.735 [2024-04-16 12:49:55.550527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.735 [2024-04-16 12:49:55.550720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.735 [2024-04-16 12:49:55.550745] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.735 qpair failed and we were unable to recover it. 00:21:56.735 [2024-04-16 12:49:55.550901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.735 [2024-04-16 12:49:55.551075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.735 [2024-04-16 12:49:55.551099] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.735 qpair failed and we were unable to recover it. 00:21:56.735 [2024-04-16 12:49:55.551269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.735 [2024-04-16 12:49:55.551481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.735 [2024-04-16 12:49:55.551504] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.735 qpair failed and we were unable to recover it. 00:21:56.735 [2024-04-16 12:49:55.551683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.735 [2024-04-16 12:49:55.551816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.735 [2024-04-16 12:49:55.551841] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.735 qpair failed and we were unable to recover it. 00:21:56.735 [2024-04-16 12:49:55.552022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.735 [2024-04-16 12:49:55.552177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.735 [2024-04-16 12:49:55.552199] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.735 qpair failed and we were unable to recover it. 00:21:56.735 [2024-04-16 12:49:55.552407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.735 [2024-04-16 12:49:55.552619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.735 [2024-04-16 12:49:55.552644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.735 qpair failed and we were unable to recover it. 
00:21:56.735 [2024-04-16 12:49:55.552772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.735 [2024-04-16 12:49:55.552950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.735 [2024-04-16 12:49:55.552972] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.735 qpair failed and we were unable to recover it. 00:21:56.735 [2024-04-16 12:49:55.553176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.735 [2024-04-16 12:49:55.553341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.735 [2024-04-16 12:49:55.553379] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.735 qpair failed and we were unable to recover it. 00:21:56.735 [2024-04-16 12:49:55.553578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.735 [2024-04-16 12:49:55.553726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.735 [2024-04-16 12:49:55.553750] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.735 qpair failed and we were unable to recover it. 00:21:56.735 [2024-04-16 12:49:55.553939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.735 [2024-04-16 12:49:55.554079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.735 [2024-04-16 12:49:55.554101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.735 qpair failed and we were unable to recover it. 00:21:56.735 [2024-04-16 12:49:55.554278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.735 [2024-04-16 12:49:55.554408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.735 [2024-04-16 12:49:55.554431] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.735 qpair failed and we were unable to recover it. 00:21:56.735 [2024-04-16 12:49:55.554640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.735 [2024-04-16 12:49:55.554798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.735 [2024-04-16 12:49:55.554822] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.735 qpair failed and we were unable to recover it. 00:21:56.735 [2024-04-16 12:49:55.555025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.735 [2024-04-16 12:49:55.555221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.735 [2024-04-16 12:49:55.555246] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.736 qpair failed and we were unable to recover it. 
00:21:56.736 [2024-04-16 12:49:55.555383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.736 [2024-04-16 12:49:55.555579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.736 [2024-04-16 12:49:55.555618] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.736 qpair failed and we were unable to recover it. 00:21:56.736 [2024-04-16 12:49:55.555798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.736 [2024-04-16 12:49:55.555996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.736 [2024-04-16 12:49:55.556019] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.736 qpair failed and we were unable to recover it. 00:21:56.736 [2024-04-16 12:49:55.556181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.736 [2024-04-16 12:49:55.556387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.736 [2024-04-16 12:49:55.556409] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.736 qpair failed and we were unable to recover it. 00:21:56.736 [2024-04-16 12:49:55.556582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.736 [2024-04-16 12:49:55.556722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.736 [2024-04-16 12:49:55.556747] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.736 qpair failed and we were unable to recover it. 00:21:56.736 [2024-04-16 12:49:55.556938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.736 [2024-04-16 12:49:55.557088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.736 [2024-04-16 12:49:55.557110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.736 qpair failed and we were unable to recover it. 00:21:56.736 [2024-04-16 12:49:55.557247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.736 [2024-04-16 12:49:55.557419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.736 [2024-04-16 12:49:55.557457] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.736 qpair failed and we were unable to recover it. 00:21:56.736 [2024-04-16 12:49:55.557638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.736 [2024-04-16 12:49:55.557769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.736 [2024-04-16 12:49:55.557793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.736 qpair failed and we were unable to recover it. 
00:21:56.736 [2024-04-16 12:49:55.557998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.736 [2024-04-16 12:49:55.558187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.736 [2024-04-16 12:49:55.558211] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.736 qpair failed and we were unable to recover it. 00:21:56.736 [2024-04-16 12:49:55.558395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.736 [2024-04-16 12:49:55.558573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.736 [2024-04-16 12:49:55.558599] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.736 qpair failed and we were unable to recover it. 00:21:56.736 [2024-04-16 12:49:55.558769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.736 [2024-04-16 12:49:55.558904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.736 [2024-04-16 12:49:55.558927] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.736 qpair failed and we were unable to recover it. 00:21:56.736 [2024-04-16 12:49:55.559115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.736 [2024-04-16 12:49:55.559292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.736 [2024-04-16 12:49:55.559328] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.736 qpair failed and we were unable to recover it. 00:21:56.736 [2024-04-16 12:49:55.559521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.736 [2024-04-16 12:49:55.559699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.736 [2024-04-16 12:49:55.559724] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.736 qpair failed and we were unable to recover it. 00:21:56.736 [2024-04-16 12:49:55.559890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.736 [2024-04-16 12:49:55.560109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.736 [2024-04-16 12:49:55.560132] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.736 qpair failed and we were unable to recover it. 00:21:56.736 [2024-04-16 12:49:55.560357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.736 [2024-04-16 12:49:55.560578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.736 [2024-04-16 12:49:55.560603] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.736 qpair failed and we were unable to recover it. 
00:21:56.736 [2024-04-16 12:49:55.560752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.736 [2024-04-16 12:49:55.560942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.736 [2024-04-16 12:49:55.560966] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.736 qpair failed and we were unable to recover it. 00:21:56.736 [2024-04-16 12:49:55.561184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.736 [2024-04-16 12:49:55.561353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.736 [2024-04-16 12:49:55.561376] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.736 qpair failed and we were unable to recover it. 00:21:56.736 [2024-04-16 12:49:55.561526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.736 [2024-04-16 12:49:55.561712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.736 [2024-04-16 12:49:55.561738] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.736 qpair failed and we were unable to recover it. 00:21:56.736 [2024-04-16 12:49:55.561883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.736 [2024-04-16 12:49:55.562099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.736 [2024-04-16 12:49:55.562122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.736 qpair failed and we were unable to recover it. 00:21:56.736 [2024-04-16 12:49:55.562309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.736 [2024-04-16 12:49:55.562475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.736 [2024-04-16 12:49:55.562499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.736 qpair failed and we were unable to recover it. 00:21:56.736 [2024-04-16 12:49:55.562681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.736 [2024-04-16 12:49:55.562867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.736 [2024-04-16 12:49:55.562891] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.736 qpair failed and we were unable to recover it. 00:21:56.736 [2024-04-16 12:49:55.563050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.736 [2024-04-16 12:49:55.563238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.736 [2024-04-16 12:49:55.563260] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.736 qpair failed and we were unable to recover it. 
00:21:56.736 [2024-04-16 12:49:55.563402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.736 [2024-04-16 12:49:55.563546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.736 [2024-04-16 12:49:55.563593] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.736 qpair failed and we were unable to recover it. 00:21:56.736 [2024-04-16 12:49:55.563722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.736 [2024-04-16 12:49:55.563904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.736 [2024-04-16 12:49:55.563941] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.736 qpair failed and we were unable to recover it. 00:21:56.736 [2024-04-16 12:49:55.564147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.736 [2024-04-16 12:49:55.564340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.736 [2024-04-16 12:49:55.564363] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.736 qpair failed and we were unable to recover it. 00:21:56.736 [2024-04-16 12:49:55.564535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.736 [2024-04-16 12:49:55.564692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.736 [2024-04-16 12:49:55.564716] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.736 qpair failed and we were unable to recover it. 00:21:56.736 [2024-04-16 12:49:55.564901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.736 [2024-04-16 12:49:55.565055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.736 [2024-04-16 12:49:55.565078] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.736 qpair failed and we were unable to recover it. 00:21:56.736 [2024-04-16 12:49:55.565286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.736 [2024-04-16 12:49:55.565460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.736 [2024-04-16 12:49:55.565483] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.736 qpair failed and we were unable to recover it. 00:21:56.736 [2024-04-16 12:49:55.565669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.736 [2024-04-16 12:49:55.565839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.736 [2024-04-16 12:49:55.565864] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.736 qpair failed and we were unable to recover it. 
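errno = 111 is ECONNREFUSED on Linux: while no NVMe/TCP listener is up at 10.0.0.2:4420, every connect() attempt made by posix_sock_create is refused by the kernel, so nvme_tcp_qpair_connect_sock cannot establish the qpair. The standalone C sketch below is not SPDK code; the address and port merely mirror the log, and on another machine you would point it at any local port with no listener to reproduce the same failure mode:

    /* Minimal sketch, not SPDK code: reproduce the errno = 111 (ECONNREFUSED)
     * seen above by connecting to an address/port with no listener. */
    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) { perror("socket"); return 1; }

        /* Mirrors the log's target address; substitute a listener-free
         * local address (e.g. 127.0.0.1) to get an immediate refusal. */
        struct sockaddr_in addr = { .sin_family = AF_INET,
                                    .sin_port   = htons(4420) };
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
            printf("connect() failed, errno = %d (%s)\n",
                   errno, strerror(errno));
        close(fd);
        return 0;
    }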
00:21:56.737 [connect()-failed / qpair-failure errors continue in the background throughout the restart below; repeated entries omitted, only the script trace is shown]
00:21:56.737 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 44: 1269778 Killed "${NVMF_APP[@]}" "$@"
00:21:56.737 12:49:55 -- host/target_disconnect.sh@56 -- # disconnect_init 10.0.0.2
00:21:56.737 12:49:55 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:21:56.737 12:49:55 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt
00:21:56.737 12:49:55 -- common/autotest_common.sh@710 -- # xtrace_disable
00:21:56.737 12:49:55 -- common/autotest_common.sh@10 -- # set +x
00:21:56.737 12:49:55 -- nvmf/common.sh@470 -- # nvmfpid=1270391
00:21:56.737 12:49:55 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:21:56.737 12:49:55 -- nvmf/common.sh@471 -- # waitforlisten 1270391
00:21:56.737 12:49:55 -- common/autotest_common.sh@817 -- # '[' -z 1270391 ']'
00:21:56.737 12:49:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:21:56.737 12:49:55 -- common/autotest_common.sh@822 -- # local max_retries=100
00:21:56.737 12:49:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:21:56.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:21:56.737 12:49:55 -- common/autotest_common.sh@826 -- # xtrace_disable
00:21:56.737 12:49:55 -- common/autotest_common.sh@10 -- # set +x
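At this point the old target application (pid 1269778) has been killed and the replacement nvmf_tgt (pid 1270391) has not yet set its listener back up, so the host's connection attempts keep being refused until the new process is ready; waitforlisten applies the same bounded-retry idea to the target's RPC socket, polling /var/tmp/spdk.sock with max_retries=100. A retry loop in that spirit could look like the C sketch below (an illustration under those assumptions, not SPDK's actual nvme_tcp reconnect path; the retry count and delay are arbitrary):

    /* Illustration only, not SPDK's reconnect logic: retry connect() until
     * the restarted target is listening again, or give up after a bound. */
    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    static int connect_once(const char *ip, int port)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0)
            return -1;
        struct sockaddr_in a = { .sin_family = AF_INET,
                                 .sin_port   = htons(port) };
        inet_pton(AF_INET, ip, &a.sin_addr);
        if (connect(fd, (struct sockaddr *)&a, sizeof(a)) == 0)
            return fd;              /* listener is back */
        int saved = errno;          /* close() may clobber errno */
        close(fd);
        errno = saved;
        return -1;                  /* typically errno == 111, ECONNREFUSED */
    }

    int main(void)
    {
        for (int i = 0; i < 100; i++) {   /* arbitrary bound, cf. max_retries=100 */
            int fd = connect_once("10.0.0.2", 4420);
            if (fd >= 0) { puts("connected"); close(fd); return 0; }
            fprintf(stderr, "retry %d: errno = %d (%s)\n",
                    i, errno, strerror(errno));
            usleep(100 * 1000);           /* brief delay between attempts */
        }
        return 1;
    }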
00:21:56.737 [2024-04-16 12:49:55.578702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.737 [2024-04-16 12:49:55.578886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.738 [2024-04-16 12:49:55.578914] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.738 qpair failed and we were unable to recover it. 00:21:56.738 [2024-04-16 12:49:55.579141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.738 [2024-04-16 12:49:55.579309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.738 [2024-04-16 12:49:55.579336] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.738 qpair failed and we were unable to recover it. 00:21:56.738 [2024-04-16 12:49:55.579501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.738 [2024-04-16 12:49:55.579671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.738 [2024-04-16 12:49:55.579699] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.738 qpair failed and we were unable to recover it. 00:21:56.738 [2024-04-16 12:49:55.579859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.738 [2024-04-16 12:49:55.580053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.738 [2024-04-16 12:49:55.580078] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.738 qpair failed and we were unable to recover it. 00:21:56.738 [2024-04-16 12:49:55.580242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.738 [2024-04-16 12:49:55.580432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.738 [2024-04-16 12:49:55.580459] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.738 qpair failed and we were unable to recover it. 00:21:56.738 [2024-04-16 12:49:55.580611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.738 [2024-04-16 12:49:55.580759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.738 [2024-04-16 12:49:55.580786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.738 qpair failed and we were unable to recover it. 00:21:56.738 [2024-04-16 12:49:55.580981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.738 [2024-04-16 12:49:55.581123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.738 [2024-04-16 12:49:55.581147] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.738 qpair failed and we were unable to recover it. 
00:21:56.738 [2024-04-16 12:49:55.581302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.738 [2024-04-16 12:49:55.581463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.738 [2024-04-16 12:49:55.581502] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.738 qpair failed and we were unable to recover it. 00:21:56.738 [2024-04-16 12:49:55.581662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.738 [2024-04-16 12:49:55.581806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.738 [2024-04-16 12:49:55.581832] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.738 qpair failed and we were unable to recover it. 00:21:56.738 [2024-04-16 12:49:55.582015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.738 [2024-04-16 12:49:55.582180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.738 [2024-04-16 12:49:55.582207] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.738 qpair failed and we were unable to recover it. 00:21:56.738 [2024-04-16 12:49:55.582426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.738 [2024-04-16 12:49:55.582578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.738 [2024-04-16 12:49:55.582605] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.738 qpair failed and we were unable to recover it. 00:21:56.738 [2024-04-16 12:49:55.582741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.738 [2024-04-16 12:49:55.582921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.738 [2024-04-16 12:49:55.582949] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.738 qpair failed and we were unable to recover it. 00:21:56.738 [2024-04-16 12:49:55.583134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.738 [2024-04-16 12:49:55.583348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.738 [2024-04-16 12:49:55.583377] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.738 qpair failed and we were unable to recover it. 00:21:56.738 [2024-04-16 12:49:55.583559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.738 [2024-04-16 12:49:55.583713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.738 [2024-04-16 12:49:55.583739] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.738 qpair failed and we were unable to recover it. 
00:21:56.738 [2024-04-16 12:49:55.583941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.738 [2024-04-16 12:49:55.584099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.738 [2024-04-16 12:49:55.584124] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.738 qpair failed and we were unable to recover it. 00:21:56.738 [2024-04-16 12:49:55.584290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.738 [2024-04-16 12:49:55.584428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.738 [2024-04-16 12:49:55.584457] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.738 qpair failed and we were unable to recover it. 00:21:56.738 [2024-04-16 12:49:55.584665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.738 [2024-04-16 12:49:55.584805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.738 [2024-04-16 12:49:55.584832] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.738 qpair failed and we were unable to recover it. 00:21:56.738 [2024-04-16 12:49:55.584979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.738 [2024-04-16 12:49:55.585126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.738 [2024-04-16 12:49:55.585151] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.738 qpair failed and we were unable to recover it. 00:21:56.738 [2024-04-16 12:49:55.585320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.738 [2024-04-16 12:49:55.585470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.738 [2024-04-16 12:49:55.585497] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.738 qpair failed and we were unable to recover it. 00:21:56.738 [2024-04-16 12:49:55.585677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.738 [2024-04-16 12:49:55.585862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.738 [2024-04-16 12:49:55.585888] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.738 qpair failed and we were unable to recover it. 00:21:56.738 [2024-04-16 12:49:55.586044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.738 [2024-04-16 12:49:55.587580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.738 [2024-04-16 12:49:55.587620] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.738 qpair failed and we were unable to recover it. 
00:21:56.738 [2024-04-16 12:49:55.587774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.738 [2024-04-16 12:49:55.587952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.738 [2024-04-16 12:49:55.587978] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.738 qpair failed and we were unable to recover it. 00:21:56.738 [2024-04-16 12:49:55.588172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.738 [2024-04-16 12:49:55.588345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.738 [2024-04-16 12:49:55.588374] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.738 qpair failed and we were unable to recover it. 00:21:56.738 [2024-04-16 12:49:55.588531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.738 [2024-04-16 12:49:55.588665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.738 [2024-04-16 12:49:55.588692] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.738 qpair failed and we were unable to recover it. 00:21:56.738 [2024-04-16 12:49:55.588870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.738 [2024-04-16 12:49:55.589049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.738 [2024-04-16 12:49:55.589089] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.738 qpair failed and we were unable to recover it. 00:21:56.738 [2024-04-16 12:49:55.589257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.738 [2024-04-16 12:49:55.589414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.738 [2024-04-16 12:49:55.589440] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.738 qpair failed and we were unable to recover it. 00:21:56.738 [2024-04-16 12:49:55.589645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.738 [2024-04-16 12:49:55.589798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.738 [2024-04-16 12:49:55.589825] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.738 qpair failed and we were unable to recover it. 00:21:56.738 [2024-04-16 12:49:55.589996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.738 [2024-04-16 12:49:55.590155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.739 [2024-04-16 12:49:55.590181] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.739 qpair failed and we were unable to recover it. 
00:21:56.742 [2024-04-16 12:49:55.625826] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization...
00:21:56.742 [2024-04-16 12:49:55.625907] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:21:56.742 [... the connect()/qpair failure sequence resumes and repeats from 12:49:55.625942 through 12:49:55.643606; every attempt ends with "qpair failed and we were unable to recover it." ...]
00:21:56.744 [2024-04-16 12:49:55.643738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.744 [2024-04-16 12:49:55.643852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.744 [2024-04-16 12:49:55.643876] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.744 qpair failed and we were unable to recover it. 00:21:56.744 [2024-04-16 12:49:55.644020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.744 [2024-04-16 12:49:55.644261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.744 [2024-04-16 12:49:55.644283] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.744 qpair failed and we were unable to recover it. 00:21:56.744 [2024-04-16 12:49:55.644453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.744 [2024-04-16 12:49:55.644585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.744 [2024-04-16 12:49:55.644611] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.744 qpair failed and we were unable to recover it. 00:21:56.744 [2024-04-16 12:49:55.644730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.744 [2024-04-16 12:49:55.644849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.744 [2024-04-16 12:49:55.644873] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.744 qpair failed and we were unable to recover it. 00:21:56.744 [2024-04-16 12:49:55.644991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.744 [2024-04-16 12:49:55.645142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.744 [2024-04-16 12:49:55.645166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.744 qpair failed and we were unable to recover it. 00:21:56.744 [2024-04-16 12:49:55.645301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.744 [2024-04-16 12:49:55.645448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.744 [2024-04-16 12:49:55.645471] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.744 qpair failed and we were unable to recover it. 00:21:56.744 [2024-04-16 12:49:55.645609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.744 [2024-04-16 12:49:55.645734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.744 [2024-04-16 12:49:55.645758] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.744 qpair failed and we were unable to recover it. 
00:21:56.744 [2024-04-16 12:49:55.645925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.744 [2024-04-16 12:49:55.646058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.744 [2024-04-16 12:49:55.646083] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.744 qpair failed and we were unable to recover it. 00:21:56.744 [2024-04-16 12:49:55.646256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.744 [2024-04-16 12:49:55.646381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.744 [2024-04-16 12:49:55.646405] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.744 qpair failed and we were unable to recover it. 00:21:56.744 [2024-04-16 12:49:55.646522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.744 [2024-04-16 12:49:55.646663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.744 [2024-04-16 12:49:55.646689] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.744 qpair failed and we were unable to recover it. 00:21:56.744 [2024-04-16 12:49:55.646817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.744 [2024-04-16 12:49:55.646957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.744 [2024-04-16 12:49:55.646981] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.744 qpair failed and we were unable to recover it. 00:21:56.744 [2024-04-16 12:49:55.647131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.744 [2024-04-16 12:49:55.647246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.744 [2024-04-16 12:49:55.647270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.744 qpair failed and we were unable to recover it. 00:21:56.744 [2024-04-16 12:49:55.647416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.744 [2024-04-16 12:49:55.647536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.744 [2024-04-16 12:49:55.647560] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.744 qpair failed and we were unable to recover it. 00:21:56.744 [2024-04-16 12:49:55.647700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.744 [2024-04-16 12:49:55.647828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.744 [2024-04-16 12:49:55.647853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.744 qpair failed and we were unable to recover it. 
00:21:56.744 [2024-04-16 12:49:55.647997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.744 [2024-04-16 12:49:55.648141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.744 [2024-04-16 12:49:55.648165] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.744 qpair failed and we were unable to recover it. 00:21:56.744 [2024-04-16 12:49:55.648311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.744 [2024-04-16 12:49:55.648464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.745 [2024-04-16 12:49:55.648488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.745 qpair failed and we were unable to recover it. 00:21:56.745 [2024-04-16 12:49:55.648651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.745 [2024-04-16 12:49:55.648774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.745 [2024-04-16 12:49:55.648798] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.745 qpair failed and we were unable to recover it. 00:21:56.745 [2024-04-16 12:49:55.648956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.745 [2024-04-16 12:49:55.649115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.745 [2024-04-16 12:49:55.649153] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.745 qpair failed and we were unable to recover it. 00:21:56.745 [2024-04-16 12:49:55.649319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.745 [2024-04-16 12:49:55.649437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.745 [2024-04-16 12:49:55.649462] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.745 qpair failed and we were unable to recover it. 00:21:56.745 [2024-04-16 12:49:55.649598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.745 [2024-04-16 12:49:55.649725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.745 [2024-04-16 12:49:55.649750] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.745 qpair failed and we were unable to recover it. 00:21:56.745 [2024-04-16 12:49:55.649884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.745 [2024-04-16 12:49:55.650016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.745 [2024-04-16 12:49:55.650039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.745 qpair failed and we were unable to recover it. 
00:21:56.745 [2024-04-16 12:49:55.650258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.745 [2024-04-16 12:49:55.650394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.745 [2024-04-16 12:49:55.650418] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.745 qpair failed and we were unable to recover it. 00:21:56.745 [2024-04-16 12:49:55.650602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.745 [2024-04-16 12:49:55.650724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.745 [2024-04-16 12:49:55.650749] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.745 qpair failed and we were unable to recover it. 00:21:56.745 [2024-04-16 12:49:55.650907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.745 [2024-04-16 12:49:55.651044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.745 [2024-04-16 12:49:55.651080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.745 qpair failed and we were unable to recover it. 00:21:56.745 [2024-04-16 12:49:55.651215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.745 [2024-04-16 12:49:55.651339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.745 [2024-04-16 12:49:55.651362] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.745 qpair failed and we were unable to recover it. 00:21:56.745 [2024-04-16 12:49:55.651527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.745 [2024-04-16 12:49:55.651758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.745 [2024-04-16 12:49:55.651783] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.745 qpair failed and we were unable to recover it. 00:21:56.745 [2024-04-16 12:49:55.651930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.745 [2024-04-16 12:49:55.652076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.745 [2024-04-16 12:49:55.652100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.745 qpair failed and we were unable to recover it. 00:21:56.745 [2024-04-16 12:49:55.652239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.745 [2024-04-16 12:49:55.652377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.745 [2024-04-16 12:49:55.652400] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.745 qpair failed and we were unable to recover it. 
00:21:56.745 [2024-04-16 12:49:55.652520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.745 [2024-04-16 12:49:55.652675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.745 [2024-04-16 12:49:55.652701] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.745 qpair failed and we were unable to recover it. 00:21:56.745 [2024-04-16 12:49:55.652846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.745 [2024-04-16 12:49:55.652988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.745 [2024-04-16 12:49:55.653012] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.745 qpair failed and we were unable to recover it. 00:21:56.745 [2024-04-16 12:49:55.653160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.745 [2024-04-16 12:49:55.653281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.745 [2024-04-16 12:49:55.653306] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.745 qpair failed and we were unable to recover it. 00:21:56.745 [2024-04-16 12:49:55.653477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.745 [2024-04-16 12:49:55.653609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.745 [2024-04-16 12:49:55.653634] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.745 qpair failed and we were unable to recover it. 00:21:56.745 [2024-04-16 12:49:55.653814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.745 [2024-04-16 12:49:55.653988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.745 [2024-04-16 12:49:55.654012] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.745 qpair failed and we were unable to recover it. 00:21:56.745 [2024-04-16 12:49:55.654130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.745 [2024-04-16 12:49:55.654264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.745 [2024-04-16 12:49:55.654289] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.745 qpair failed and we were unable to recover it. 00:21:56.745 [2024-04-16 12:49:55.654422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.745 [2024-04-16 12:49:55.654622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.745 [2024-04-16 12:49:55.654647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.745 qpair failed and we were unable to recover it. 
00:21:56.745 [2024-04-16 12:49:55.654768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.745 [2024-04-16 12:49:55.654926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.745 [2024-04-16 12:49:55.654966] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.745 qpair failed and we were unable to recover it. 00:21:56.745 [2024-04-16 12:49:55.655204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.745 [2024-04-16 12:49:55.655438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.745 [2024-04-16 12:49:55.655463] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.745 qpair failed and we were unable to recover it. 00:21:56.745 [2024-04-16 12:49:55.655699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.745 [2024-04-16 12:49:55.655876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.745 [2024-04-16 12:49:55.655900] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.745 qpair failed and we were unable to recover it. 00:21:56.745 [2024-04-16 12:49:55.656069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.745 [2024-04-16 12:49:55.656230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.745 [2024-04-16 12:49:55.656255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.745 qpair failed and we were unable to recover it. 00:21:56.745 [2024-04-16 12:49:55.656392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.745 [2024-04-16 12:49:55.656553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.745 [2024-04-16 12:49:55.656585] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.745 qpair failed and we were unable to recover it. 00:21:56.745 [2024-04-16 12:49:55.656752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.745 [2024-04-16 12:49:55.656884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.745 [2024-04-16 12:49:55.656908] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.745 qpair failed and we were unable to recover it. 00:21:56.745 [2024-04-16 12:49:55.657046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.745 [2024-04-16 12:49:55.657195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.745 [2024-04-16 12:49:55.657219] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.745 qpair failed and we were unable to recover it. 
00:21:56.745 [2024-04-16 12:49:55.657395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.745 [2024-04-16 12:49:55.657573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.745 [2024-04-16 12:49:55.657598] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.745 qpair failed and we were unable to recover it. 00:21:56.745 [2024-04-16 12:49:55.657724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.745 [2024-04-16 12:49:55.657845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.745 [2024-04-16 12:49:55.657873] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.745 qpair failed and we were unable to recover it. 00:21:56.746 [2024-04-16 12:49:55.658050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.746 [2024-04-16 12:49:55.658223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.746 [2024-04-16 12:49:55.658248] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.746 qpair failed and we were unable to recover it. 00:21:56.746 [2024-04-16 12:49:55.658370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.746 [2024-04-16 12:49:55.658514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.746 [2024-04-16 12:49:55.658539] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.746 qpair failed and we were unable to recover it. 00:21:56.746 [2024-04-16 12:49:55.658670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.746 [2024-04-16 12:49:55.658797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.746 [2024-04-16 12:49:55.658821] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.746 qpair failed and we were unable to recover it. 00:21:56.746 [2024-04-16 12:49:55.658996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.746 [2024-04-16 12:49:55.659155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.746 [2024-04-16 12:49:55.659178] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.746 qpair failed and we were unable to recover it. 00:21:56.746 [2024-04-16 12:49:55.659321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.746 [2024-04-16 12:49:55.659499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.746 [2024-04-16 12:49:55.659538] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.746 qpair failed and we were unable to recover it. 
00:21:56.746 [2024-04-16 12:49:55.659674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.746 [2024-04-16 12:49:55.659828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.746 [2024-04-16 12:49:55.659852] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.746 qpair failed and we were unable to recover it. 00:21:56.746 [2024-04-16 12:49:55.659999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.746 [2024-04-16 12:49:55.660104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.746 [2024-04-16 12:49:55.660127] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.746 qpair failed and we were unable to recover it. 00:21:56.746 [2024-04-16 12:49:55.660296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.746 [2024-04-16 12:49:55.660436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.746 [2024-04-16 12:49:55.660461] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.746 qpair failed and we were unable to recover it. 00:21:56.746 [2024-04-16 12:49:55.660620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.746 [2024-04-16 12:49:55.660766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.746 [2024-04-16 12:49:55.660791] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.746 qpair failed and we were unable to recover it. 00:21:56.746 [2024-04-16 12:49:55.660939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.746 [2024-04-16 12:49:55.661113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.746 [2024-04-16 12:49:55.661137] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.746 qpair failed and we were unable to recover it. 00:21:56.746 [2024-04-16 12:49:55.661295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.746 [2024-04-16 12:49:55.661521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.746 [2024-04-16 12:49:55.661559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.746 qpair failed and we were unable to recover it. 00:21:56.746 [2024-04-16 12:49:55.661700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.746 [2024-04-16 12:49:55.661859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.746 [2024-04-16 12:49:55.661883] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.746 qpair failed and we were unable to recover it. 
00:21:56.746 [2024-04-16 12:49:55.662041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.746 [2024-04-16 12:49:55.662190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.746 [2024-04-16 12:49:55.662213] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.746 qpair failed and we were unable to recover it. 00:21:56.746 [2024-04-16 12:49:55.662374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.746 [2024-04-16 12:49:55.662518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.746 [2024-04-16 12:49:55.662542] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.746 qpair failed and we were unable to recover it. 00:21:56.746 [2024-04-16 12:49:55.662699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.746 [2024-04-16 12:49:55.662827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.746 [2024-04-16 12:49:55.662852] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.746 qpair failed and we were unable to recover it. 00:21:56.746 [2024-04-16 12:49:55.663013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.746 [2024-04-16 12:49:55.663160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.746 [2024-04-16 12:49:55.663184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.746 qpair failed and we were unable to recover it. 00:21:56.746 [2024-04-16 12:49:55.663305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.746 [2024-04-16 12:49:55.663444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.746 [2024-04-16 12:49:55.663469] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.746 qpair failed and we were unable to recover it. 00:21:56.746 [2024-04-16 12:49:55.663640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.746 [2024-04-16 12:49:55.663753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.746 [2024-04-16 12:49:55.663778] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.746 qpair failed and we were unable to recover it. 00:21:56.746 [2024-04-16 12:49:55.663946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.746 [2024-04-16 12:49:55.664113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.746 [2024-04-16 12:49:55.664137] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.746 qpair failed and we were unable to recover it. 
00:21:56.746 [2024-04-16 12:49:55.664325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.746 [2024-04-16 12:49:55.664447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.746 [2024-04-16 12:49:55.664487] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.746 qpair failed and we were unable to recover it. 00:21:56.746 [2024-04-16 12:49:55.664649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.746 [2024-04-16 12:49:55.664793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.746 [2024-04-16 12:49:55.664817] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.746 qpair failed and we were unable to recover it. 00:21:56.746 [2024-04-16 12:49:55.664990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.746 [2024-04-16 12:49:55.665160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.746 [2024-04-16 12:49:55.665183] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.746 qpair failed and we were unable to recover it. 00:21:56.746 [2024-04-16 12:49:55.665352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.746 [2024-04-16 12:49:55.665472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.746 [2024-04-16 12:49:55.665495] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.746 qpair failed and we were unable to recover it. 00:21:56.746 [2024-04-16 12:49:55.665629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.746 [2024-04-16 12:49:55.665750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.746 [2024-04-16 12:49:55.665775] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.746 qpair failed and we were unable to recover it. 00:21:56.746 [2024-04-16 12:49:55.665921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.746 [2024-04-16 12:49:55.666063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.747 [2024-04-16 12:49:55.666087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.747 qpair failed and we were unable to recover it. 00:21:56.747 [2024-04-16 12:49:55.666236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.747 [2024-04-16 12:49:55.666350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.747 [2024-04-16 12:49:55.666373] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.747 qpair failed and we were unable to recover it. 
00:21:56.747 [2024-04-16 12:49:55.666534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.747 [2024-04-16 12:49:55.666707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.747 [2024-04-16 12:49:55.666747] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.747 qpair failed and we were unable to recover it. 00:21:56.747 [2024-04-16 12:49:55.666917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.747 [2024-04-16 12:49:55.667077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.747 [2024-04-16 12:49:55.667101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.747 qpair failed and we were unable to recover it. 00:21:56.747 [2024-04-16 12:49:55.667240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.747 [2024-04-16 12:49:55.667398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.747 [2024-04-16 12:49:55.667423] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.747 qpair failed and we were unable to recover it. 00:21:56.747 [2024-04-16 12:49:55.667576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.747 [2024-04-16 12:49:55.667723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.747 [2024-04-16 12:49:55.667763] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.747 qpair failed and we were unable to recover it. 00:21:56.747 [2024-04-16 12:49:55.667924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.747 [2024-04-16 12:49:55.668093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.747 [2024-04-16 12:49:55.668116] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.747 qpair failed and we were unable to recover it. 00:21:56.747 [2024-04-16 12:49:55.668273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.747 [2024-04-16 12:49:55.668387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.747 [2024-04-16 12:49:55.668411] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.747 qpair failed and we were unable to recover it. 00:21:56.747 [2024-04-16 12:49:55.668586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.747 [2024-04-16 12:49:55.668710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.747 [2024-04-16 12:49:55.668735] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.747 qpair failed and we were unable to recover it. 
00:21:56.747 [2024-04-16 12:49:55.668872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.747 [2024-04-16 12:49:55.669013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.747 [2024-04-16 12:49:55.669037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.747 qpair failed and we were unable to recover it. 00:21:56.747 [2024-04-16 12:49:55.669187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.747 [2024-04-16 12:49:55.669356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.747 [2024-04-16 12:49:55.669380] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.747 qpair failed and we were unable to recover it. 00:21:56.747 [2024-04-16 12:49:55.669576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.747 [2024-04-16 12:49:55.669736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.747 [2024-04-16 12:49:55.669761] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.747 qpair failed and we were unable to recover it. 00:21:56.747 [2024-04-16 12:49:55.669898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.747 [2024-04-16 12:49:55.670055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.747 [2024-04-16 12:49:55.670079] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.747 qpair failed and we were unable to recover it. 00:21:56.747 [2024-04-16 12:49:55.670226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.747 [2024-04-16 12:49:55.670376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.747 [2024-04-16 12:49:55.670400] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.747 qpair failed and we were unable to recover it. 00:21:56.747 [2024-04-16 12:49:55.670586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.747 [2024-04-16 12:49:55.670744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.747 [2024-04-16 12:49:55.670769] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.747 qpair failed and we were unable to recover it. 00:21:56.747 [2024-04-16 12:49:55.670917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.747 [2024-04-16 12:49:55.671066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.747 [2024-04-16 12:49:55.671091] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.747 qpair failed and we were unable to recover it. 
00:21:56.747 [2024-04-16 12:49:55.671241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.747 [2024-04-16 12:49:55.671376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.747 EAL: No free 2048 kB hugepages reported on node 1 00:21:56.747 [2024-04-16 12:49:55.671415] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.747 qpair failed and we were unable to recover it. 00:21:56.747 [2024-04-16 12:49:55.671575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.747 [2024-04-16 12:49:55.671695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.747 [2024-04-16 12:49:55.671719] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.747 qpair failed and we were unable to recover it. 00:21:56.747 [2024-04-16 12:49:55.671874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.747 [2024-04-16 12:49:55.672012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.747 [2024-04-16 12:49:55.672036] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.747 qpair failed and we were unable to recover it. 00:21:56.747 [2024-04-16 12:49:55.672205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.747 [2024-04-16 12:49:55.672349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.747 [2024-04-16 12:49:55.672387] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.747 qpair failed and we were unable to recover it. 00:21:56.747 [2024-04-16 12:49:55.672556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.747 [2024-04-16 12:49:55.672711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.747 [2024-04-16 12:49:55.672736] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.747 qpair failed and we were unable to recover it. 00:21:56.747 [2024-04-16 12:49:55.672876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.747 [2024-04-16 12:49:55.673031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.747 [2024-04-16 12:49:55.673054] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.747 qpair failed and we were unable to recover it. 00:21:56.747 [2024-04-16 12:49:55.673222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.747 [2024-04-16 12:49:55.673411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.747 [2024-04-16 12:49:55.673436] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.747 qpair failed and we were unable to recover it. 
00:21:56.747 [2024-04-16 12:49:55.673590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.747 [2024-04-16 12:49:55.673704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.747 [2024-04-16 12:49:55.673728] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.747 qpair failed and we were unable to recover it. 00:21:56.747 [2024-04-16 12:49:55.673860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.747 [2024-04-16 12:49:55.674007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.747 [2024-04-16 12:49:55.674031] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.747 qpair failed and we were unable to recover it. 00:21:56.747 [2024-04-16 12:49:55.674196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.747 [2024-04-16 12:49:55.674342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.747 [2024-04-16 12:49:55.674366] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.747 qpair failed and we were unable to recover it. 00:21:56.747 [2024-04-16 12:49:55.674529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.747 [2024-04-16 12:49:55.674702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.747 [2024-04-16 12:49:55.674728] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.747 qpair failed and we were unable to recover it. 00:21:56.747 [2024-04-16 12:49:55.674883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.747 [2024-04-16 12:49:55.675002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.747 [2024-04-16 12:49:55.675027] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.747 qpair failed and we were unable to recover it. 00:21:56.747 [2024-04-16 12:49:55.675180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.747 [2024-04-16 12:49:55.675361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.747 [2024-04-16 12:49:55.675386] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.747 qpair failed and we were unable to recover it. 00:21:56.747 [2024-04-16 12:49:55.675536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.747 [2024-04-16 12:49:55.675671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.748 [2024-04-16 12:49:55.675696] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.748 qpair failed and we were unable to recover it. 
00:21:56.748 [2024-04-16 12:49:55.675864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.748 [2024-04-16 12:49:55.676018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.748 [2024-04-16 12:49:55.676057] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.748 qpair failed and we were unable to recover it.
[... the same posix_sock_create / nvme_tcp_qpair_connect_sock failure repeats for every subsequent connection attempt (timestamps 12:49:55.676245 through 12:49:55.709116), always errno = 111 against tqpair=0x1ff5050 at addr=10.0.0.2, port=4420, and each attempt ends with "qpair failed and we were unable to recover it." ...]
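For context on the repeated failure: errno 111 on Linux is ECONNREFUSED, meaning connect() reached the destination host but nothing was accepting on the target port (here 10.0.0.2:4420, the NVMe/TCP well-known port), which typically means the nvmf target was not yet listening or had already gone away. A minimal standalone sketch, not SPDK code, that reproduces the same errno; the address and port are taken from the log above:

/* Sketch: reproduce "connect() failed, errno = 111" against a port with
 * no listener. errno 111 == ECONNREFUSED on Linux. */
#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);              /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
        /* With a reachable host but no listener, prints:
         * connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));

    close(fd);
    return 0;
}

Compiled with gcc and run on a host from which 10.0.0.2 is reachable but nothing listens on port 4420, this prints the same errno = 111 the autotest log shows (an unreachable host would instead yield ETIMEDOUT or EHOSTUNREACH).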
00:21:56.751 [2024-04-16 12:49:55.710716] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4
[... around that notice the identical connect()/qpair failure sequence continues (timestamps 12:49:55.709278 through 12:49:55.728975), unchanged apart from the timestamps: errno = 111, tqpair=0x1ff5050, addr=10.0.0.2, port=4420, and no attempt recovers ...]
00:21:56.753 [2024-04-16 12:49:55.729174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.753 [2024-04-16 12:49:55.729339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.753 [2024-04-16 12:49:55.729361] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.753 qpair failed and we were unable to recover it. 00:21:56.753 [2024-04-16 12:49:55.729539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.753 [2024-04-16 12:49:55.729711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.753 [2024-04-16 12:49:55.729745] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.753 qpair failed and we were unable to recover it. 00:21:56.753 [2024-04-16 12:49:55.729943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.753 [2024-04-16 12:49:55.730109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.753 [2024-04-16 12:49:55.730147] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.753 qpair failed and we were unable to recover it. 00:21:56.753 [2024-04-16 12:49:55.730404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.753 [2024-04-16 12:49:55.730600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.753 [2024-04-16 12:49:55.730623] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.753 qpair failed and we were unable to recover it. 00:21:56.753 [2024-04-16 12:49:55.730814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.753 [2024-04-16 12:49:55.730987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.753 [2024-04-16 12:49:55.731009] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.753 qpair failed and we were unable to recover it. 00:21:56.753 [2024-04-16 12:49:55.731196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.753 [2024-04-16 12:49:55.731316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.753 [2024-04-16 12:49:55.731354] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.753 qpair failed and we were unable to recover it. 00:21:56.753 [2024-04-16 12:49:55.731539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.753 [2024-04-16 12:49:55.731719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.753 [2024-04-16 12:49:55.731742] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.753 qpair failed and we were unable to recover it. 
00:21:56.753 [2024-04-16 12:49:55.731876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.753 [2024-04-16 12:49:55.732061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.753 [2024-04-16 12:49:55.732083] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.753 qpair failed and we were unable to recover it. 00:21:56.753 [2024-04-16 12:49:55.732265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.753 [2024-04-16 12:49:55.732434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.753 [2024-04-16 12:49:55.732456] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.753 qpair failed and we were unable to recover it. 00:21:56.753 [2024-04-16 12:49:55.732621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.753 [2024-04-16 12:49:55.732776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.753 [2024-04-16 12:49:55.732816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.753 qpair failed and we were unable to recover it. 00:21:56.753 [2024-04-16 12:49:55.732958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.753 [2024-04-16 12:49:55.733094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.753 [2024-04-16 12:49:55.733117] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.753 qpair failed and we were unable to recover it. 00:21:56.753 [2024-04-16 12:49:55.733307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.753 [2024-04-16 12:49:55.733471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.753 [2024-04-16 12:49:55.733493] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.753 qpair failed and we were unable to recover it. 00:21:56.753 [2024-04-16 12:49:55.733665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.753 [2024-04-16 12:49:55.733848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.753 [2024-04-16 12:49:55.733871] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.753 qpair failed and we were unable to recover it. 00:21:56.753 [2024-04-16 12:49:55.734033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.753 [2024-04-16 12:49:55.734148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.753 [2024-04-16 12:49:55.734171] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.753 qpair failed and we were unable to recover it. 
00:21:56.754 [2024-04-16 12:49:55.734371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.754 [2024-04-16 12:49:55.734536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.754 [2024-04-16 12:49:55.734559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.754 qpair failed and we were unable to recover it. 00:21:56.754 [2024-04-16 12:49:55.734737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.754 [2024-04-16 12:49:55.734968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.754 [2024-04-16 12:49:55.734991] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.754 qpair failed and we were unable to recover it. 00:21:56.754 [2024-04-16 12:49:55.735119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.754 [2024-04-16 12:49:55.735274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.754 [2024-04-16 12:49:55.735296] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.754 qpair failed and we were unable to recover it. 00:21:56.754 [2024-04-16 12:49:55.735441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.754 [2024-04-16 12:49:55.735591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.754 [2024-04-16 12:49:55.735630] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.754 qpair failed and we were unable to recover it. 00:21:56.754 [2024-04-16 12:49:55.735773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.754 [2024-04-16 12:49:55.735922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.754 [2024-04-16 12:49:55.735944] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.754 qpair failed and we were unable to recover it. 00:21:56.754 [2024-04-16 12:49:55.736113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.754 [2024-04-16 12:49:55.736266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.754 [2024-04-16 12:49:55.736303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.754 qpair failed and we were unable to recover it. 00:21:56.754 [2024-04-16 12:49:55.736479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.754 [2024-04-16 12:49:55.736668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.754 [2024-04-16 12:49:55.736693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.754 qpair failed and we were unable to recover it. 
00:21:56.754 [2024-04-16 12:49:55.736867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.754 [2024-04-16 12:49:55.737080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.754 [2024-04-16 12:49:55.737102] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.754 qpair failed and we were unable to recover it. 00:21:56.754 [2024-04-16 12:49:55.737260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.754 [2024-04-16 12:49:55.737424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.754 [2024-04-16 12:49:55.737447] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.754 qpair failed and we were unable to recover it. 00:21:56.754 [2024-04-16 12:49:55.737622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.754 [2024-04-16 12:49:55.737778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.754 [2024-04-16 12:49:55.737801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.754 qpair failed and we were unable to recover it. 00:21:56.754 [2024-04-16 12:49:55.737952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.754 [2024-04-16 12:49:55.738096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.754 [2024-04-16 12:49:55.738134] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.754 qpair failed and we were unable to recover it. 00:21:56.754 [2024-04-16 12:49:55.738277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.754 [2024-04-16 12:49:55.738454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.754 [2024-04-16 12:49:55.738481] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.754 qpair failed and we were unable to recover it. 00:21:56.754 [2024-04-16 12:49:55.738662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.754 [2024-04-16 12:49:55.738819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.754 [2024-04-16 12:49:55.738844] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.754 qpair failed and we were unable to recover it. 00:21:56.754 [2024-04-16 12:49:55.739073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.754 [2024-04-16 12:49:55.739217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.754 [2024-04-16 12:49:55.739254] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.754 qpair failed and we were unable to recover it. 
00:21:56.754 [2024-04-16 12:49:55.739411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.754 [2024-04-16 12:49:55.739572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.754 [2024-04-16 12:49:55.739596] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.754 qpair failed and we were unable to recover it. 00:21:56.754 [2024-04-16 12:49:55.739836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.754 [2024-04-16 12:49:55.740010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.754 [2024-04-16 12:49:55.740032] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.754 qpair failed and we were unable to recover it. 00:21:56.754 [2024-04-16 12:49:55.740174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.754 [2024-04-16 12:49:55.740343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.754 [2024-04-16 12:49:55.740365] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.754 qpair failed and we were unable to recover it. 00:21:56.754 [2024-04-16 12:49:55.740501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.754 [2024-04-16 12:49:55.740660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.754 [2024-04-16 12:49:55.740684] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.754 qpair failed and we were unable to recover it. 00:21:56.754 [2024-04-16 12:49:55.740846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.754 [2024-04-16 12:49:55.741010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.754 [2024-04-16 12:49:55.741046] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.754 qpair failed and we were unable to recover it. 00:21:56.754 [2024-04-16 12:49:55.741206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.754 [2024-04-16 12:49:55.741373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.754 [2024-04-16 12:49:55.741410] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.754 qpair failed and we were unable to recover it. 00:21:56.754 [2024-04-16 12:49:55.741601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.754 [2024-04-16 12:49:55.741749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.754 [2024-04-16 12:49:55.741772] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.754 qpair failed and we were unable to recover it. 
00:21:56.754 [2024-04-16 12:49:55.741919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.754 [2024-04-16 12:49:55.742064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.754 [2024-04-16 12:49:55.742100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.754 qpair failed and we were unable to recover it. 00:21:56.754 [2024-04-16 12:49:55.742298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.754 [2024-04-16 12:49:55.742493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.754 [2024-04-16 12:49:55.742515] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.754 qpair failed and we were unable to recover it. 00:21:56.754 [2024-04-16 12:49:55.742726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.754 [2024-04-16 12:49:55.742952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.754 [2024-04-16 12:49:55.742982] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.754 qpair failed and we were unable to recover it. 00:21:56.754 [2024-04-16 12:49:55.743141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.754 [2024-04-16 12:49:55.743345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.754 [2024-04-16 12:49:55.743368] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.754 qpair failed and we were unable to recover it. 00:21:56.754 [2024-04-16 12:49:55.743552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.754 [2024-04-16 12:49:55.743775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.754 [2024-04-16 12:49:55.743800] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.754 qpair failed and we were unable to recover it. 00:21:56.754 [2024-04-16 12:49:55.743970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.754 [2024-04-16 12:49:55.744208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.754 [2024-04-16 12:49:55.744231] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.754 qpair failed and we were unable to recover it. 00:21:56.754 [2024-04-16 12:49:55.744476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.754 [2024-04-16 12:49:55.744719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.754 [2024-04-16 12:49:55.744743] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.754 qpair failed and we were unable to recover it. 
00:21:56.754 [2024-04-16 12:49:55.745024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.754 [2024-04-16 12:49:55.745180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.754 [2024-04-16 12:49:55.745202] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.755 qpair failed and we were unable to recover it. 00:21:56.755 [2024-04-16 12:49:55.745366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.755 [2024-04-16 12:49:55.745510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.755 [2024-04-16 12:49:55.745534] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.755 qpair failed and we were unable to recover it. 00:21:56.755 [2024-04-16 12:49:55.745750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.755 [2024-04-16 12:49:55.745979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.755 [2024-04-16 12:49:55.746001] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.755 qpair failed and we were unable to recover it. 00:21:56.755 [2024-04-16 12:49:55.746182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.755 [2024-04-16 12:49:55.746328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.755 [2024-04-16 12:49:55.746366] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.755 qpair failed and we were unable to recover it. 00:21:56.755 [2024-04-16 12:49:55.746561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.755 [2024-04-16 12:49:55.746738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.755 [2024-04-16 12:49:55.746763] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.755 qpair failed and we were unable to recover it. 00:21:56.755 [2024-04-16 12:49:55.746880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.755 [2024-04-16 12:49:55.747090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.755 [2024-04-16 12:49:55.747113] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.755 qpair failed and we were unable to recover it. 00:21:56.755 [2024-04-16 12:49:55.747357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.755 [2024-04-16 12:49:55.747528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.755 [2024-04-16 12:49:55.747571] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.755 qpair failed and we were unable to recover it. 
00:21:56.755 [2024-04-16 12:49:55.747817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.755 [2024-04-16 12:49:55.747991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.755 [2024-04-16 12:49:55.748014] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.755 qpair failed and we were unable to recover it. 00:21:56.755 [2024-04-16 12:49:55.748187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.755 [2024-04-16 12:49:55.748334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.755 [2024-04-16 12:49:55.748356] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.755 qpair failed and we were unable to recover it. 00:21:56.755 [2024-04-16 12:49:55.748509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.755 [2024-04-16 12:49:55.748720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.755 [2024-04-16 12:49:55.748744] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.755 qpair failed and we were unable to recover it. 00:21:56.755 [2024-04-16 12:49:55.748916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.755 [2024-04-16 12:49:55.749063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.755 [2024-04-16 12:49:55.749100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.755 qpair failed and we were unable to recover it. 00:21:56.755 [2024-04-16 12:49:55.749241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.755 [2024-04-16 12:49:55.749408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.755 [2024-04-16 12:49:55.749431] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.755 qpair failed and we were unable to recover it. 00:21:56.755 [2024-04-16 12:49:55.749618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.755 [2024-04-16 12:49:55.749742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.755 [2024-04-16 12:49:55.749766] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.755 qpair failed and we were unable to recover it. 00:21:56.755 [2024-04-16 12:49:55.749888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.755 [2024-04-16 12:49:55.750060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.755 [2024-04-16 12:49:55.750097] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.755 qpair failed and we were unable to recover it. 
00:21:56.755 [2024-04-16 12:49:55.750282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.755 [2024-04-16 12:49:55.750461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.755 [2024-04-16 12:49:55.750484] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.755 qpair failed and we were unable to recover it. 00:21:56.755 [2024-04-16 12:49:55.750645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.755 [2024-04-16 12:49:55.750828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.755 [2024-04-16 12:49:55.750860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.755 qpair failed and we were unable to recover it. 00:21:56.755 [2024-04-16 12:49:55.751028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.755 [2024-04-16 12:49:55.751226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.755 [2024-04-16 12:49:55.751249] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.755 qpair failed and we were unable to recover it. 00:21:56.755 [2024-04-16 12:49:55.751433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.755 [2024-04-16 12:49:55.751600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.755 [2024-04-16 12:49:55.751624] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.755 qpair failed and we were unable to recover it. 00:21:56.755 [2024-04-16 12:49:55.751758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.755 [2024-04-16 12:49:55.751935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.755 [2024-04-16 12:49:55.751959] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.755 qpair failed and we were unable to recover it. 00:21:56.755 [2024-04-16 12:49:55.752088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.755 [2024-04-16 12:49:55.752236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.755 [2024-04-16 12:49:55.752259] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.755 qpair failed and we were unable to recover it. 00:21:56.755 [2024-04-16 12:49:55.752427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.755 [2024-04-16 12:49:55.752596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.755 [2024-04-16 12:49:55.752620] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.755 qpair failed and we were unable to recover it. 
00:21:56.755 [2024-04-16 12:49:55.752753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.755 [2024-04-16 12:49:55.752889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.755 [2024-04-16 12:49:55.752912] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.755 qpair failed and we were unable to recover it. 00:21:56.755 [2024-04-16 12:49:55.753054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.755 [2024-04-16 12:49:55.753194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.755 [2024-04-16 12:49:55.753217] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.755 qpair failed and we were unable to recover it. 00:21:56.755 [2024-04-16 12:49:55.753411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.755 [2024-04-16 12:49:55.753651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.755 [2024-04-16 12:49:55.753675] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.755 qpair failed and we were unable to recover it. 00:21:56.755 [2024-04-16 12:49:55.753819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.755 [2024-04-16 12:49:55.753968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.755 [2024-04-16 12:49:55.753991] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.755 qpair failed and we were unable to recover it. 00:21:56.755 [2024-04-16 12:49:55.754165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.756 [2024-04-16 12:49:55.754333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.756 [2024-04-16 12:49:55.754356] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.756 qpair failed and we were unable to recover it. 00:21:56.756 [2024-04-16 12:49:55.754526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.756 [2024-04-16 12:49:55.754765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.756 [2024-04-16 12:49:55.754789] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.756 qpair failed and we were unable to recover it. 00:21:56.756 [2024-04-16 12:49:55.754937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.756 [2024-04-16 12:49:55.755077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.756 [2024-04-16 12:49:55.755101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.756 qpair failed and we were unable to recover it. 
00:21:56.756 [2024-04-16 12:49:55.755264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.756 [2024-04-16 12:49:55.755418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.756 [2024-04-16 12:49:55.755441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.756 qpair failed and we were unable to recover it. 00:21:56.756 [2024-04-16 12:49:55.755617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.756 [2024-04-16 12:49:55.755783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.756 [2024-04-16 12:49:55.755807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.756 qpair failed and we were unable to recover it. 00:21:56.756 [2024-04-16 12:49:55.756034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.756 [2024-04-16 12:49:55.756235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.756 [2024-04-16 12:49:55.756257] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.756 qpair failed and we were unable to recover it. 00:21:56.756 [2024-04-16 12:49:55.756386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.756 [2024-04-16 12:49:55.756557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.756 [2024-04-16 12:49:55.756614] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.756 qpair failed and we were unable to recover it. 00:21:56.756 [2024-04-16 12:49:55.756809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.756 [2024-04-16 12:49:55.756950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.756 [2024-04-16 12:49:55.756973] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.756 qpair failed and we were unable to recover it. 00:21:56.756 [2024-04-16 12:49:55.757118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.756 [2024-04-16 12:49:55.757256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.756 [2024-04-16 12:49:55.757278] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.756 qpair failed and we were unable to recover it. 00:21:56.756 [2024-04-16 12:49:55.757435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.756 [2024-04-16 12:49:55.757556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.756 [2024-04-16 12:49:55.757606] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.756 qpair failed and we were unable to recover it. 
00:21:56.756 [2024-04-16 12:49:55.757865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.756 [2024-04-16 12:49:55.758040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.756 [2024-04-16 12:49:55.758062] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.756 qpair failed and we were unable to recover it. 00:21:56.756 [2024-04-16 12:49:55.758207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.756 [2024-04-16 12:49:55.758385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.756 [2024-04-16 12:49:55.758408] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.756 qpair failed and we were unable to recover it. 00:21:56.756 [2024-04-16 12:49:55.758667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.756 [2024-04-16 12:49:55.758799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.756 [2024-04-16 12:49:55.758822] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.756 qpair failed and we were unable to recover it. 00:21:56.756 [2024-04-16 12:49:55.759006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.756 [2024-04-16 12:49:55.759153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.756 [2024-04-16 12:49:55.759176] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.756 qpair failed and we were unable to recover it. 00:21:56.756 [2024-04-16 12:49:55.759369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.756 [2024-04-16 12:49:55.759536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.756 [2024-04-16 12:49:55.759580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.756 qpair failed and we were unable to recover it. 00:21:56.756 [2024-04-16 12:49:55.759797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.756 [2024-04-16 12:49:55.759961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.756 [2024-04-16 12:49:55.759984] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.756 qpair failed and we were unable to recover it. 00:21:56.756 [2024-04-16 12:49:55.760207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.756 [2024-04-16 12:49:55.760420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.756 [2024-04-16 12:49:55.760442] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.756 qpair failed and we were unable to recover it. 
00:21:56.756 [2024-04-16 12:49:55.760598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.756 [2024-04-16 12:49:55.760748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.756 [2024-04-16 12:49:55.760773] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.756 qpair failed and we were unable to recover it. 00:21:56.756 [2024-04-16 12:49:55.760948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.756 [2024-04-16 12:49:55.761102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.756 [2024-04-16 12:49:55.761125] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.756 qpair failed and we were unable to recover it. 00:21:56.756 [2024-04-16 12:49:55.761295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.756 [2024-04-16 12:49:55.761490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.756 [2024-04-16 12:49:55.761516] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.756 qpair failed and we were unable to recover it. 00:21:56.756 [2024-04-16 12:49:55.761704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.756 [2024-04-16 12:49:55.761964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.756 [2024-04-16 12:49:55.761986] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.756 qpair failed and we were unable to recover it. 00:21:56.756 [2024-04-16 12:49:55.762201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.756 [2024-04-16 12:49:55.762420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.756 [2024-04-16 12:49:55.762452] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.756 qpair failed and we were unable to recover it. 00:21:56.756 [2024-04-16 12:49:55.762659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.756 [2024-04-16 12:49:55.762815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.756 [2024-04-16 12:49:55.762837] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.756 qpair failed and we were unable to recover it. 00:21:56.756 [2024-04-16 12:49:55.763005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.756 [2024-04-16 12:49:55.763175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.756 [2024-04-16 12:49:55.763198] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.756 qpair failed and we were unable to recover it. 
00:21:56.756 [2024-04-16 12:49:55.763354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.756 [2024-04-16 12:49:55.763497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.756 [2024-04-16 12:49:55.763520] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.756 qpair failed and we were unable to recover it. 00:21:56.756 [2024-04-16 12:49:55.763706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.756 [2024-04-16 12:49:55.763921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.756 [2024-04-16 12:49:55.763944] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.756 qpair failed and we were unable to recover it. 00:21:56.756 [2024-04-16 12:49:55.764086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.756 [2024-04-16 12:49:55.764260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.756 [2024-04-16 12:49:55.764298] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.756 qpair failed and we were unable to recover it. 00:21:56.756 [2024-04-16 12:49:55.764444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.756 [2024-04-16 12:49:55.764570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.756 [2024-04-16 12:49:55.764609] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.756 qpair failed and we were unable to recover it. 00:21:56.756 [2024-04-16 12:49:55.764750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.756 [2024-04-16 12:49:55.764920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.756 [2024-04-16 12:49:55.764943] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.757 qpair failed and we were unable to recover it. 00:21:56.757 [2024-04-16 12:49:55.765118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.757 [2024-04-16 12:49:55.765257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.757 [2024-04-16 12:49:55.765280] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.757 qpair failed and we were unable to recover it. 00:21:56.757 [2024-04-16 12:49:55.765451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.757 [2024-04-16 12:49:55.765597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.757 [2024-04-16 12:49:55.765623] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:56.757 qpair failed and we were unable to recover it. 
00:21:56.757 [2024-04-16 12:49:55.765780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.757 [2024-04-16 12:49:55.765971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:56.757 [2024-04-16 12:49:55.765994] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:56.757 qpair failed and we were unable to recover it.
00:21:57.032 (previous failure sequence repeated ~150 more times for tqpair=0x1ff5050, addr=10.0.0.2, port=4420, from 12:49:55.766 through 12:49:55.823)
00:21:57.032 [2024-04-16 12:49:55.824180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.032 [2024-04-16 12:49:55.824423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.032 [2024-04-16 12:49:55.824445] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.032 qpair failed and we were unable to recover it. 00:21:57.032 [2024-04-16 12:49:55.824668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.032 [2024-04-16 12:49:55.824820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.032 [2024-04-16 12:49:55.824843] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.032 qpair failed and we were unable to recover it. 00:21:57.032 [2024-04-16 12:49:55.825074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.032 [2024-04-16 12:49:55.825221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.032 [2024-04-16 12:49:55.825244] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.032 qpair failed and we were unable to recover it. 00:21:57.032 [2024-04-16 12:49:55.825417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.032 [2024-04-16 12:49:55.825550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.032 [2024-04-16 12:49:55.825584] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.032 qpair failed and we were unable to recover it. 00:21:57.032 [2024-04-16 12:49:55.825793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.032 [2024-04-16 12:49:55.825967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.032 [2024-04-16 12:49:55.825989] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.032 qpair failed and we were unable to recover it. 00:21:57.032 [2024-04-16 12:49:55.826177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.032 [2024-04-16 12:49:55.826352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.032 [2024-04-16 12:49:55.826374] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.032 qpair failed and we were unable to recover it. 00:21:57.032 [2024-04-16 12:49:55.826526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.032 [2024-04-16 12:49:55.826758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.032 [2024-04-16 12:49:55.826782] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.032 qpair failed and we were unable to recover it. 
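errno = 111 in the failures above is ECONNREFUSED on Linux: the TCP connection attempt to 10.0.0.2 port 4420 is being actively refused, which is what a client sees while nothing is yet listening on the target port. As a quick sanity check (an illustrative C sketch, not part of this log or of SPDK):

    /* Illustrative only: decode the errno value 111 seen in the log above. */
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        /* On Linux, ECONNREFUSED is 111: the peer actively refused the
         * connection (RST in response to SYN, no listener on the port). */
        printf("errno %d (ECONNREFUSED=%d): %s\n",
               111, ECONNREFUSED, strerror(111));
        return 0;
    }

On a Linux host this prints "errno 111 (ECONNREFUSED=111): Connection refused", matching the posix_sock_create() errors above.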
00:21:57.032 [2024-04-16 12:49:55.827007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:57.032 [2024-04-16 12:49:55.827172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:57.032 [2024-04-16 12:49:55.827195] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:57.032 qpair failed and we were unable to recover it.
00:21:57.032 [2024-04-16 12:49:55.827357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:57.032 [2024-04-16 12:49:55.827501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:57.032 [2024-04-16 12:49:55.827525] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:57.032 qpair failed and we were unable to recover it.
00:21:57.032 [2024-04-16 12:49:55.827685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:57.032 [2024-04-16 12:49:55.827822] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:21:57.032 [2024-04-16 12:49:55.827858] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:21:57.032 [2024-04-16 12:49:55.827888] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:21:57.032 [2024-04-16 12:49:55.827900] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running.
00:21:57.032 [2024-04-16 12:49:55.827903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:57.032 [2024-04-16 12:49:55.827911] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:21:57.032 [2024-04-16 12:49:55.827926] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:57.032 qpair failed and we were unable to recover it.
00:21:57.032 [2024-04-16 12:49:55.828005] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5
00:21:57.032 [2024-04-16 12:49:55.828095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:57.032 [2024-04-16 12:49:55.828229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:57.032 [2024-04-16 12:49:55.828259] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:57.032 qpair failed and we were unable to recover it.
00:21:57.032 [2024-04-16 12:49:55.828064] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6
00:21:57.032 [2024-04-16 12:49:55.828162] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7
00:21:57.032 [2024-04-16 12:49:55.828165] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4
00:21:57.032 [2024-04-16 12:49:55.828409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:57.032 [2024-04-16 12:49:55.828577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:57.032 [2024-04-16 12:49:55.828603] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:57.032 qpair failed and we were unable to recover it.
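The NOTICE lines interleaved above show the nvmf target application still finishing startup: tracing is enabled (the log itself suggests 'spdk_trace -s nvmf -i 0' or copying /dev/shm/nvmf_trace.0 for offline analysis), and the reactors are only now starting on cores 4-7. The initiator side is already retrying its qpair connects at this point, so the refusals are plausibly just the listener on 10.0.0.2:4420 not being up yet. A self-contained sketch of that refuse-and-retry pattern, assuming a plain BSD-socket connect (a generic illustration; SPDK's posix_sock_create() does considerably more):

    /* Generic illustration (assumption, not SPDK code): keep retrying a TCP
     * connect for as long as the peer refuses it, as the qpairs above do. */
    #include <arpa/inet.h>
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    static int try_connect(const char *ip, int port)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in sa = { .sin_family = AF_INET,
                                  .sin_port = htons(port) };
        int rc, saved;

        if (fd < 0)
            return -1;
        inet_pton(AF_INET, ip, &sa.sin_addr);
        rc = connect(fd, (struct sockaddr *)&sa, sizeof(sa));
        saved = errno;      /* close() may clobber errno */
        close(fd);
        errno = saved;
        return rc;
    }

    int main(void)
    {
        /* 10.0.0.2:4420 mirrors the log; 4420 is the IANA NVMe/TCP port. */
        for (int i = 0; i < 50; i++) {
            if (try_connect("10.0.0.2", 4420) == 0) {
                printf("connected\n");
                return 0;
            }
            if (errno != ECONNREFUSED) {
                fprintf(stderr, "giving up: %s\n", strerror(errno));
                return 1;   /* only "connection refused" is worth retrying */
            }
            fprintf(stderr, "connect() failed, errno = %d\n", errno);
            usleep(100 * 1000);   /* brief pause; target may still be booting */
        }
        return 1;   /* still refused, like the unrecovered qpairs above */
    }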
00:21:57.032 [2024-04-16 12:49:55.828748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:57.032 [2024-04-16 12:49:55.828896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:57.032 [2024-04-16 12:49:55.828921] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:57.033 qpair failed and we were unable to recover it.
[... the same four-line failure group repeats without variation from 12:49:55.829068 through the group ending at 12:49:55.864246, the Jenkins wall-clock prefix advancing from 00:21:57.033 to 00:21:57.036; only the microsecond timestamps differ ...]
00:21:57.036 [2024-04-16 12:49:55.864419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:57.036 [2024-04-16 12:49:55.864570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:57.036 [2024-04-16 12:49:55.864596] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:57.037 qpair failed and we were unable to recover it.
00:21:57.037 [2024-04-16 12:49:55.864758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.037 [2024-04-16 12:49:55.864900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.037 [2024-04-16 12:49:55.864925] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.037 qpair failed and we were unable to recover it. 00:21:57.037 [2024-04-16 12:49:55.865050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.037 [2024-04-16 12:49:55.865194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.037 [2024-04-16 12:49:55.865219] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.037 qpair failed and we were unable to recover it. 00:21:57.037 [2024-04-16 12:49:55.865369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.037 [2024-04-16 12:49:55.865524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.037 [2024-04-16 12:49:55.865548] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.037 qpair failed and we were unable to recover it. 00:21:57.037 [2024-04-16 12:49:55.865670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.037 [2024-04-16 12:49:55.865820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.037 [2024-04-16 12:49:55.865844] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.037 qpair failed and we were unable to recover it. 00:21:57.037 [2024-04-16 12:49:55.866017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.037 [2024-04-16 12:49:55.866166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.037 [2024-04-16 12:49:55.866191] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.037 qpair failed and we were unable to recover it. 00:21:57.037 [2024-04-16 12:49:55.866311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.037 [2024-04-16 12:49:55.866453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.037 [2024-04-16 12:49:55.866478] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.037 qpair failed and we were unable to recover it. 00:21:57.037 [2024-04-16 12:49:55.866597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.037 [2024-04-16 12:49:55.866742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.037 [2024-04-16 12:49:55.866767] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.037 qpair failed and we were unable to recover it. 
00:21:57.037 [2024-04-16 12:49:55.866893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.037 [2024-04-16 12:49:55.867036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.037 [2024-04-16 12:49:55.867061] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.037 qpair failed and we were unable to recover it. 00:21:57.037 [2024-04-16 12:49:55.867206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.037 [2024-04-16 12:49:55.867347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.037 [2024-04-16 12:49:55.867372] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.037 qpair failed and we were unable to recover it. 00:21:57.037 [2024-04-16 12:49:55.867525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.037 [2024-04-16 12:49:55.867704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.037 [2024-04-16 12:49:55.867730] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.037 qpair failed and we were unable to recover it. 00:21:57.037 [2024-04-16 12:49:55.867873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.037 [2024-04-16 12:49:55.868028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.037 [2024-04-16 12:49:55.868052] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.037 qpair failed and we were unable to recover it. 00:21:57.037 [2024-04-16 12:49:55.868229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.037 [2024-04-16 12:49:55.868365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.037 [2024-04-16 12:49:55.868390] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.037 qpair failed and we were unable to recover it. 00:21:57.037 [2024-04-16 12:49:55.868573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.037 [2024-04-16 12:49:55.868745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.037 [2024-04-16 12:49:55.868769] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.037 qpair failed and we were unable to recover it. 00:21:57.037 [2024-04-16 12:49:55.868930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.037 [2024-04-16 12:49:55.869042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.037 [2024-04-16 12:49:55.869067] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.037 qpair failed and we were unable to recover it. 
00:21:57.037 [2024-04-16 12:49:55.869223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.037 [2024-04-16 12:49:55.869340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.037 [2024-04-16 12:49:55.869364] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.037 qpair failed and we were unable to recover it. 00:21:57.037 [2024-04-16 12:49:55.869510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.037 [2024-04-16 12:49:55.869653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.037 [2024-04-16 12:49:55.869678] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.037 qpair failed and we were unable to recover it. 00:21:57.037 [2024-04-16 12:49:55.869791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.037 [2024-04-16 12:49:55.869941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.037 [2024-04-16 12:49:55.869965] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.037 qpair failed and we were unable to recover it. 00:21:57.037 [2024-04-16 12:49:55.870123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.037 [2024-04-16 12:49:55.870248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.037 [2024-04-16 12:49:55.870272] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.037 qpair failed and we were unable to recover it. 00:21:57.037 [2024-04-16 12:49:55.870391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.037 [2024-04-16 12:49:55.870538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.037 [2024-04-16 12:49:55.870569] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.037 qpair failed and we were unable to recover it. 00:21:57.037 [2024-04-16 12:49:55.870744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.037 [2024-04-16 12:49:55.870886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.037 [2024-04-16 12:49:55.870911] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.037 qpair failed and we were unable to recover it. 00:21:57.037 [2024-04-16 12:49:55.871081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.037 [2024-04-16 12:49:55.871206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.037 [2024-04-16 12:49:55.871231] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.037 qpair failed and we were unable to recover it. 
00:21:57.037 [2024-04-16 12:49:55.871377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.037 [2024-04-16 12:49:55.871551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.038 [2024-04-16 12:49:55.871583] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.038 qpair failed and we were unable to recover it. 00:21:57.038 [2024-04-16 12:49:55.871731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.038 [2024-04-16 12:49:55.871873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.038 [2024-04-16 12:49:55.871897] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.038 qpair failed and we were unable to recover it. 00:21:57.038 [2024-04-16 12:49:55.872068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.038 [2024-04-16 12:49:55.872249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.038 [2024-04-16 12:49:55.872274] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.038 qpair failed and we were unable to recover it. 00:21:57.038 [2024-04-16 12:49:55.872405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.038 [2024-04-16 12:49:55.872536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.038 [2024-04-16 12:49:55.872561] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.038 qpair failed and we were unable to recover it. 00:21:57.038 [2024-04-16 12:49:55.872713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.038 [2024-04-16 12:49:55.872838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.038 [2024-04-16 12:49:55.872863] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.038 qpair failed and we were unable to recover it. 00:21:57.038 [2024-04-16 12:49:55.873046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.038 [2024-04-16 12:49:55.873191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.038 [2024-04-16 12:49:55.873215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.038 qpair failed and we were unable to recover it. 00:21:57.038 [2024-04-16 12:49:55.873364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.038 [2024-04-16 12:49:55.873536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.038 [2024-04-16 12:49:55.873560] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.038 qpair failed and we were unable to recover it. 
00:21:57.038 [2024-04-16 12:49:55.873747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.038 [2024-04-16 12:49:55.873878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.038 [2024-04-16 12:49:55.873903] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.038 qpair failed and we were unable to recover it. 00:21:57.038 [2024-04-16 12:49:55.874056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.038 [2024-04-16 12:49:55.874181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.038 [2024-04-16 12:49:55.874206] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.038 qpair failed and we were unable to recover it. 00:21:57.038 [2024-04-16 12:49:55.874376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.038 [2024-04-16 12:49:55.874523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.038 [2024-04-16 12:49:55.874547] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.038 qpair failed and we were unable to recover it. 00:21:57.038 [2024-04-16 12:49:55.874703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.038 [2024-04-16 12:49:55.874848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.038 [2024-04-16 12:49:55.874872] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.038 qpair failed and we were unable to recover it. 00:21:57.038 [2024-04-16 12:49:55.875043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.038 [2024-04-16 12:49:55.875185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.038 [2024-04-16 12:49:55.875210] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.038 qpair failed and we were unable to recover it. 00:21:57.038 [2024-04-16 12:49:55.875387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.038 [2024-04-16 12:49:55.875570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.038 [2024-04-16 12:49:55.875596] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.038 qpair failed and we were unable to recover it. 00:21:57.038 [2024-04-16 12:49:55.875747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.038 [2024-04-16 12:49:55.875887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.038 [2024-04-16 12:49:55.875911] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.038 qpair failed and we were unable to recover it. 
00:21:57.038 [2024-04-16 12:49:55.876047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.038 [2024-04-16 12:49:55.876192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.038 [2024-04-16 12:49:55.876217] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.038 qpair failed and we were unable to recover it. 00:21:57.038 [2024-04-16 12:49:55.876389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.038 [2024-04-16 12:49:55.876505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.038 [2024-04-16 12:49:55.876530] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.038 qpair failed and we were unable to recover it. 00:21:57.038 [2024-04-16 12:49:55.876715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.038 [2024-04-16 12:49:55.876837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.038 [2024-04-16 12:49:55.876862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.038 qpair failed and we were unable to recover it. 00:21:57.038 [2024-04-16 12:49:55.877007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.038 [2024-04-16 12:49:55.877157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.038 [2024-04-16 12:49:55.877182] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.038 qpair failed and we were unable to recover it. 00:21:57.038 [2024-04-16 12:49:55.877333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.038 [2024-04-16 12:49:55.877451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.038 [2024-04-16 12:49:55.877475] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.038 qpair failed and we were unable to recover it. 00:21:57.038 [2024-04-16 12:49:55.877625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.038 [2024-04-16 12:49:55.877767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.038 [2024-04-16 12:49:55.877792] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.038 qpair failed and we were unable to recover it. 00:21:57.038 [2024-04-16 12:49:55.877944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.038 [2024-04-16 12:49:55.878089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.038 [2024-04-16 12:49:55.878114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.038 qpair failed and we were unable to recover it. 
00:21:57.038 [2024-04-16 12:49:55.878261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.038 [2024-04-16 12:49:55.878413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.038 [2024-04-16 12:49:55.878437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.038 qpair failed and we were unable to recover it. 00:21:57.038 [2024-04-16 12:49:55.878561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.038 [2024-04-16 12:49:55.878713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.038 [2024-04-16 12:49:55.878738] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.038 qpair failed and we were unable to recover it. 00:21:57.038 [2024-04-16 12:49:55.878871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.038 [2024-04-16 12:49:55.879022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.038 [2024-04-16 12:49:55.879050] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.038 qpair failed and we were unable to recover it. 00:21:57.038 [2024-04-16 12:49:55.879197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.038 [2024-04-16 12:49:55.879364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.038 [2024-04-16 12:49:55.879389] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.038 qpair failed and we were unable to recover it. 00:21:57.038 [2024-04-16 12:49:55.879540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.038 [2024-04-16 12:49:55.879689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.038 [2024-04-16 12:49:55.879714] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.038 qpair failed and we were unable to recover it. 00:21:57.038 [2024-04-16 12:49:55.879864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.038 [2024-04-16 12:49:55.880005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.038 [2024-04-16 12:49:55.880029] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.038 qpair failed and we were unable to recover it. 00:21:57.038 [2024-04-16 12:49:55.880187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.038 [2024-04-16 12:49:55.880333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.038 [2024-04-16 12:49:55.880357] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.038 qpair failed and we were unable to recover it. 
00:21:57.038 [2024-04-16 12:49:55.880522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.038 [2024-04-16 12:49:55.880643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.038 [2024-04-16 12:49:55.880668] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.038 qpair failed and we were unable to recover it. 00:21:57.038 [2024-04-16 12:49:55.880808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.038 [2024-04-16 12:49:55.880941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.038 [2024-04-16 12:49:55.880965] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.038 qpair failed and we were unable to recover it. 00:21:57.038 [2024-04-16 12:49:55.881136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.038 [2024-04-16 12:49:55.881257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.038 [2024-04-16 12:49:55.881281] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.038 qpair failed and we were unable to recover it. 00:21:57.038 [2024-04-16 12:49:55.881427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.039 [2024-04-16 12:49:55.881580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.039 [2024-04-16 12:49:55.881605] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.039 qpair failed and we were unable to recover it. 00:21:57.039 [2024-04-16 12:49:55.881783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.039 [2024-04-16 12:49:55.881922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.039 [2024-04-16 12:49:55.881947] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.039 qpair failed and we were unable to recover it. 00:21:57.039 [2024-04-16 12:49:55.882116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.039 [2024-04-16 12:49:55.882263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.039 [2024-04-16 12:49:55.882291] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.039 qpair failed and we were unable to recover it. 00:21:57.039 [2024-04-16 12:49:55.882409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.039 [2024-04-16 12:49:55.882587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.039 [2024-04-16 12:49:55.882612] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.039 qpair failed and we were unable to recover it. 
00:21:57.039 [2024-04-16 12:49:55.882785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.039 [2024-04-16 12:49:55.882907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.039 [2024-04-16 12:49:55.882932] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.039 qpair failed and we were unable to recover it. 00:21:57.039 [2024-04-16 12:49:55.883084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.039 [2024-04-16 12:49:55.883253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.039 [2024-04-16 12:49:55.883277] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.039 qpair failed and we were unable to recover it. 00:21:57.039 [2024-04-16 12:49:55.883424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.039 [2024-04-16 12:49:55.883575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.039 [2024-04-16 12:49:55.883600] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.039 qpair failed and we were unable to recover it. 00:21:57.039 [2024-04-16 12:49:55.883741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.039 [2024-04-16 12:49:55.883890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.039 [2024-04-16 12:49:55.883915] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.039 qpair failed and we were unable to recover it. 00:21:57.039 [2024-04-16 12:49:55.884091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.039 [2024-04-16 12:49:55.884208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.039 [2024-04-16 12:49:55.884232] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.039 qpair failed and we were unable to recover it. 00:21:57.039 [2024-04-16 12:49:55.884379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.039 [2024-04-16 12:49:55.884540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.039 [2024-04-16 12:49:55.884571] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.039 qpair failed and we were unable to recover it. 00:21:57.039 [2024-04-16 12:49:55.884745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.039 [2024-04-16 12:49:55.884894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.039 [2024-04-16 12:49:55.884918] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.039 qpair failed and we were unable to recover it. 
00:21:57.039 [2024-04-16 12:49:55.885063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.039 [2024-04-16 12:49:55.885211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.039 [2024-04-16 12:49:55.885236] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.039 qpair failed and we were unable to recover it. 00:21:57.039 [2024-04-16 12:49:55.885397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.039 [2024-04-16 12:49:55.885553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.039 [2024-04-16 12:49:55.885586] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.039 qpair failed and we were unable to recover it. 00:21:57.039 [2024-04-16 12:49:55.885737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.039 [2024-04-16 12:49:55.885898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.039 [2024-04-16 12:49:55.885923] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.039 qpair failed and we were unable to recover it. 00:21:57.039 [2024-04-16 12:49:55.886070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.039 [2024-04-16 12:49:55.886216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.039 [2024-04-16 12:49:55.886241] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.039 qpair failed and we were unable to recover it. 00:21:57.039 [2024-04-16 12:49:55.886415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.039 [2024-04-16 12:49:55.886611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.039 [2024-04-16 12:49:55.886637] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.039 qpair failed and we were unable to recover it. 00:21:57.039 [2024-04-16 12:49:55.886759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.039 [2024-04-16 12:49:55.886885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.039 [2024-04-16 12:49:55.886910] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.039 qpair failed and we were unable to recover it. 00:21:57.039 [2024-04-16 12:49:55.887059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.039 [2024-04-16 12:49:55.887217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.039 [2024-04-16 12:49:55.887242] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.039 qpair failed and we were unable to recover it. 
00:21:57.039 [2024-04-16 12:49:55.887353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.039 [2024-04-16 12:49:55.887516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.039 [2024-04-16 12:49:55.887540] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.039 qpair failed and we were unable to recover it. 00:21:57.039 [2024-04-16 12:49:55.887698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.039 [2024-04-16 12:49:55.887870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.039 [2024-04-16 12:49:55.887895] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.039 qpair failed and we were unable to recover it. 00:21:57.039 [2024-04-16 12:49:55.888020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.039 [2024-04-16 12:49:55.888160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.039 [2024-04-16 12:49:55.888184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.039 qpair failed and we were unable to recover it. 00:21:57.039 [2024-04-16 12:49:55.888362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.039 [2024-04-16 12:49:55.888503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.039 [2024-04-16 12:49:55.888528] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.039 qpair failed and we were unable to recover it. 00:21:57.039 [2024-04-16 12:49:55.888698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.039 [2024-04-16 12:49:55.888822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.039 [2024-04-16 12:49:55.888847] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.039 qpair failed and we were unable to recover it. 00:21:57.039 [2024-04-16 12:49:55.889025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.039 [2024-04-16 12:49:55.889153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.039 [2024-04-16 12:49:55.889178] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.039 qpair failed and we were unable to recover it. 00:21:57.039 [2024-04-16 12:49:55.889335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.039 [2024-04-16 12:49:55.889475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.039 [2024-04-16 12:49:55.889499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.039 qpair failed and we were unable to recover it. 
00:21:57.039 [2024-04-16 12:49:55.889678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.039 [2024-04-16 12:49:55.889804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.039 [2024-04-16 12:49:55.889829] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.039 qpair failed and we were unable to recover it. 00:21:57.039 [2024-04-16 12:49:55.889986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.039 [2024-04-16 12:49:55.890112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.039 [2024-04-16 12:49:55.890136] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.039 qpair failed and we were unable to recover it. 00:21:57.039 [2024-04-16 12:49:55.890287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.039 [2024-04-16 12:49:55.890458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.039 [2024-04-16 12:49:55.890482] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.039 qpair failed and we were unable to recover it. 00:21:57.039 [2024-04-16 12:49:55.890631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.039 [2024-04-16 12:49:55.890772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.039 [2024-04-16 12:49:55.890797] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.039 qpair failed and we were unable to recover it. 00:21:57.039 [2024-04-16 12:49:55.890945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.039 [2024-04-16 12:49:55.891092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.039 [2024-04-16 12:49:55.891117] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.039 qpair failed and we were unable to recover it. 00:21:57.039 [2024-04-16 12:49:55.891236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.039 [2024-04-16 12:49:55.891409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.039 [2024-04-16 12:49:55.891434] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.039 qpair failed and we were unable to recover it. 00:21:57.039 [2024-04-16 12:49:55.891593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.040 [2024-04-16 12:49:55.891767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.040 [2024-04-16 12:49:55.891792] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.040 qpair failed and we were unable to recover it. 
00:21:57.040 [2024-04-16 12:49:55.891907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.040 [2024-04-16 12:49:55.892045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.040 [2024-04-16 12:49:55.892070] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.040 qpair failed and we were unable to recover it. 00:21:57.040 [2024-04-16 12:49:55.892200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.040 [2024-04-16 12:49:55.892328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.040 [2024-04-16 12:49:55.892353] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.040 qpair failed and we were unable to recover it. 00:21:57.040 [2024-04-16 12:49:55.892480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.040 [2024-04-16 12:49:55.892593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.040 [2024-04-16 12:49:55.892619] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.040 qpair failed and we were unable to recover it. 00:21:57.040 [2024-04-16 12:49:55.892762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.040 [2024-04-16 12:49:55.892907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.040 [2024-04-16 12:49:55.892931] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.040 qpair failed and we were unable to recover it. 00:21:57.040 [2024-04-16 12:49:55.893050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.040 [2024-04-16 12:49:55.893221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.040 [2024-04-16 12:49:55.893245] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.040 qpair failed and we were unable to recover it. 00:21:57.040 [2024-04-16 12:49:55.893388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.040 [2024-04-16 12:49:55.893547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.040 [2024-04-16 12:49:55.893601] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.040 qpair failed and we were unable to recover it. 00:21:57.040 [2024-04-16 12:49:55.893757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.040 [2024-04-16 12:49:55.893902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.040 [2024-04-16 12:49:55.893927] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.040 qpair failed and we were unable to recover it. 
00:21:57.040 [2024-04-16 12:49:55.894076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.040 [2024-04-16 12:49:55.894229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.040 [2024-04-16 12:49:55.894254] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.040 qpair failed and we were unable to recover it. 00:21:57.040 [2024-04-16 12:49:55.894364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.040 [2024-04-16 12:49:55.894510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.040 [2024-04-16 12:49:55.894535] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.040 qpair failed and we were unable to recover it. 00:21:57.040 [2024-04-16 12:49:55.894694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.040 [2024-04-16 12:49:55.894866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.040 [2024-04-16 12:49:55.894890] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.040 qpair failed and we were unable to recover it. 00:21:57.040 [2024-04-16 12:49:55.895040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.040 [2024-04-16 12:49:55.895180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.040 [2024-04-16 12:49:55.895204] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.040 qpair failed and we were unable to recover it. 00:21:57.040 [2024-04-16 12:49:55.895359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.040 [2024-04-16 12:49:55.895538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.040 [2024-04-16 12:49:55.895571] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.040 qpair failed and we were unable to recover it. 00:21:57.040 [2024-04-16 12:49:55.895722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.040 [2024-04-16 12:49:55.895874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.040 [2024-04-16 12:49:55.895898] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.040 qpair failed and we were unable to recover it. 00:21:57.040 [2024-04-16 12:49:55.896075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.040 [2024-04-16 12:49:55.896189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.040 [2024-04-16 12:49:55.896214] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.040 qpair failed and we were unable to recover it. 
00:21:57.040 [2024-04-16 12:49:55.896364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:57.040 [2024-04-16 12:49:55.896505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:57.040 [2024-04-16 12:49:55.896529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:57.040 qpair failed and we were unable to recover it.
00:21:57.040 [... the same four-record sequence (two posix_sock_create connect() failures with errno = 111, one nvme_tcp_qpair_connect_sock error for tqpair=0x1ff5050 at 10.0.0.2:4420, one "qpair failed and we were unable to recover it." line) repeats verbatim for every retry from 12:49:55.896687 through 12:49:55.945252 ...]
00:21:57.045 [2024-04-16 12:49:55.945398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:57.045 [2024-04-16 12:49:55.945541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:57.045 [2024-04-16 12:49:55.945594] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:57.045 qpair failed and we were unable to recover it.
00:21:57.045 [2024-04-16 12:49:55.945746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.045 [2024-04-16 12:49:55.945897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.045 [2024-04-16 12:49:55.945921] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.045 qpair failed and we were unable to recover it. 00:21:57.045 [2024-04-16 12:49:55.946067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.045 [2024-04-16 12:49:55.946215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.045 [2024-04-16 12:49:55.946240] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.045 qpair failed and we were unable to recover it. 00:21:57.045 [2024-04-16 12:49:55.946388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.045 [2024-04-16 12:49:55.946503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.045 [2024-04-16 12:49:55.946528] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.045 qpair failed and we were unable to recover it. 00:21:57.045 [2024-04-16 12:49:55.946674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.045 [2024-04-16 12:49:55.946793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.045 [2024-04-16 12:49:55.946817] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.045 qpair failed and we were unable to recover it. 00:21:57.045 [2024-04-16 12:49:55.946937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.045 [2024-04-16 12:49:55.947083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.045 [2024-04-16 12:49:55.947107] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.045 qpair failed and we were unable to recover it. 00:21:57.045 [2024-04-16 12:49:55.947247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.045 [2024-04-16 12:49:55.947400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.045 [2024-04-16 12:49:55.947424] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.045 qpair failed and we were unable to recover it. 00:21:57.045 [2024-04-16 12:49:55.947575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.045 [2024-04-16 12:49:55.947729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.045 [2024-04-16 12:49:55.947754] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.045 qpair failed and we were unable to recover it. 
00:21:57.045 [2024-04-16 12:49:55.947868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.045 [2024-04-16 12:49:55.948007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.046 [2024-04-16 12:49:55.948031] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.046 qpair failed and we were unable to recover it. 00:21:57.046 [2024-04-16 12:49:55.948153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.046 [2024-04-16 12:49:55.948279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.046 [2024-04-16 12:49:55.948303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.046 qpair failed and we were unable to recover it. 00:21:57.046 [2024-04-16 12:49:55.948427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.046 [2024-04-16 12:49:55.948594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.046 [2024-04-16 12:49:55.948619] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.046 qpair failed and we were unable to recover it. 00:21:57.046 [2024-04-16 12:49:55.948742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.046 [2024-04-16 12:49:55.948868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.046 [2024-04-16 12:49:55.948892] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.046 qpair failed and we were unable to recover it. 00:21:57.046 [2024-04-16 12:49:55.949011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.046 [2024-04-16 12:49:55.949164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.046 [2024-04-16 12:49:55.949188] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.046 qpair failed and we were unable to recover it. 00:21:57.046 [2024-04-16 12:49:55.949359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.046 [2024-04-16 12:49:55.949512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.046 [2024-04-16 12:49:55.949536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.046 qpair failed and we were unable to recover it. 00:21:57.046 [2024-04-16 12:49:55.949668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.046 [2024-04-16 12:49:55.949815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.046 [2024-04-16 12:49:55.949840] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.046 qpair failed and we were unable to recover it. 
00:21:57.046 [2024-04-16 12:49:55.949951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.046 [2024-04-16 12:49:55.950079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.046 [2024-04-16 12:49:55.950103] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.046 qpair failed and we were unable to recover it. 00:21:57.046 [2024-04-16 12:49:55.950248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.046 [2024-04-16 12:49:55.950418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.046 [2024-04-16 12:49:55.950442] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.046 qpair failed and we were unable to recover it. 00:21:57.046 [2024-04-16 12:49:55.950592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.046 [2024-04-16 12:49:55.950706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.046 [2024-04-16 12:49:55.950734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.046 qpair failed and we were unable to recover it. 00:21:57.046 [2024-04-16 12:49:55.950878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.046 [2024-04-16 12:49:55.951027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.046 [2024-04-16 12:49:55.951052] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.046 qpair failed and we were unable to recover it. 00:21:57.046 [2024-04-16 12:49:55.951223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.046 [2024-04-16 12:49:55.951343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.046 [2024-04-16 12:49:55.951367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.046 qpair failed and we were unable to recover it. 00:21:57.046 [2024-04-16 12:49:55.951483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.046 [2024-04-16 12:49:55.951611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.046 [2024-04-16 12:49:55.951636] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.046 qpair failed and we were unable to recover it. 00:21:57.046 [2024-04-16 12:49:55.951813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.046 [2024-04-16 12:49:55.951959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.046 [2024-04-16 12:49:55.951984] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.046 qpair failed and we were unable to recover it. 
00:21:57.046 [2024-04-16 12:49:55.952106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.046 [2024-04-16 12:49:55.952220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.046 [2024-04-16 12:49:55.952245] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.046 qpair failed and we were unable to recover it. 00:21:57.046 [2024-04-16 12:49:55.952397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.046 [2024-04-16 12:49:55.952509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.046 [2024-04-16 12:49:55.952534] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.046 qpair failed and we were unable to recover it. 00:21:57.046 [2024-04-16 12:49:55.952687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.046 [2024-04-16 12:49:55.952831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.046 [2024-04-16 12:49:55.952855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.046 qpair failed and we were unable to recover it. 00:21:57.046 [2024-04-16 12:49:55.952976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.046 [2024-04-16 12:49:55.953121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.046 [2024-04-16 12:49:55.953145] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.046 qpair failed and we were unable to recover it. 00:21:57.046 [2024-04-16 12:49:55.953329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.046 [2024-04-16 12:49:55.953448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.046 [2024-04-16 12:49:55.953473] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.046 qpair failed and we were unable to recover it. 00:21:57.046 [2024-04-16 12:49:55.953621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.046 [2024-04-16 12:49:55.953801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.046 [2024-04-16 12:49:55.953826] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.046 qpair failed and we were unable to recover it. 00:21:57.046 [2024-04-16 12:49:55.953948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.046 [2024-04-16 12:49:55.954097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.046 [2024-04-16 12:49:55.954121] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.046 qpair failed and we were unable to recover it. 
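Note: errno 111 on Linux is ECONNREFUSED, i.e. each connect() toward 10.0.0.2:4420 is being actively refused, consistent with the target's NVMe/TCP listener being down while the host keeps retrying the qpair. The same failure is easy to observe outside SPDK; a minimal bash sketch (illustrative only, assuming, as in this log, that nothing is listening on the target port):

    # probe 10.0.0.2:4420 with bash's /dev/tcp; the subshell keeps a failed
    # connect() from killing the calling script and from leaking fd 3
    if ! (exec 3<>/dev/tcp/10.0.0.2/4420) 2>/dev/null; then
        echo "connect() failed: ECONNREFUSED (errno 111)"
    fi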
00:21:57.046 [... connect() errno = 111 / tqpair=0x1ff5050 failure pattern continues from 12:49:55.954261 through 12:49:55.956102; duplicate entries elided ...]
00:21:57.046 12:49:55 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:21:57.047 12:49:55 -- common/autotest_common.sh@850 -- # return 0
00:21:57.047 12:49:55 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt
00:21:57.047 12:49:55 -- common/autotest_common.sh@716 -- # xtrace_disable
00:21:57.047 12:49:55 -- common/autotest_common.sh@10 -- # set +x
00:21:57.047 [... failure pattern continues through 12:49:55.958519; duplicate entries elided ...]
00:21:57.047 [... failure pattern continues from 12:49:55.958685 through 12:49:55.973589; duplicate entries elided ...]
00:21:57.048 [... failure pattern continues from 12:49:55.973719 through 12:49:55.975121; duplicate entries elided ...]
00:21:57.048 12:49:55 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:21:57.048 12:49:55 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:21:57.048 12:49:55 -- common/autotest_common.sh@549 -- # xtrace_disable
00:21:57.048 12:49:55 -- common/autotest_common.sh@10 -- # set +x
00:21:57.048 [... failure pattern continues through 12:49:55.975767; duplicate entries elided ...]
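Note: the rpc_cmd trace above is host/target_disconnect.sh creating the RAM-backed bdev the test exports. Outside the harness, the equivalent call can be issued directly against a running SPDK target with scripts/rpc.py; a minimal sketch with the same arguments as logged (64 MB total size, 512-byte block size, bdev name Malloc0; the default RPC socket is assumed):

    # create a 64 MB malloc bdev named Malloc0 with 512-byte blocks
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0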
00:21:57.048 [... failure pattern (connect() failed, errno = 111 → nvme_tcp_qpair_connect_sock error for tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 → "qpair failed and we were unable to recover it.") continues from 12:49:55.975922 through 12:49:55.984595; duplicate entries elided ...]
00:21:57.049 [2024-04-16 12:49:55.984732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.049 [2024-04-16 12:49:55.984851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.049 [2024-04-16 12:49:55.984875] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.049 qpair failed and we were unable to recover it. 00:21:57.049 [2024-04-16 12:49:55.985030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.049 [2024-04-16 12:49:55.985172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.049 [2024-04-16 12:49:55.985197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.049 qpair failed and we were unable to recover it. 00:21:57.049 [2024-04-16 12:49:55.985322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.049 [2024-04-16 12:49:55.985445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.049 [2024-04-16 12:49:55.985469] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.049 qpair failed and we were unable to recover it. 00:21:57.049 [2024-04-16 12:49:55.985621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.049 [2024-04-16 12:49:55.985739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.049 [2024-04-16 12:49:55.985764] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.049 qpair failed and we were unable to recover it. 00:21:57.049 [2024-04-16 12:49:55.985918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.049 [2024-04-16 12:49:55.986138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.049 [2024-04-16 12:49:55.986163] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.049 qpair failed and we were unable to recover it. 00:21:57.049 [2024-04-16 12:49:55.986281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.049 [2024-04-16 12:49:55.986451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.049 [2024-04-16 12:49:55.986475] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.049 qpair failed and we were unable to recover it. 00:21:57.049 [2024-04-16 12:49:55.986629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.049 [2024-04-16 12:49:55.986754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.049 [2024-04-16 12:49:55.986779] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.049 qpair failed and we were unable to recover it. 
00:21:57.049 [2024-04-16 12:49:55.986924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.050 [2024-04-16 12:49:55.987039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.050 [2024-04-16 12:49:55.987064] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.050 qpair failed and we were unable to recover it. 00:21:57.050 [2024-04-16 12:49:55.987219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.050 [2024-04-16 12:49:55.987358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.050 [2024-04-16 12:49:55.987382] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.050 qpair failed and we were unable to recover it. 00:21:57.050 [2024-04-16 12:49:55.987534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.050 [2024-04-16 12:49:55.987653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.050 [2024-04-16 12:49:55.987679] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.050 qpair failed and we were unable to recover it. 00:21:57.050 [2024-04-16 12:49:55.987831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.050 [2024-04-16 12:49:55.988040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.050 [2024-04-16 12:49:55.988064] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.050 qpair failed and we were unable to recover it. 00:21:57.050 [2024-04-16 12:49:55.988214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.050 [2024-04-16 12:49:55.988363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.050 [2024-04-16 12:49:55.988388] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.050 qpair failed and we were unable to recover it. 00:21:57.050 [2024-04-16 12:49:55.988543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.050 [2024-04-16 12:49:55.988666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.050 [2024-04-16 12:49:55.988691] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.050 qpair failed and we were unable to recover it. 00:21:57.050 [2024-04-16 12:49:55.988818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.050 [2024-04-16 12:49:55.988963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.050 [2024-04-16 12:49:55.988987] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.050 qpair failed and we were unable to recover it. 
00:21:57.050 [2024-04-16 12:49:55.989133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.050 [2024-04-16 12:49:55.989337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.050 [2024-04-16 12:49:55.989362] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.050 qpair failed and we were unable to recover it. 00:21:57.050 [2024-04-16 12:49:55.989507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.050 [2024-04-16 12:49:55.989636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.050 [2024-04-16 12:49:55.989662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.050 qpair failed and we were unable to recover it. 00:21:57.050 [2024-04-16 12:49:55.989823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.050 [2024-04-16 12:49:55.989965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.050 [2024-04-16 12:49:55.989990] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.050 qpair failed and we were unable to recover it. 00:21:57.050 [2024-04-16 12:49:55.990158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.050 [2024-04-16 12:49:55.990303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.050 [2024-04-16 12:49:55.990328] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.050 qpair failed and we were unable to recover it. 00:21:57.050 [2024-04-16 12:49:55.990443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.050 [2024-04-16 12:49:55.990589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.050 [2024-04-16 12:49:55.990614] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.050 qpair failed and we were unable to recover it. 00:21:57.050 [2024-04-16 12:49:55.990741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.050 [2024-04-16 12:49:55.990865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.050 [2024-04-16 12:49:55.990889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.050 qpair failed and we were unable to recover it. 00:21:57.050 [2024-04-16 12:49:55.991012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.050 [2024-04-16 12:49:55.991184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.050 [2024-04-16 12:49:55.991209] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.050 qpair failed and we were unable to recover it. 
00:21:57.050 [2024-04-16 12:49:55.991356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.050 [2024-04-16 12:49:55.991474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.050 [2024-04-16 12:49:55.991507] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.050 qpair failed and we were unable to recover it. 00:21:57.050 [2024-04-16 12:49:55.991665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.050 [2024-04-16 12:49:55.991815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.050 [2024-04-16 12:49:55.991840] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.050 qpair failed and we were unable to recover it. 00:21:57.050 [2024-04-16 12:49:55.991955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.050 [2024-04-16 12:49:55.992068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.050 [2024-04-16 12:49:55.992093] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.050 qpair failed and we were unable to recover it. 00:21:57.050 [2024-04-16 12:49:55.992245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.050 [2024-04-16 12:49:55.992368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.050 [2024-04-16 12:49:55.992392] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.050 qpair failed and we were unable to recover it. 00:21:57.050 [2024-04-16 12:49:55.992511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.050 [2024-04-16 12:49:55.992664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.050 [2024-04-16 12:49:55.992689] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.050 qpair failed and we were unable to recover it. 00:21:57.050 [2024-04-16 12:49:55.992815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.050 [2024-04-16 12:49:55.993024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.050 [2024-04-16 12:49:55.993049] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.050 qpair failed and we were unable to recover it. 00:21:57.050 [2024-04-16 12:49:55.993197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.050 [2024-04-16 12:49:55.993309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.050 [2024-04-16 12:49:55.993333] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.050 qpair failed and we were unable to recover it. 
00:21:57.050 [2024-04-16 12:49:55.993514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.050 [2024-04-16 12:49:55.993637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.050 [2024-04-16 12:49:55.993663] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.050 qpair failed and we were unable to recover it. 00:21:57.050 [2024-04-16 12:49:55.993811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.050 [2024-04-16 12:49:55.993957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.050 [2024-04-16 12:49:55.993981] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.050 qpair failed and we were unable to recover it. 00:21:57.050 [2024-04-16 12:49:55.994132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.050 [2024-04-16 12:49:55.994253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.050 [2024-04-16 12:49:55.994277] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.050 qpair failed and we were unable to recover it. 00:21:57.050 [2024-04-16 12:49:55.994417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.050 [2024-04-16 12:49:55.994577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.050 [2024-04-16 12:49:55.994602] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.050 qpair failed and we were unable to recover it. 00:21:57.050 [2024-04-16 12:49:55.994748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.050 [2024-04-16 12:49:55.994893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.050 [2024-04-16 12:49:55.994918] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.050 qpair failed and we were unable to recover it. 00:21:57.050 [2024-04-16 12:49:55.995102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.050 [2024-04-16 12:49:55.995249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.050 [2024-04-16 12:49:55.995273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.050 qpair failed and we were unable to recover it. 00:21:57.050 [2024-04-16 12:49:55.995419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.050 [2024-04-16 12:49:55.995553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.050 [2024-04-16 12:49:55.995585] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.050 qpair failed and we were unable to recover it. 
00:21:57.050 [2024-04-16 12:49:55.995729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.050 [2024-04-16 12:49:55.995857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.050 [2024-04-16 12:49:55.995881] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.050 qpair failed and we were unable to recover it. 00:21:57.050 [2024-04-16 12:49:55.996026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.051 [2024-04-16 12:49:55.996164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.051 [2024-04-16 12:49:55.996189] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.051 qpair failed and we were unable to recover it. 00:21:57.051 [2024-04-16 12:49:55.996308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.051 [2024-04-16 12:49:55.996425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.051 [2024-04-16 12:49:55.996449] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.051 qpair failed and we were unable to recover it. 00:21:57.051 [2024-04-16 12:49:55.996575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.051 [2024-04-16 12:49:55.996721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.051 [2024-04-16 12:49:55.996745] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.051 qpair failed and we were unable to recover it. 00:21:57.051 [2024-04-16 12:49:55.996866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.051 [2024-04-16 12:49:55.997036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.051 [2024-04-16 12:49:55.997060] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.051 qpair failed and we were unable to recover it. 00:21:57.051 [2024-04-16 12:49:55.997188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.051 [2024-04-16 12:49:55.997313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.051 [2024-04-16 12:49:55.997337] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.051 qpair failed and we were unable to recover it. 00:21:57.051 [2024-04-16 12:49:55.997514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.051 [2024-04-16 12:49:55.997659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.051 [2024-04-16 12:49:55.997684] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420 00:21:57.051 qpair failed and we were unable to recover it. 
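errno 111 is ECONNREFUSED on Linux: each connect() above reaches 10.0.0.2:4420 while nothing is listening there yet, the kernel answers the SYN with an RST, and the NVMe/TCP initiator retries. A minimal sketch of the same failure mode using plain sockets (not SPDK code; the address and port are copied from the log, and any closed port behaves the same):

```python
# Reproduce "connect() failed, errno = 111" against a port with no listener.
import errno
import socket

assert errno.ECONNREFUSED == 111  # the errno reported by posix_sock_create

try:
    # 10.0.0.2:4420 mirrors the log; substitute any closed local port.
    with socket.create_connection(("10.0.0.2", 4420), timeout=1):
        print("connected - a listener is up")
except ConnectionRefusedError as e:
    # The path this log is on: no NVMe-oF listener has been created yet.
    print(f"connect() failed, errno = {e.errno}")  # -> 111
except OSError as e:
    print(f"other socket error: {e}")  # e.g. timeout or unreachable network
```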
00:21:57.051 [connect()/qpair failures continue, 12:49:55.997833 through 12:49:55.999234]
00:21:57.051 Malloc0
00:21:57.051 12:49:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:21:57.051 12:49:55 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:21:57.051 12:49:55 -- common/autotest_common.sh@549 -- # xtrace_disable
00:21:57.051 12:49:55 -- common/autotest_common.sh@10 -- # set +x
00:21:57.051 [connect()/qpair failures continue, 12:49:55.999379 through 12:49:56.002457]
00:21:57.051 [connect()/qpair failures continue, 12:49:56.002603 through 12:49:56.003043]
00:21:57.051 [2024-04-16 12:49:56.003093] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:21:57.051 [connect()/qpair failures continue, 12:49:56.003196 through 12:49:56.004614]
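The "*** TCP Transport Init ***" NOTICE is the target-side result of the `rpc_cmd nvmf_create_transport -t tcp -o` call traced just above; `rpc_cmd` ultimately sends a JSON-RPC 2.0 request to the SPDK application's UNIX-domain socket. A hedged sketch of that exchange, assuming SPDK's default socket path /var/tmp/spdk.sock (the method name matches the traced command; the response framing here is simplified):

```python
# Minimal JSON-RPC client for a running SPDK app (sketch, not scripts/rpc.py).
import json
import socket

def spdk_rpc(method, params=None, sock_path="/var/tmp/spdk.sock"):
    """Send one JSON-RPC 2.0 request to SPDK and return the 'result' field."""
    req = {"jsonrpc": "2.0", "id": 1, "method": method, "params": params or {}}
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(sock_path)
        s.sendall(json.dumps(req).encode())
        buf = b""
        while True:
            chunk = s.recv(4096)
            if not chunk:
                raise ConnectionError("socket closed before a full reply")
            buf += chunk
            try:
                resp = json.loads(buf)  # reply arrives as one JSON object
                break
            except json.JSONDecodeError:
                continue  # partial read; keep receiving
    if "error" in resp:
        raise RuntimeError(resp["error"])
    return resp["result"]

# The transport-init step logged above, expressed as a raw RPC.
spdk_rpc("nvmf_create_transport", {"trtype": "TCP"})
```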
00:21:57.051-00:21:57.052 [connect()/qpair failures continue, 12:49:56.004785 through 12:49:56.011137]
00:21:57.052 12:49:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:21:57.052 [connect()/qpair failures continue, 12:49:56.011304 through 12:49:56.011480]
00:21:57.052 12:49:56 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:21:57.052 12:49:56 -- common/autotest_common.sh@549 -- # xtrace_disable
00:21:57.052 12:49:56 -- common/autotest_common.sh@10 -- # set +x
00:21:57.052 [connect()/qpair failures continue, 12:49:56.011636 through 12:49:56.013612]
00:21:57.052-00:21:57.053 [connect()/qpair failures continue, 12:49:56.013798 through 12:49:56.018091]
00:21:57.053 [connect()/qpair failures continue, 12:49:56.018209 through 12:49:56.019348]
00:21:57.053 12:49:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:21:57.053 12:49:56 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:21:57.053 12:49:56 -- common/autotest_common.sh@549 -- # xtrace_disable
00:21:57.053 12:49:56 -- common/autotest_common.sh@10 -- # set +x
00:21:57.053 [connect()/qpair failures continue, 12:49:56.019476 through 12:49:56.020238]
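At this point the trace has shown the whole target bring-up around the connect storm: create the TCP transport, create subsystem nqn.2016-06.io.spdk:cnode1 (-a = allow any host, -s = serial number), and attach bdev Malloc0 as a namespace. Reusing the spdk_rpc() helper sketched earlier, the equivalent raw RPC sequence would look roughly like this (parameter names follow SPDK's JSON-RPC conventions and should be treated as assumptions; the NQN, serial number, and bdev name are verbatim from the log):

```python
# Target bring-up mirroring the rpc_cmd calls traced in this log section.
spdk_rpc("nvmf_create_transport", {"trtype": "TCP"})
spdk_rpc("nvmf_create_subsystem", {
    "nqn": "nqn.2016-06.io.spdk:cnode1",
    "allow_any_host": True,                  # the -a flag
    "serial_number": "SPDK00000000000001",   # the -s flag
})
spdk_rpc("nvmf_subsystem_add_ns", {
    "nqn": "nqn.2016-06.io.spdk:cnode1",
    "namespace": {"bdev_name": "Malloc0"},
})
```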
00:21:57.053 [2024-04-16 12:49:56.020356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:57.053 [2024-04-16 12:49:56.020532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:57.053 [2024-04-16 12:49:56.020556] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:57.053 qpair failed and we were unable to recover it.
00:21:57.053 [2024-04-16 12:49:56.020701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:57.053 [2024-04-16 12:49:56.020821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:57.053 [2024-04-16 12:49:56.020846] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:57.053 qpair failed and we were unable to recover it.
00:21:57.053 [2024-04-16 12:49:56.020982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:57.053 [2024-04-16 12:49:56.021129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:57.053 [2024-04-16 12:49:56.021153] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:57.053 qpair failed and we were unable to recover it.
00:21:57.053 [2024-04-16 12:49:56.021312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:57.053 [2024-04-16 12:49:56.021452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:57.053 [2024-04-16 12:49:56.021476] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:57.053 qpair failed and we were unable to recover it.
00:21:57.053 [2024-04-16 12:49:56.021624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:57.053 [2024-04-16 12:49:56.021800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:57.053 [2024-04-16 12:49:56.021825] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:57.053 qpair failed and we were unable to recover it.
00:21:57.053 [2024-04-16 12:49:56.021953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:57.053 [2024-04-16 12:49:56.022100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:57.053 [2024-04-16 12:49:56.022124] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:57.053 qpair failed and we were unable to recover it.
00:21:57.053 [2024-04-16 12:49:56.022295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:57.053 [2024-04-16 12:49:56.022436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:57.053 [2024-04-16 12:49:56.022460] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:57.053 qpair failed and we were unable to recover it.
00:21:57.053 [2024-04-16 12:49:56.022634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:57.053 [2024-04-16 12:49:56.022764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:57.053 [2024-04-16 12:49:56.022789] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:57.053 qpair failed and we were unable to recover it.
00:21:57.053 [2024-04-16 12:49:56.022935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:57.053 [2024-04-16 12:49:56.023086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:57.053 [2024-04-16 12:49:56.023110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:57.053 qpair failed and we were unable to recover it.
00:21:57.053 [2024-04-16 12:49:56.023251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:57.053 [2024-04-16 12:49:56.023369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:57.053 [2024-04-16 12:49:56.023393] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:57.053 qpair failed and we were unable to recover it.
00:21:57.053 [2024-04-16 12:49:56.023533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:57.053 [2024-04-16 12:49:56.023688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:57.053 [2024-04-16 12:49:56.023713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:57.053 qpair failed and we were unable to recover it.
00:21:57.053 [2024-04-16 12:49:56.023834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:57.053 [2024-04-16 12:49:56.023976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:57.053 [2024-04-16 12:49:56.024000] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:57.053 qpair failed and we were unable to recover it.
00:21:57.053 [2024-04-16 12:49:56.024178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:57.053 [2024-04-16 12:49:56.024296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:57.053 [2024-04-16 12:49:56.024320] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:57.053 qpair failed and we were unable to recover it.
00:21:57.053 [2024-04-16 12:49:56.024465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:57.053 [2024-04-16 12:49:56.024594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:57.053 [2024-04-16 12:49:56.024620] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:57.053 qpair failed and we were unable to recover it.
00:21:57.053 [2024-04-16 12:49:56.024747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:57.053 [2024-04-16 12:49:56.024892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:57.053 [2024-04-16 12:49:56.024916] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:57.053 qpair failed and we were unable to recover it.
00:21:57.053 [2024-04-16 12:49:56.025053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:57.053 [2024-04-16 12:49:56.025201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:57.053 [2024-04-16 12:49:56.025226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:57.053 qpair failed and we were unable to recover it.
00:21:57.053 [2024-04-16 12:49:56.025386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:57.053 [2024-04-16 12:49:56.025530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:57.053 [2024-04-16 12:49:56.025555] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:57.053 qpair failed and we were unable to recover it.
00:21:57.053 [2024-04-16 12:49:56.025680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:57.053 [2024-04-16 12:49:56.025839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:57.053 [2024-04-16 12:49:56.025864] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:57.053 qpair failed and we were unable to recover it.
00:21:57.053 [2024-04-16 12:49:56.026050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:57.053 [2024-04-16 12:49:56.026173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:57.053 [2024-04-16 12:49:56.026197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:57.053 qpair failed and we were unable to recover it.
00:21:57.053 [2024-04-16 12:49:56.026320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:57.053 [2024-04-16 12:49:56.026471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:57.053 [2024-04-16 12:49:56.026496] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:57.053 qpair failed and we were unable to recover it.
00:21:57.053 [2024-04-16 12:49:56.026644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:57.053 [2024-04-16 12:49:56.026790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:57.053 [2024-04-16 12:49:56.026815] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:57.053 qpair failed and we were unable to recover it.
00:21:57.053 [2024-04-16 12:49:56.026958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:57.053 [2024-04-16 12:49:56.027103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:57.053 [2024-04-16 12:49:56.027127] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:57.053 qpair failed and we were unable to recover it.
00:21:57.053 [2024-04-16 12:49:56.027300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:57.053 12:49:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] [2024-04-16 12:49:56.027423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:57.054 [2024-04-16 12:49:56.027447] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:57.054 qpair failed and we were unable to recover it.
00:21:57.054 12:49:56 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:21:57.054 [2024-04-16 12:49:56.027599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:57.054 12:49:56 -- common/autotest_common.sh@549 -- # xtrace_disable
00:21:57.054 [2024-04-16 12:49:56.027749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:57.054 [2024-04-16 12:49:56.027774] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:57.054 qpair failed and we were unable to recover it.
00:21:57.054 12:49:56 -- common/autotest_common.sh@10 -- # set +x
00:21:57.054 [2024-04-16 12:49:56.027890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:57.054 [2024-04-16 12:49:56.028011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:57.054 [2024-04-16 12:49:56.028036] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:57.054 qpair failed and we were unable to recover it.
00:21:57.054 [2024-04-16 12:49:56.028163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:57.054 [2024-04-16 12:49:56.028287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:57.054 [2024-04-16 12:49:56.028312] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:57.054 qpair failed and we were unable to recover it.
00:21:57.054 [2024-04-16 12:49:56.028486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:57.054 [2024-04-16 12:49:56.028657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:57.054 [2024-04-16 12:49:56.028683] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:57.054 qpair failed and we were unable to recover it.
00:21:57.054 [2024-04-16 12:49:56.028808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:57.054 [2024-04-16 12:49:56.028957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:57.054 [2024-04-16 12:49:56.028982] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:57.054 qpair failed and we were unable to recover it.
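Same pattern for the listener RPC traced above; the direct equivalent would be roughly (again assuming the default RPC socket):
+ ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
Once this completes, the target starts accepting on 10.0.0.2:4420, which is what the nvmf_tcp_listen NOTICE below records; the connect() retry storm with errno = 111 stops at that point.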
00:21:57.054 [2024-04-16 12:49:56.029162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:57.054 [2024-04-16 12:49:56.029311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:57.054 [2024-04-16 12:49:56.029340] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:57.054 qpair failed and we were unable to recover it.
00:21:57.054 [2024-04-16 12:49:56.029507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:57.054 [2024-04-16 12:49:56.029622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:57.054 [2024-04-16 12:49:56.029647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:57.054 qpair failed and we were unable to recover it.
00:21:57.054 [2024-04-16 12:49:56.029790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:57.054 [2024-04-16 12:49:56.029937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:57.054 [2024-04-16 12:49:56.029962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:57.054 qpair failed and we were unable to recover it.
00:21:57.054 [2024-04-16 12:49:56.030113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:57.054 [2024-04-16 12:49:56.030226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:57.054 [2024-04-16 12:49:56.030250] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:57.054 qpair failed and we were unable to recover it.
00:21:57.054 [2024-04-16 12:49:56.030368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:57.054 [2024-04-16 12:49:56.030509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:57.054 [2024-04-16 12:49:56.030533] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:57.054 qpair failed and we were unable to recover it.
00:21:57.054 [2024-04-16 12:49:56.030675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:57.054 [2024-04-16 12:49:56.030795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:57.054 [2024-04-16 12:49:56.030819] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:57.054 qpair failed and we were unable to recover it.
00:21:57.054 [2024-04-16 12:49:56.030939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:57.054 [2024-04-16 12:49:56.031089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:57.054 [2024-04-16 12:49:56.031114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff5050 with addr=10.0.0.2, port=4420
00:21:57.054 qpair failed and we were unable to recover it.
00:21:57.054 [2024-04-16 12:49:56.031266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.054 [2024-04-16 12:49:56.031345] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:57.054 [2024-04-16 12:49:56.034278] posix.c: 675:posix_sock_psk_use_session_client_cb: *ERROR*: PSK is not set 00:21:57.054 [2024-04-16 12:49:56.034339] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ff5050 (107): Transport endpoint is not connected 00:21:57.054 [2024-04-16 12:49:56.034406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:57.054 qpair failed and we were unable to recover it. 00:21:57.054 12:49:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:57.054 12:49:56 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:57.054 12:49:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:57.054 12:49:56 -- common/autotest_common.sh@10 -- # set +x 00:21:57.054 12:49:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:57.054 12:49:56 -- host/target_disconnect.sh@58 -- # wait 1269928 00:21:57.054 [2024-04-16 12:49:56.043743] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:57.054 [2024-04-16 12:49:56.043910] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:57.054 [2024-04-16 12:49:56.043938] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:57.054 [2024-04-16 12:49:56.043968] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:57.054 [2024-04-16 12:49:56.043981] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:21:57.054 [2024-04-16 12:49:56.044010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:57.054 qpair failed and we were unable to recover it. 00:21:57.054 [2024-04-16 12:49:56.053752] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:57.054 [2024-04-16 12:49:56.053921] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:57.054 [2024-04-16 12:49:56.053948] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:57.054 [2024-04-16 12:49:56.053967] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:57.054 [2024-04-16 12:49:56.053980] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:21:57.054 [2024-04-16 12:49:56.054008] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:57.054 qpair failed and we were unable to recover it. 
00:21:57.054 [2024-04-16 12:49:56.063696] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:57.054 [2024-04-16 12:49:56.063826] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:57.054 [2024-04-16 12:49:56.063852] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:57.054 [2024-04-16 12:49:56.063867] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:57.054 [2024-04-16 12:49:56.063879] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:57.054 [2024-04-16 12:49:56.063907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:57.054 qpair failed and we were unable to recover it.
00:21:57.054 [2024-04-16 12:49:56.073693] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:57.054 [2024-04-16 12:49:56.073823] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:57.054 [2024-04-16 12:49:56.073849] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:57.054 [2024-04-16 12:49:56.073863] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:57.054 [2024-04-16 12:49:56.073876] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:57.054 [2024-04-16 12:49:56.073903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:57.054 qpair failed and we were unable to recover it.
00:21:57.054 [2024-04-16 12:49:56.083762] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:57.054 [2024-04-16 12:49:56.083910] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:57.054 [2024-04-16 12:49:56.083936] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:57.054 [2024-04-16 12:49:56.083951] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:57.054 [2024-04-16 12:49:56.083963] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:57.054 [2024-04-16 12:49:56.083991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:57.054 qpair failed and we were unable to recover it.
00:21:57.313 [2024-04-16 12:49:56.093731] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:57.313 [2024-04-16 12:49:56.093876] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:57.313 [2024-04-16 12:49:56.093903] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:57.313 [2024-04-16 12:49:56.093917] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:57.313 [2024-04-16 12:49:56.093929] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:57.313 [2024-04-16 12:49:56.093957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:57.313 qpair failed and we were unable to recover it.
00:21:57.313 [2024-04-16 12:49:56.103816] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:57.313 [2024-04-16 12:49:56.104010] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:57.313 [2024-04-16 12:49:56.104042] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:57.313 [2024-04-16 12:49:56.104057] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:57.313 [2024-04-16 12:49:56.104068] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:57.313 [2024-04-16 12:49:56.104096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:57.313 qpair failed and we were unable to recover it.
00:21:57.313 [2024-04-16 12:49:56.113823] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:57.313 [2024-04-16 12:49:56.113945] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:57.313 [2024-04-16 12:49:56.113971] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:57.313 [2024-04-16 12:49:56.113986] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:57.313 [2024-04-16 12:49:56.113998] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:57.313 [2024-04-16 12:49:56.114036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:57.313 qpair failed and we were unable to recover it.
00:21:57.313 [2024-04-16 12:49:56.123822] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:57.313 [2024-04-16 12:49:56.123943] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:57.313 [2024-04-16 12:49:56.123969] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:57.313 [2024-04-16 12:49:56.123984] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:57.313 [2024-04-16 12:49:56.123996] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:57.313 [2024-04-16 12:49:56.124024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:57.313 qpair failed and we were unable to recover it.
00:21:57.313 [2024-04-16 12:49:56.133821] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:57.313 [2024-04-16 12:49:56.133943] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:57.313 [2024-04-16 12:49:56.133976] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:57.313 [2024-04-16 12:49:56.133992] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:57.313 [2024-04-16 12:49:56.134004] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:57.313 [2024-04-16 12:49:56.134034] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:57.313 qpair failed and we were unable to recover it.
00:21:57.313 [2024-04-16 12:49:56.143836] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:57.313 [2024-04-16 12:49:56.143996] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:57.313 [2024-04-16 12:49:56.144023] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:57.313 [2024-04-16 12:49:56.144039] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:57.313 [2024-04-16 12:49:56.144051] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:57.313 [2024-04-16 12:49:56.144079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:57.313 qpair failed and we were unable to recover it.
00:21:57.313 [2024-04-16 12:49:56.153871] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:57.313 [2024-04-16 12:49:56.154046] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:57.313 [2024-04-16 12:49:56.154073] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:57.313 [2024-04-16 12:49:56.154088] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:57.313 [2024-04-16 12:49:56.154100] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:57.313 [2024-04-16 12:49:56.154128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:57.313 qpair failed and we were unable to recover it.
00:21:57.313 [2024-04-16 12:49:56.163937] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:57.313 [2024-04-16 12:49:56.164074] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:57.313 [2024-04-16 12:49:56.164101] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:57.313 [2024-04-16 12:49:56.164116] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:57.313 [2024-04-16 12:49:56.164128] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:57.313 [2024-04-16 12:49:56.164155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:57.313 qpair failed and we were unable to recover it.
00:21:57.313 [2024-04-16 12:49:56.174008] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:57.313 [2024-04-16 12:49:56.174193] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:57.313 [2024-04-16 12:49:56.174220] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:57.313 [2024-04-16 12:49:56.174234] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:57.313 [2024-04-16 12:49:56.174246] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:57.313 [2024-04-16 12:49:56.174274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:57.313 qpair failed and we were unable to recover it.
00:21:57.313 [2024-04-16 12:49:56.183954] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:57.313 [2024-04-16 12:49:56.184095] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:57.313 [2024-04-16 12:49:56.184121] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:57.313 [2024-04-16 12:49:56.184136] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:57.313 [2024-04-16 12:49:56.184148] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:57.313 [2024-04-16 12:49:56.184177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:57.313 qpair failed and we were unable to recover it.
00:21:57.313 [2024-04-16 12:49:56.194012] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:57.313 [2024-04-16 12:49:56.194147] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:57.313 [2024-04-16 12:49:56.194174] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:57.313 [2024-04-16 12:49:56.194189] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:57.313 [2024-04-16 12:49:56.194201] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:57.313 [2024-04-16 12:49:56.194229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:57.313 qpair failed and we were unable to recover it.
00:21:57.313 [2024-04-16 12:49:56.204055] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:57.313 [2024-04-16 12:49:56.204218] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:57.313 [2024-04-16 12:49:56.204245] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:57.313 [2024-04-16 12:49:56.204260] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:57.313 [2024-04-16 12:49:56.204272] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:57.313 [2024-04-16 12:49:56.204300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:57.313 qpair failed and we were unable to recover it.
00:21:57.314 [2024-04-16 12:49:56.214053] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:57.314 [2024-04-16 12:49:56.214192] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:57.314 [2024-04-16 12:49:56.214219] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:57.314 [2024-04-16 12:49:56.214234] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:57.314 [2024-04-16 12:49:56.214246] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:57.314 [2024-04-16 12:49:56.214274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:57.314 qpair failed and we were unable to recover it.
00:21:57.314 [2024-04-16 12:49:56.224084] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:57.314 [2024-04-16 12:49:56.224228] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:57.314 [2024-04-16 12:49:56.224259] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:57.314 [2024-04-16 12:49:56.224276] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:57.314 [2024-04-16 12:49:56.224288] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:57.314 [2024-04-16 12:49:56.224317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:57.314 qpair failed and we were unable to recover it.
00:21:57.314 [2024-04-16 12:49:56.234103] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:57.314 [2024-04-16 12:49:56.234246] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:57.314 [2024-04-16 12:49:56.234273] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:57.314 [2024-04-16 12:49:56.234288] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:57.314 [2024-04-16 12:49:56.234300] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:57.314 [2024-04-16 12:49:56.234328] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:57.314 qpair failed and we were unable to recover it.
00:21:57.314 [2024-04-16 12:49:56.244131] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:57.314 [2024-04-16 12:49:56.244272] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:57.314 [2024-04-16 12:49:56.244298] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:57.314 [2024-04-16 12:49:56.244313] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:57.314 [2024-04-16 12:49:56.244325] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:57.314 [2024-04-16 12:49:56.244354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:57.314 qpair failed and we were unable to recover it.
00:21:57.314 [2024-04-16 12:49:56.254209] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:57.314 [2024-04-16 12:49:56.254341] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:57.314 [2024-04-16 12:49:56.254368] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:57.314 [2024-04-16 12:49:56.254383] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:57.314 [2024-04-16 12:49:56.254395] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:57.314 [2024-04-16 12:49:56.254423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:57.314 qpair failed and we were unable to recover it.
00:21:57.314 [2024-04-16 12:49:56.264183] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:57.314 [2024-04-16 12:49:56.264322] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:57.314 [2024-04-16 12:49:56.264348] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:57.314 [2024-04-16 12:49:56.264363] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:57.314 [2024-04-16 12:49:56.264375] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:57.314 [2024-04-16 12:49:56.264409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:57.314 qpair failed and we were unable to recover it.
00:21:57.314 [2024-04-16 12:49:56.274238] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:57.314 [2024-04-16 12:49:56.274377] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:57.314 [2024-04-16 12:49:56.274404] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:57.314 [2024-04-16 12:49:56.274419] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:57.314 [2024-04-16 12:49:56.274431] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:57.314 [2024-04-16 12:49:56.274459] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:57.314 qpair failed and we were unable to recover it.
00:21:57.314 [2024-04-16 12:49:56.284268] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:57.314 [2024-04-16 12:49:56.284414] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:57.314 [2024-04-16 12:49:56.284441] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:57.314 [2024-04-16 12:49:56.284456] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:57.314 [2024-04-16 12:49:56.284468] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:57.314 [2024-04-16 12:49:56.284496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:57.314 qpair failed and we were unable to recover it.
00:21:57.314 [2024-04-16 12:49:56.294275] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:57.314 [2024-04-16 12:49:56.294407] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:57.314 [2024-04-16 12:49:56.294434] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:57.314 [2024-04-16 12:49:56.294449] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:57.314 [2024-04-16 12:49:56.294461] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:57.314 [2024-04-16 12:49:56.294490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:57.314 qpair failed and we were unable to recover it.
00:21:57.314 [2024-04-16 12:49:56.304283] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:57.314 [2024-04-16 12:49:56.304425] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:57.314 [2024-04-16 12:49:56.304452] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:57.314 [2024-04-16 12:49:56.304467] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:57.314 [2024-04-16 12:49:56.304479] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:57.314 [2024-04-16 12:49:56.304508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:57.314 qpair failed and we were unable to recover it.
00:21:57.314 [2024-04-16 12:49:56.314302] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:57.314 [2024-04-16 12:49:56.314453] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:57.314 [2024-04-16 12:49:56.314486] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:57.314 [2024-04-16 12:49:56.314502] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:57.314 [2024-04-16 12:49:56.314514] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:57.314 [2024-04-16 12:49:56.314543] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:57.314 qpair failed and we were unable to recover it.
00:21:57.314 [2024-04-16 12:49:56.324352] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:57.314 [2024-04-16 12:49:56.324487] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:57.314 [2024-04-16 12:49:56.324514] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:57.314 [2024-04-16 12:49:56.324529] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:57.314 [2024-04-16 12:49:56.324542] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:57.314 [2024-04-16 12:49:56.324590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:57.314 qpair failed and we were unable to recover it.
00:21:57.314 [2024-04-16 12:49:56.334355] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:57.314 [2024-04-16 12:49:56.334489] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:57.314 [2024-04-16 12:49:56.334516] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:57.314 [2024-04-16 12:49:56.334531] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:57.314 [2024-04-16 12:49:56.334543] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:57.314 [2024-04-16 12:49:56.334580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:57.314 qpair failed and we were unable to recover it.
00:21:57.314 [2024-04-16 12:49:56.344413] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:57.315 [2024-04-16 12:49:56.344561] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:57.315 [2024-04-16 12:49:56.344605] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:57.315 [2024-04-16 12:49:56.344620] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:57.315 [2024-04-16 12:49:56.344632] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:57.315 [2024-04-16 12:49:56.344660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:57.315 qpair failed and we were unable to recover it.
00:21:57.315 [2024-04-16 12:49:56.354498] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:57.315 [2024-04-16 12:49:56.354675] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:57.315 [2024-04-16 12:49:56.354702] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:57.315 [2024-04-16 12:49:56.354717] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:57.315 [2024-04-16 12:49:56.354729] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:57.315 [2024-04-16 12:49:56.354765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:57.315 qpair failed and we were unable to recover it.
00:21:57.315 [2024-04-16 12:49:56.364462] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:57.315 [2024-04-16 12:49:56.364610] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:57.315 [2024-04-16 12:49:56.364637] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:57.315 [2024-04-16 12:49:56.364652] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:57.315 [2024-04-16 12:49:56.364664] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:57.315 [2024-04-16 12:49:56.364692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:57.315 qpair failed and we were unable to recover it.
00:21:57.315 [2024-04-16 12:49:56.374483] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:57.315 [2024-04-16 12:49:56.374625] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:57.315 [2024-04-16 12:49:56.374651] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:57.315 [2024-04-16 12:49:56.374666] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:57.315 [2024-04-16 12:49:56.374679] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:57.315 [2024-04-16 12:49:56.374718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:57.315 qpair failed and we were unable to recover it.
00:21:57.574 [2024-04-16 12:49:56.384526] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:57.574 [2024-04-16 12:49:56.384678] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:57.574 [2024-04-16 12:49:56.384705] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:57.574 [2024-04-16 12:49:56.384719] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:57.574 [2024-04-16 12:49:56.384732] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:57.574 [2024-04-16 12:49:56.384760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:57.574 qpair failed and we were unable to recover it.
00:21:57.574 [2024-04-16 12:49:56.394562] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:57.574 [2024-04-16 12:49:56.394696] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:57.574 [2024-04-16 12:49:56.394722] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:57.574 [2024-04-16 12:49:56.394737] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:57.574 [2024-04-16 12:49:56.394749] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:57.574 [2024-04-16 12:49:56.394789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:57.574 qpair failed and we were unable to recover it.
00:21:57.574 [2024-04-16 12:49:56.404592] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:57.574 [2024-04-16 12:49:56.404720] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:57.574 [2024-04-16 12:49:56.404752] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:57.574 [2024-04-16 12:49:56.404767] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:57.574 [2024-04-16 12:49:56.404779] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:57.574 [2024-04-16 12:49:56.404808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:57.574 qpair failed and we were unable to recover it.
00:21:57.574 [2024-04-16 12:49:56.414590] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:57.574 [2024-04-16 12:49:56.414708] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:57.574 [2024-04-16 12:49:56.414734] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:57.574 [2024-04-16 12:49:56.414749] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:57.574 [2024-04-16 12:49:56.414761] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:57.574 [2024-04-16 12:49:56.414789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:57.574 qpair failed and we were unable to recover it.
00:21:57.574 [2024-04-16 12:49:56.424664] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:57.574 [2024-04-16 12:49:56.424837] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:57.574 [2024-04-16 12:49:56.424863] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:57.574 [2024-04-16 12:49:56.424878] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:57.574 [2024-04-16 12:49:56.424890] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:57.574 [2024-04-16 12:49:56.424919] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:57.574 qpair failed and we were unable to recover it.
00:21:57.574 [2024-04-16 12:49:56.434660] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:57.574 [2024-04-16 12:49:56.434782] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:57.574 [2024-04-16 12:49:56.434808] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:57.574 [2024-04-16 12:49:56.434823] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:57.574 [2024-04-16 12:49:56.434836] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:57.574 [2024-04-16 12:49:56.434864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:57.574 qpair failed and we were unable to recover it.
00:21:57.574 [2024-04-16 12:49:56.444694] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:57.574 [2024-04-16 12:49:56.444822] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:57.574 [2024-04-16 12:49:56.444848] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:57.574 [2024-04-16 12:49:56.444863] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:57.574 [2024-04-16 12:49:56.444881] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:57.574 [2024-04-16 12:49:56.444910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:57.574 qpair failed and we were unable to recover it.
00:21:57.574 [2024-04-16 12:49:56.454721] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:57.574 [2024-04-16 12:49:56.454840] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:57.574 [2024-04-16 12:49:56.454866] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:57.574 [2024-04-16 12:49:56.454881] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:57.574 [2024-04-16 12:49:56.454893] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:21:57.574 [2024-04-16 12:49:56.454920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:57.574 qpair failed and we were unable to recover it. 00:21:57.574 [2024-04-16 12:49:56.464753] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:57.574 [2024-04-16 12:49:56.464901] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:57.574 [2024-04-16 12:49:56.464927] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:57.574 [2024-04-16 12:49:56.464942] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:57.574 [2024-04-16 12:49:56.464954] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:21:57.574 [2024-04-16 12:49:56.464982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:57.574 qpair failed and we were unable to recover it. 00:21:57.574 [2024-04-16 12:49:56.474787] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:57.574 [2024-04-16 12:49:56.474910] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:57.574 [2024-04-16 12:49:56.474936] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:57.574 [2024-04-16 12:49:56.474951] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:57.575 [2024-04-16 12:49:56.474963] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:21:57.575 [2024-04-16 12:49:56.474991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:57.575 qpair failed and we were unable to recover it. 
00:21:57.575 [2024-04-16 12:49:56.484827] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:57.575 [2024-04-16 12:49:56.484946] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:57.575 [2024-04-16 12:49:56.484972] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:57.575 [2024-04-16 12:49:56.484987] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:57.575 [2024-04-16 12:49:56.484999] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:21:57.575 [2024-04-16 12:49:56.485026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:57.575 qpair failed and we were unable to recover it. 00:21:57.575 [2024-04-16 12:49:56.494826] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:57.575 [2024-04-16 12:49:56.494976] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:57.575 [2024-04-16 12:49:56.495001] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:57.575 [2024-04-16 12:49:56.495015] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:57.575 [2024-04-16 12:49:56.495027] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:21:57.575 [2024-04-16 12:49:56.495056] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:57.575 qpair failed and we were unable to recover it. 00:21:57.575 [2024-04-16 12:49:56.504898] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:57.575 [2024-04-16 12:49:56.505039] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:57.575 [2024-04-16 12:49:56.505065] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:57.575 [2024-04-16 12:49:56.505079] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:57.575 [2024-04-16 12:49:56.505091] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:21:57.575 [2024-04-16 12:49:56.505120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:57.575 qpair failed and we were unable to recover it. 
00:21:57.575 [2024-04-16 12:49:56.514955] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:57.575 [2024-04-16 12:49:56.515115] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:57.575 [2024-04-16 12:49:56.515142] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:57.575 [2024-04-16 12:49:56.515157] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:57.575 [2024-04-16 12:49:56.515169] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:21:57.575 [2024-04-16 12:49:56.515198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:57.575 qpair failed and we were unable to recover it. 00:21:57.575 [2024-04-16 12:49:56.524923] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:57.575 [2024-04-16 12:49:56.525054] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:57.575 [2024-04-16 12:49:56.525081] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:57.575 [2024-04-16 12:49:56.525096] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:57.575 [2024-04-16 12:49:56.525108] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:21:57.575 [2024-04-16 12:49:56.525136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:57.575 qpair failed and we were unable to recover it. 00:21:57.575 [2024-04-16 12:49:56.534916] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:57.575 [2024-04-16 12:49:56.535060] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:57.575 [2024-04-16 12:49:56.535086] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:57.575 [2024-04-16 12:49:56.535101] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:57.575 [2024-04-16 12:49:56.535119] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:21:57.575 [2024-04-16 12:49:56.535148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:57.575 qpair failed and we were unable to recover it. 
00:21:57.575 [2024-04-16 12:49:56.545052] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:57.575 [2024-04-16 12:49:56.545191] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:57.575 [2024-04-16 12:49:56.545217] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:57.575 [2024-04-16 12:49:56.545232] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:57.575 [2024-04-16 12:49:56.545244] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:21:57.575 [2024-04-16 12:49:56.545272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:57.575 qpair failed and we were unable to recover it. 00:21:57.575 [2024-04-16 12:49:56.555006] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:57.575 [2024-04-16 12:49:56.555143] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:57.575 [2024-04-16 12:49:56.555169] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:57.575 [2024-04-16 12:49:56.555184] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:57.575 [2024-04-16 12:49:56.555197] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:21:57.575 [2024-04-16 12:49:56.555226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:57.575 qpair failed and we were unable to recover it. 00:21:57.575 [2024-04-16 12:49:56.565090] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:57.575 [2024-04-16 12:49:56.565347] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:57.575 [2024-04-16 12:49:56.565373] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:57.575 [2024-04-16 12:49:56.565388] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:57.575 [2024-04-16 12:49:56.565401] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:21:57.575 [2024-04-16 12:49:56.565429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:57.575 qpair failed and we were unable to recover it. 
00:21:57.575 [2024-04-16 12:49:56.575066] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:57.575 [2024-04-16 12:49:56.575203] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:57.575 [2024-04-16 12:49:56.575229] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:57.575 [2024-04-16 12:49:56.575244] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:57.575 [2024-04-16 12:49:56.575257] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:21:57.575 [2024-04-16 12:49:56.575285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:57.575 qpair failed and we were unable to recover it. 00:21:57.575 [2024-04-16 12:49:56.585162] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:57.575 [2024-04-16 12:49:56.585358] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:57.575 [2024-04-16 12:49:56.585382] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:57.575 [2024-04-16 12:49:56.585397] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:57.575 [2024-04-16 12:49:56.585409] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:21:57.575 [2024-04-16 12:49:56.585436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:57.575 qpair failed and we were unable to recover it. 00:21:57.575 [2024-04-16 12:49:56.595143] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:57.575 [2024-04-16 12:49:56.595262] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:57.575 [2024-04-16 12:49:56.595287] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:57.575 [2024-04-16 12:49:56.595301] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:57.575 [2024-04-16 12:49:56.595313] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:21:57.575 [2024-04-16 12:49:56.595340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:57.575 qpair failed and we were unable to recover it. 
00:21:57.575 [2024-04-16 12:49:56.605162] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:57.575 [2024-04-16 12:49:56.605300] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:57.575 [2024-04-16 12:49:56.605327] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:57.575 [2024-04-16 12:49:56.605342] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:57.575 [2024-04-16 12:49:56.605355] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:21:57.575 [2024-04-16 12:49:56.605384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:57.575 qpair failed and we were unable to recover it. 00:21:57.575 [2024-04-16 12:49:56.615190] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:57.576 [2024-04-16 12:49:56.615329] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:57.576 [2024-04-16 12:49:56.615353] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:57.576 [2024-04-16 12:49:56.615368] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:57.576 [2024-04-16 12:49:56.615381] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:21:57.576 [2024-04-16 12:49:56.615408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:57.576 qpair failed and we were unable to recover it. 00:21:57.576 [2024-04-16 12:49:56.625253] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:57.576 [2024-04-16 12:49:56.625402] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:57.576 [2024-04-16 12:49:56.625426] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:57.576 [2024-04-16 12:49:56.625440] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:57.576 [2024-04-16 12:49:56.625458] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:21:57.576 [2024-04-16 12:49:56.625486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:57.576 qpair failed and we were unable to recover it. 
00:21:57.576 [2024-04-16 12:49:56.635233] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:57.576 [2024-04-16 12:49:56.635355] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:57.576 [2024-04-16 12:49:56.635380] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:57.576 [2024-04-16 12:49:56.635394] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:57.576 [2024-04-16 12:49:56.635406] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:21:57.576 [2024-04-16 12:49:56.635441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:57.576 qpair failed and we were unable to recover it. 00:21:57.835 [2024-04-16 12:49:56.645220] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:57.835 [2024-04-16 12:49:56.645359] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:57.835 [2024-04-16 12:49:56.645385] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:57.835 [2024-04-16 12:49:56.645400] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:57.835 [2024-04-16 12:49:56.645412] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:21:57.835 [2024-04-16 12:49:56.645439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:57.835 qpair failed and we were unable to recover it. 00:21:57.835 [2024-04-16 12:49:56.655243] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:57.835 [2024-04-16 12:49:56.655447] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:57.835 [2024-04-16 12:49:56.655472] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:57.835 [2024-04-16 12:49:56.655487] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:57.835 [2024-04-16 12:49:56.655500] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:21:57.835 [2024-04-16 12:49:56.655527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:57.835 qpair failed and we were unable to recover it. 
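The root cause of each block is the target-side lookup, not the wire: the CONNECT data for the I/O queue carries cntlid 0x1, and the subsystem has no live controller with that ID any more, most likely because this test tears the controller down while the host is still bringing its queues up. A simplified, self-contained sketch of the check implied by ctrlr.c: 706; subsystem_get_ctrlr() and connect_fail() are hypothetical stand-ins for the SPDK internals, and only the two status values are spec-defined:

#include <stdint.h>
#include <stdio.h>

/* Status values from the host log: sct 1 = command specific,
 * sc 0x82 (decimal 130) = Connect Invalid Parameters. */
#define SCT_COMMAND_SPECIFIC     0x1
#define SC_CONNECT_INVALID_PARAM 0x82

struct ctrlr { uint16_t cntlid; };

/* Hypothetical lookup; returns NULL to mimic this log, where the
 * controller with cntlid 0x1 is already gone. */
static struct ctrlr *
subsystem_get_ctrlr(uint16_t cntlid)
{
	(void)cntlid;
	return NULL;
}

/* Hypothetical stand-in for completing the CONNECT capsule in error. */
static void
connect_fail(uint8_t sct, uint8_t sc, const char *why)
{
	printf("CONNECT rejected: %s (sct %u, sc %u)\n", why, sct, sc);
}

/* Shape of the check behind "Unknown controller ID 0x1". */
static void
add_io_qpair(uint16_t cntlid)
{
	if (subsystem_get_ctrlr(cntlid) == NULL) {
		connect_fail(SCT_COMMAND_SPECIFIC, SC_CONNECT_INVALID_PARAM,
		             "Unknown controller ID");
		return;
	}
	/* ...otherwise the new I/O qpair would be attached here... */
}

int
main(void)
{
	add_io_qpair(0x1);	/* reproduces the 0x1 case from the log */
	return 0;
}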
00:21:57.835 [2024-04-16 12:49:56.665338] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:57.835 [2024-04-16 12:49:56.665467] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:57.835 [2024-04-16 12:49:56.665492] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:57.835 [2024-04-16 12:49:56.665506] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:57.835 [2024-04-16 12:49:56.665518] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:21:57.835 [2024-04-16 12:49:56.665561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:57.835 qpair failed and we were unable to recover it. 00:21:57.835 [2024-04-16 12:49:56.675346] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:57.835 [2024-04-16 12:49:56.675469] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:57.835 [2024-04-16 12:49:56.675496] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:57.835 [2024-04-16 12:49:56.675511] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:57.835 [2024-04-16 12:49:56.675523] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:21:57.835 [2024-04-16 12:49:56.675576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:57.835 qpair failed and we were unable to recover it. 00:21:57.835 [2024-04-16 12:49:56.685367] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:57.835 [2024-04-16 12:49:56.685482] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:57.835 [2024-04-16 12:49:56.685507] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:57.835 [2024-04-16 12:49:56.685522] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:57.835 [2024-04-16 12:49:56.685535] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:21:57.835 [2024-04-16 12:49:56.685588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:57.835 qpair failed and we were unable to recover it. 
00:21:57.835 [2024-04-16 12:49:56.695386] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:57.835 [2024-04-16 12:49:56.695502] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:57.835 [2024-04-16 12:49:56.695528] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:57.835 [2024-04-16 12:49:56.695558] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:57.835 [2024-04-16 12:49:56.695602] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:21:57.835 [2024-04-16 12:49:56.695636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:57.835 qpair failed and we were unable to recover it. 00:21:57.835 [2024-04-16 12:49:56.705442] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:57.835 [2024-04-16 12:49:56.705588] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:57.835 [2024-04-16 12:49:56.705615] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:57.835 [2024-04-16 12:49:56.705631] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:57.835 [2024-04-16 12:49:56.705644] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:21:57.835 [2024-04-16 12:49:56.705673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:57.835 qpair failed and we were unable to recover it. 00:21:57.835 [2024-04-16 12:49:56.715416] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:57.835 [2024-04-16 12:49:56.715585] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:57.835 [2024-04-16 12:49:56.715613] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:57.835 [2024-04-16 12:49:56.715634] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:57.835 [2024-04-16 12:49:56.715648] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:21:57.835 [2024-04-16 12:49:56.715678] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:57.835 qpair failed and we were unable to recover it. 
00:21:57.835 [2024-04-16 12:49:56.725451] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:57.835 [2024-04-16 12:49:56.725595] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:57.835 [2024-04-16 12:49:56.725621] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:57.835 [2024-04-16 12:49:56.725637] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:57.835 [2024-04-16 12:49:56.725649] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:21:57.835 [2024-04-16 12:49:56.725680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:57.835 qpair failed and we were unable to recover it. 00:21:57.835 [2024-04-16 12:49:56.735608] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:57.835 [2024-04-16 12:49:56.735730] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:57.835 [2024-04-16 12:49:56.735757] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:57.835 [2024-04-16 12:49:56.735773] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:57.835 [2024-04-16 12:49:56.735786] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:21:57.835 [2024-04-16 12:49:56.735815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:57.835 qpair failed and we were unable to recover it. 00:21:57.835 [2024-04-16 12:49:56.745530] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:57.835 [2024-04-16 12:49:56.745698] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:57.835 [2024-04-16 12:49:56.745725] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:57.835 [2024-04-16 12:49:56.745740] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:57.835 [2024-04-16 12:49:56.745753] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:21:57.835 [2024-04-16 12:49:56.745782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:57.835 qpair failed and we were unable to recover it. 
00:21:57.835 [2024-04-16 12:49:56.755523] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:57.835 [2024-04-16 12:49:56.755671] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:57.835 [2024-04-16 12:49:56.755699] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:57.835 [2024-04-16 12:49:56.755715] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:57.835 [2024-04-16 12:49:56.755728] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:21:57.835 [2024-04-16 12:49:56.755756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:57.835 qpair failed and we were unable to recover it. 00:21:57.835 [2024-04-16 12:49:56.765605] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:57.835 [2024-04-16 12:49:56.765744] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:57.835 [2024-04-16 12:49:56.765772] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:57.835 [2024-04-16 12:49:56.765787] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:57.836 [2024-04-16 12:49:56.765801] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:21:57.836 [2024-04-16 12:49:56.765830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:57.836 qpair failed and we were unable to recover it. 00:21:57.836 [2024-04-16 12:49:56.775642] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:57.836 [2024-04-16 12:49:56.775796] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:57.836 [2024-04-16 12:49:56.775823] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:57.836 [2024-04-16 12:49:56.775839] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:57.836 [2024-04-16 12:49:56.775866] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:21:57.836 [2024-04-16 12:49:56.775894] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:57.836 qpair failed and we were unable to recover it. 
00:21:57.836 [2024-04-16 12:49:56.785679] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:57.836 [2024-04-16 12:49:56.785806] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:57.836 [2024-04-16 12:49:56.785833] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:57.836 [2024-04-16 12:49:56.785848] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:57.836 [2024-04-16 12:49:56.785862] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:21:57.836 [2024-04-16 12:49:56.785905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:57.836 qpair failed and we were unable to recover it. 00:21:57.836 [2024-04-16 12:49:56.795672] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:57.836 [2024-04-16 12:49:56.795861] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:57.836 [2024-04-16 12:49:56.795887] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:57.836 [2024-04-16 12:49:56.795902] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:57.836 [2024-04-16 12:49:56.795914] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:21:57.836 [2024-04-16 12:49:56.795942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:57.836 qpair failed and we were unable to recover it. 00:21:57.836 [2024-04-16 12:49:56.805779] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:57.836 [2024-04-16 12:49:56.805916] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:57.836 [2024-04-16 12:49:56.805942] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:57.836 [2024-04-16 12:49:56.805963] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:57.836 [2024-04-16 12:49:56.805978] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:21:57.836 [2024-04-16 12:49:56.806005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:57.836 qpair failed and we were unable to recover it. 
00:21:57.836 [2024-04-16 12:49:56.815716] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:57.836 [2024-04-16 12:49:56.815838] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:57.836 [2024-04-16 12:49:56.815880] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:57.836 [2024-04-16 12:49:56.815895] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:57.836 [2024-04-16 12:49:56.815908] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:21:57.836 [2024-04-16 12:49:56.815936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:57.836 qpair failed and we were unable to recover it. 00:21:57.836 [2024-04-16 12:49:56.825803] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:57.836 [2024-04-16 12:49:56.825948] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:57.836 [2024-04-16 12:49:56.825974] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:57.836 [2024-04-16 12:49:56.825990] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:57.836 [2024-04-16 12:49:56.826002] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:21:57.836 [2024-04-16 12:49:56.826030] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:57.836 qpair failed and we were unable to recover it. 00:21:57.836 [2024-04-16 12:49:56.835920] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:57.836 [2024-04-16 12:49:56.836086] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:57.836 [2024-04-16 12:49:56.836113] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:57.836 [2024-04-16 12:49:56.836129] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:57.836 [2024-04-16 12:49:56.836141] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:21:57.836 [2024-04-16 12:49:56.836178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:57.836 qpair failed and we were unable to recover it. 
00:21:57.836 [2024-04-16 12:49:56.845913] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:57.836 [2024-04-16 12:49:56.846070] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:57.836 [2024-04-16 12:49:56.846097] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:57.836 [2024-04-16 12:49:56.846112] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:57.836 [2024-04-16 12:49:56.846132] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:21:57.836 [2024-04-16 12:49:56.846160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:57.836 qpair failed and we were unable to recover it. 00:21:57.836 [2024-04-16 12:49:56.855941] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:57.836 [2024-04-16 12:49:56.856079] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:57.836 [2024-04-16 12:49:56.856106] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:57.836 [2024-04-16 12:49:56.856121] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:57.836 [2024-04-16 12:49:56.856133] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:21:57.836 [2024-04-16 12:49:56.856172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:57.836 qpair failed and we were unable to recover it. 00:21:57.836 [2024-04-16 12:49:56.865979] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:57.836 [2024-04-16 12:49:56.866101] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:57.836 [2024-04-16 12:49:56.866127] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:57.836 [2024-04-16 12:49:56.866142] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:57.836 [2024-04-16 12:49:56.866155] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:21:57.836 [2024-04-16 12:49:56.866183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:57.836 qpair failed and we were unable to recover it. 
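Note the cadence: the bracketed timestamps advance by almost exactly 10 ms from one block to the next, and every block names the same qpair id 3 and the same tqpair pointer 0x1ff5050, so this reads as a single initiator re-driving one I/O queue in a tight retry loop rather than many hosts failing at once. A sketch of such a loop, assuming the application retries with spdk_nvme_ctrlr_reconnect_io_qpair() (a real SPDK API, though whether this test path uses it is an assumption) and a 10 ms pause between attempts:

#include <unistd.h>

#include "spdk/nvme.h"

/* Keep re-driving a failed I/O qpair until the target accepts it again. */
static void
retry_io_qpair(struct spdk_nvme_qpair *qpair)
{
	while (spdk_nvme_ctrlr_reconnect_io_qpair(qpair) != 0) {
		usleep(10 * 1000);	/* matches the ~10 ms spacing above */
	}
}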
00:21:57.836 [2024-04-16 12:49:56.875916] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:57.836 [2024-04-16 12:49:56.876040] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:57.836 [2024-04-16 12:49:56.876066] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:57.836 [2024-04-16 12:49:56.876082] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:57.836 [2024-04-16 12:49:56.876094] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:21:57.836 [2024-04-16 12:49:56.876121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:57.836 qpair failed and we were unable to recover it. 00:21:57.836 [2024-04-16 12:49:56.885970] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:57.836 [2024-04-16 12:49:56.886088] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:57.836 [2024-04-16 12:49:56.886114] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:57.836 [2024-04-16 12:49:56.886129] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:57.836 [2024-04-16 12:49:56.886141] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:21:57.836 [2024-04-16 12:49:56.886169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:57.836 qpair failed and we were unable to recover it. 00:21:57.836 [2024-04-16 12:49:56.895976] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:57.836 [2024-04-16 12:49:56.896092] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:57.836 [2024-04-16 12:49:56.896124] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:57.836 [2024-04-16 12:49:56.896140] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:57.836 [2024-04-16 12:49:56.896152] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:21:57.837 [2024-04-16 12:49:56.896179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:57.837 qpair failed and we were unable to recover it. 
00:21:58.096 [2024-04-16 12:49:56.906118] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:58.096 [2024-04-16 12:49:56.906241] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:58.096 [2024-04-16 12:49:56.906273] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:58.096 [2024-04-16 12:49:56.906288] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:58.096 [2024-04-16 12:49:56.906300] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:21:58.096 [2024-04-16 12:49:56.906328] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:58.096 qpair failed and we were unable to recover it. 00:21:58.096 [2024-04-16 12:49:56.916018] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:58.096 [2024-04-16 12:49:56.916137] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:58.096 [2024-04-16 12:49:56.916164] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:58.096 [2024-04-16 12:49:56.916179] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:58.096 [2024-04-16 12:49:56.916192] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:21:58.096 [2024-04-16 12:49:56.916219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:58.096 qpair failed and we were unable to recover it. 00:21:58.096 [2024-04-16 12:49:56.926075] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:58.096 [2024-04-16 12:49:56.926212] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:58.096 [2024-04-16 12:49:56.926238] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:58.096 [2024-04-16 12:49:56.926253] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:58.096 [2024-04-16 12:49:56.926265] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:21:58.096 [2024-04-16 12:49:56.926293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:58.096 qpair failed and we were unable to recover it. 
00:21:58.096 [2024-04-16 12:49:56.936098] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:58.096 [2024-04-16 12:49:56.936214] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:58.096 [2024-04-16 12:49:56.936240] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:58.096 [2024-04-16 12:49:56.936255] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:58.096 [2024-04-16 12:49:56.936267] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:21:58.096 [2024-04-16 12:49:56.936294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:58.096 qpair failed and we were unable to recover it. 00:21:58.096 [2024-04-16 12:49:56.946145] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:58.096 [2024-04-16 12:49:56.946271] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:58.096 [2024-04-16 12:49:56.946297] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:58.096 [2024-04-16 12:49:56.946313] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:58.096 [2024-04-16 12:49:56.946326] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:21:58.096 [2024-04-16 12:49:56.946354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:58.096 qpair failed and we were unable to recover it. 00:21:58.096 [2024-04-16 12:49:56.956157] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:58.096 [2024-04-16 12:49:56.956281] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:58.096 [2024-04-16 12:49:56.956308] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:58.096 [2024-04-16 12:49:56.956323] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:58.096 [2024-04-16 12:49:56.956336] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:21:58.096 [2024-04-16 12:49:56.956364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:58.096 qpair failed and we were unable to recover it. 
00:21:58.096 [2024-04-16 12:49:56.966157] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:58.096 [2024-04-16 12:49:56.966282] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:58.096 [2024-04-16 12:49:56.966309] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:58.096 [2024-04-16 12:49:56.966324] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:58.096 [2024-04-16 12:49:56.966337] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:21:58.096 [2024-04-16 12:49:56.966364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:58.096 qpair failed and we were unable to recover it. 00:21:58.096 [2024-04-16 12:49:56.976208] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:58.096 [2024-04-16 12:49:56.976338] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:58.096 [2024-04-16 12:49:56.976362] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:58.096 [2024-04-16 12:49:56.976377] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:58.096 [2024-04-16 12:49:56.976390] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:21:58.096 [2024-04-16 12:49:56.976417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:58.096 qpair failed and we were unable to recover it. 00:21:58.096 [2024-04-16 12:49:56.986224] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:58.096 [2024-04-16 12:49:56.986349] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:58.096 [2024-04-16 12:49:56.986378] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:58.096 [2024-04-16 12:49:56.986394] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:58.096 [2024-04-16 12:49:56.986406] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:21:58.096 [2024-04-16 12:49:56.986434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:58.096 qpair failed and we were unable to recover it. 
00:21:58.096 [2024-04-16 12:49:56.996250] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:58.096 [2024-04-16 12:49:56.996375] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:58.096 [2024-04-16 12:49:56.996400] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:58.096 [2024-04-16 12:49:56.996415] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:58.096 [2024-04-16 12:49:56.996427] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:21:58.096 [2024-04-16 12:49:56.996454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:58.096 qpair failed and we were unable to recover it. 00:21:58.096 [2024-04-16 12:49:57.006233] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:58.096 [2024-04-16 12:49:57.006356] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:58.096 [2024-04-16 12:49:57.006381] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:58.096 [2024-04-16 12:49:57.006397] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:58.096 [2024-04-16 12:49:57.006410] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:21:58.096 [2024-04-16 12:49:57.006438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:58.096 qpair failed and we were unable to recover it. 00:21:58.096 [2024-04-16 12:49:57.016283] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:58.096 [2024-04-16 12:49:57.016486] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:58.096 [2024-04-16 12:49:57.016524] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:58.096 [2024-04-16 12:49:57.016540] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:58.096 [2024-04-16 12:49:57.016578] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:21:58.096 [2024-04-16 12:49:57.016609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:58.096 qpair failed and we were unable to recover it. 
00:21:58.096 [2024-04-16 12:49:57.026317] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:58.096 [2024-04-16 12:49:57.026441] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:58.096 [2024-04-16 12:49:57.026465] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:58.097 [2024-04-16 12:49:57.026480] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:58.097 [2024-04-16 12:49:57.026493] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:21:58.097 [2024-04-16 12:49:57.026526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:58.097 qpair failed and we were unable to recover it. 00:21:58.097 [2024-04-16 12:49:57.036400] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:58.097 [2024-04-16 12:49:57.036530] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:58.097 [2024-04-16 12:49:57.036581] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:58.097 [2024-04-16 12:49:57.036598] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:58.097 [2024-04-16 12:49:57.036610] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:21:58.097 [2024-04-16 12:49:57.036639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:58.097 qpair failed and we were unable to recover it. 00:21:58.097 [2024-04-16 12:49:57.046440] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:58.097 [2024-04-16 12:49:57.046581] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:58.097 [2024-04-16 12:49:57.046607] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:58.097 [2024-04-16 12:49:57.046622] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:58.097 [2024-04-16 12:49:57.046636] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:21:58.097 [2024-04-16 12:49:57.046665] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:58.097 qpair failed and we were unable to recover it. 
00:21:58.097 [2024-04-16 12:49:57.056408] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:58.097 [2024-04-16 12:49:57.056527] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:58.097 [2024-04-16 12:49:57.056576] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:58.097 [2024-04-16 12:49:57.056593] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:58.097 [2024-04-16 12:49:57.056606] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:21:58.097 [2024-04-16 12:49:57.056636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:58.097 qpair failed and we were unable to recover it. 00:21:58.097 [2024-04-16 12:49:57.066482] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:58.097 [2024-04-16 12:49:57.066703] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:58.097 [2024-04-16 12:49:57.066730] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:58.097 [2024-04-16 12:49:57.066746] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:58.097 [2024-04-16 12:49:57.066759] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:21:58.097 [2024-04-16 12:49:57.066789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:58.097 qpair failed and we were unable to recover it. 00:21:58.097 [2024-04-16 12:49:57.076508] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:58.097 [2024-04-16 12:49:57.076658] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:58.097 [2024-04-16 12:49:57.076689] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:58.097 [2024-04-16 12:49:57.076706] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:58.097 [2024-04-16 12:49:57.076720] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:21:58.097 [2024-04-16 12:49:57.076748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:58.097 qpair failed and we were unable to recover it. 
00:21:58.097 [2024-04-16 12:49:57.086489] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:58.097 [2024-04-16 12:49:57.086704] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:58.097 [2024-04-16 12:49:57.086732] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:58.097 [2024-04-16 12:49:57.086747] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:58.097 [2024-04-16 12:49:57.086760] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:21:58.097 [2024-04-16 12:49:57.086790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:58.097 qpair failed and we were unable to recover it. 00:21:58.097 [2024-04-16 12:49:57.096512] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:58.097 [2024-04-16 12:49:57.096658] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:58.097 [2024-04-16 12:49:57.096685] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:58.097 [2024-04-16 12:49:57.096701] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:58.097 [2024-04-16 12:49:57.096714] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:21:58.097 [2024-04-16 12:49:57.096742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:58.097 qpair failed and we were unable to recover it. 00:21:58.097 [2024-04-16 12:49:57.106594] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:58.097 [2024-04-16 12:49:57.106754] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:58.097 [2024-04-16 12:49:57.106779] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:58.097 [2024-04-16 12:49:57.106794] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:58.097 [2024-04-16 12:49:57.106807] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:21:58.097 [2024-04-16 12:49:57.106836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:58.097 qpair failed and we were unable to recover it. 
00:21:58.097 [2024-04-16 12:49:57.116656] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:58.097 [2024-04-16 12:49:57.116800] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:58.097 [2024-04-16 12:49:57.116826] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:58.097 [2024-04-16 12:49:57.116841] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:58.097 [2024-04-16 12:49:57.116854] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:21:58.097 [2024-04-16 12:49:57.116905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:58.097 qpair failed and we were unable to recover it. 00:21:58.097 [2024-04-16 12:49:57.126705] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:58.097 [2024-04-16 12:49:57.126911] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:58.097 [2024-04-16 12:49:57.126935] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:58.097 [2024-04-16 12:49:57.126950] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:58.097 [2024-04-16 12:49:57.126963] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:21:58.097 [2024-04-16 12:49:57.126991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:58.097 qpair failed and we were unable to recover it. 00:21:58.097 [2024-04-16 12:49:57.136650] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:58.097 [2024-04-16 12:49:57.136775] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:58.097 [2024-04-16 12:49:57.136802] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:58.097 [2024-04-16 12:49:57.136817] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:58.097 [2024-04-16 12:49:57.136830] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:21:58.097 [2024-04-16 12:49:57.136875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:58.097 qpair failed and we were unable to recover it. 
00:21:58.097 [2024-04-16 12:49:57.146692] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:58.097 [2024-04-16 12:49:57.146829] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:58.097 [2024-04-16 12:49:57.146857] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:58.097 [2024-04-16 12:49:57.146873] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:58.097 [2024-04-16 12:49:57.146885] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:21:58.097 [2024-04-16 12:49:57.146929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:58.097 qpair failed and we were unable to recover it. 00:21:58.097 [2024-04-16 12:49:57.156676] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:58.097 [2024-04-16 12:49:57.156800] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:58.097 [2024-04-16 12:49:57.156827] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:58.097 [2024-04-16 12:49:57.156858] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:58.097 [2024-04-16 12:49:57.156871] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:21:58.098 [2024-04-16 12:49:57.156899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:58.098 qpair failed and we were unable to recover it. 00:21:58.356 [2024-04-16 12:49:57.166786] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:58.356 [2024-04-16 12:49:57.166924] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:58.356 [2024-04-16 12:49:57.166956] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:58.356 [2024-04-16 12:49:57.166980] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:58.357 [2024-04-16 12:49:57.166993] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:21:58.357 [2024-04-16 12:49:57.167021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:58.357 qpair failed and we were unable to recover it. 
00:21:58.357 [2024-04-16 12:49:57.176731] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:58.357 [2024-04-16 12:49:57.176860] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:58.357 [2024-04-16 12:49:57.176888] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:58.357 [2024-04-16 12:49:57.176903] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:58.357 [2024-04-16 12:49:57.176926] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:21:58.357 [2024-04-16 12:49:57.176954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:58.357 qpair failed and we were unable to recover it. 00:21:58.357 [2024-04-16 12:49:57.186826] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:58.357 [2024-04-16 12:49:57.186990] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:58.357 [2024-04-16 12:49:57.187016] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:58.357 [2024-04-16 12:49:57.187031] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:58.357 [2024-04-16 12:49:57.187043] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:21:58.357 [2024-04-16 12:49:57.187071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:58.357 qpair failed and we were unable to recover it. 00:21:58.357 [2024-04-16 12:49:57.196799] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:58.357 [2024-04-16 12:49:57.196959] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:58.357 [2024-04-16 12:49:57.196986] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:58.357 [2024-04-16 12:49:57.197001] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:58.357 [2024-04-16 12:49:57.197013] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:21:58.357 [2024-04-16 12:49:57.197041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:58.357 qpair failed and we were unable to recover it. 
00:21:58.357 [2024-04-16 12:49:57.206828] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:58.357 [2024-04-16 12:49:57.206958] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:58.357 [2024-04-16 12:49:57.206985] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:58.357 [2024-04-16 12:49:57.207001] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:58.357 [2024-04-16 12:49:57.207018] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:21:58.357 [2024-04-16 12:49:57.207046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:58.357 qpair failed and we were unable to recover it. 00:21:58.357 [2024-04-16 12:49:57.216907] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:58.357 [2024-04-16 12:49:57.217063] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:58.357 [2024-04-16 12:49:57.217089] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:58.357 [2024-04-16 12:49:57.217104] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:58.357 [2024-04-16 12:49:57.217117] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:21:58.357 [2024-04-16 12:49:57.217144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:58.357 qpair failed and we were unable to recover it. 00:21:58.357 [2024-04-16 12:49:57.226990] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:58.357 [2024-04-16 12:49:57.227115] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:58.357 [2024-04-16 12:49:57.227140] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:58.357 [2024-04-16 12:49:57.227155] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:58.357 [2024-04-16 12:49:57.227167] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:21:58.357 [2024-04-16 12:49:57.227194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:58.357 qpair failed and we were unable to recover it. 
00:21:58.357 [2024-04-16 12:49:57.236910] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:58.357 [2024-04-16 12:49:57.237046] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:58.357 [2024-04-16 12:49:57.237072] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:58.357 [2024-04-16 12:49:57.237087] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:58.357 [2024-04-16 12:49:57.237099] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:21:58.357 [2024-04-16 12:49:57.237127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:58.357 qpair failed and we were unable to recover it. 00:21:58.357 [2024-04-16 12:49:57.247010] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:58.357 [2024-04-16 12:49:57.247125] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:58.357 [2024-04-16 12:49:57.247151] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:58.357 [2024-04-16 12:49:57.247166] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:58.357 [2024-04-16 12:49:57.247178] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:21:58.357 [2024-04-16 12:49:57.247205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:58.357 qpair failed and we were unable to recover it. 00:21:58.357 [2024-04-16 12:49:57.256966] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:58.357 [2024-04-16 12:49:57.257096] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:58.357 [2024-04-16 12:49:57.257122] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:58.357 [2024-04-16 12:49:57.257137] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:58.357 [2024-04-16 12:49:57.257149] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:21:58.357 [2024-04-16 12:49:57.257177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:58.357 qpair failed and we were unable to recover it. 
00:21:58.357 [2024-04-16 12:49:57.267004] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:58.357 [2024-04-16 12:49:57.267129] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:58.357 [2024-04-16 12:49:57.267156] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:58.357 [2024-04-16 12:49:57.267170] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:58.357 [2024-04-16 12:49:57.267183] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:21:58.357 [2024-04-16 12:49:57.267210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:58.357 qpair failed and we were unable to recover it. 00:21:58.357 [2024-04-16 12:49:57.277047] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:58.357 [2024-04-16 12:49:57.277165] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:58.357 [2024-04-16 12:49:57.277191] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:58.357 [2024-04-16 12:49:57.277206] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:58.357 [2024-04-16 12:49:57.277219] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:21:58.357 [2024-04-16 12:49:57.277246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:58.357 qpair failed and we were unable to recover it. 00:21:58.357 [2024-04-16 12:49:57.287052] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:58.357 [2024-04-16 12:49:57.287172] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:58.357 [2024-04-16 12:49:57.287197] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:58.357 [2024-04-16 12:49:57.287212] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:58.357 [2024-04-16 12:49:57.287224] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:21:58.357 [2024-04-16 12:49:57.287252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:58.357 qpair failed and we were unable to recover it. 
00:21:58.357 [2024-04-16 12:49:57.297117] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:58.357 [2024-04-16 12:49:57.297237] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:58.357 [2024-04-16 12:49:57.297264] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:58.357 [2024-04-16 12:49:57.297279] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:58.357 [2024-04-16 12:49:57.297297] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:21:58.357 [2024-04-16 12:49:57.297325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:58.357 qpair failed and we were unable to recover it. 00:21:58.357 [2024-04-16 12:49:57.307122] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:58.358 [2024-04-16 12:49:57.307249] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:58.358 [2024-04-16 12:49:57.307275] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:58.358 [2024-04-16 12:49:57.307290] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:58.358 [2024-04-16 12:49:57.307302] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:21:58.358 [2024-04-16 12:49:57.307330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:58.358 qpair failed and we were unable to recover it. 00:21:58.358 [2024-04-16 12:49:57.317164] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:58.358 [2024-04-16 12:49:57.317283] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:58.358 [2024-04-16 12:49:57.317310] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:58.358 [2024-04-16 12:49:57.317325] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:58.358 [2024-04-16 12:49:57.317337] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:21:58.358 [2024-04-16 12:49:57.317364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:58.358 qpair failed and we were unable to recover it. 
00:21:58.358 [2024-04-16 12:49:57.327139] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:58.358 [2024-04-16 12:49:57.327269] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:58.358 [2024-04-16 12:49:57.327295] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:58.358 [2024-04-16 12:49:57.327310] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:58.358 [2024-04-16 12:49:57.327323] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:21:58.358 [2024-04-16 12:49:57.327351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:58.358 qpair failed and we were unable to recover it. 00:21:58.358 [2024-04-16 12:49:57.337219] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:58.358 [2024-04-16 12:49:57.337334] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:58.358 [2024-04-16 12:49:57.337359] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:58.358 [2024-04-16 12:49:57.337373] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:58.358 [2024-04-16 12:49:57.337386] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:21:58.358 [2024-04-16 12:49:57.337414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:58.358 qpair failed and we were unable to recover it. 00:21:58.358 [2024-04-16 12:49:57.347205] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:58.358 [2024-04-16 12:49:57.347338] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:58.358 [2024-04-16 12:49:57.347364] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:58.358 [2024-04-16 12:49:57.347379] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:58.358 [2024-04-16 12:49:57.347392] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:21:58.358 [2024-04-16 12:49:57.347419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:58.358 qpair failed and we were unable to recover it. 
00:21:58.358 [2024-04-16 12:49:57.357233] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:58.358 [2024-04-16 12:49:57.357352] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:58.358 [2024-04-16 12:49:57.357379] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:58.358 [2024-04-16 12:49:57.357394] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:58.358 [2024-04-16 12:49:57.357406] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:21:58.358 [2024-04-16 12:49:57.357433] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:58.358 qpair failed and we were unable to recover it. 00:21:58.358 [2024-04-16 12:49:57.367262] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:58.358 [2024-04-16 12:49:57.367386] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:58.358 [2024-04-16 12:49:57.367412] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:58.358 [2024-04-16 12:49:57.367427] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:58.358 [2024-04-16 12:49:57.367440] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:21:58.358 [2024-04-16 12:49:57.367467] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:58.358 qpair failed and we were unable to recover it. 00:21:58.358 [2024-04-16 12:49:57.377291] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:58.358 [2024-04-16 12:49:57.377434] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:58.358 [2024-04-16 12:49:57.377460] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:58.358 [2024-04-16 12:49:57.377475] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:58.358 [2024-04-16 12:49:57.377487] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:21:58.358 [2024-04-16 12:49:57.377515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:58.358 qpair failed and we were unable to recover it. 
00:21:58.358 [2024-04-16 12:49:57.387363] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:58.358 [2024-04-16 12:49:57.387482] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:58.358 [2024-04-16 12:49:57.387508] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:58.358 [2024-04-16 12:49:57.387523] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:58.358 [2024-04-16 12:49:57.387555] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:21:58.358 [2024-04-16 12:49:57.387603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:58.358 qpair failed and we were unable to recover it. 00:21:58.358 [2024-04-16 12:49:57.397363] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:58.358 [2024-04-16 12:49:57.397484] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:58.358 [2024-04-16 12:49:57.397510] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:58.358 [2024-04-16 12:49:57.397525] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:58.358 [2024-04-16 12:49:57.397538] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:21:58.358 [2024-04-16 12:49:57.397589] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:58.358 qpair failed and we were unable to recover it. 00:21:58.358 [2024-04-16 12:49:57.407392] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:58.358 [2024-04-16 12:49:57.407523] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:58.358 [2024-04-16 12:49:57.407549] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:58.358 [2024-04-16 12:49:57.407589] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:58.358 [2024-04-16 12:49:57.407610] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:21:58.358 [2024-04-16 12:49:57.407642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:58.358 qpair failed and we were unable to recover it. 
00:21:58.358 [2024-04-16 12:49:57.417423] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:58.358 [2024-04-16 12:49:57.417547] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:58.358 [2024-04-16 12:49:57.417585] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:58.358 [2024-04-16 12:49:57.417605] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:58.358 [2024-04-16 12:49:57.417618] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:21:58.358 [2024-04-16 12:49:57.417647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:58.358 qpair failed and we were unable to recover it. 00:21:58.617 [2024-04-16 12:49:57.427468] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:58.617 [2024-04-16 12:49:57.427621] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:58.617 [2024-04-16 12:49:57.427648] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:58.617 [2024-04-16 12:49:57.427664] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:58.618 [2024-04-16 12:49:57.427683] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:21:58.618 [2024-04-16 12:49:57.427712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:58.618 qpair failed and we were unable to recover it. 00:21:58.618 [2024-04-16 12:49:57.437473] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:58.618 [2024-04-16 12:49:57.437628] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:58.618 [2024-04-16 12:49:57.437655] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:58.618 [2024-04-16 12:49:57.437671] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:58.618 [2024-04-16 12:49:57.437684] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:21:58.618 [2024-04-16 12:49:57.437721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:58.618 qpair failed and we were unable to recover it. 
00:21:58.618 [2024-04-16 12:49:57.447505] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:58.618 [2024-04-16 12:49:57.447649] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:58.618 [2024-04-16 12:49:57.447677] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:58.618 [2024-04-16 12:49:57.447693] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:58.618 [2024-04-16 12:49:57.447706] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:21:58.618 [2024-04-16 12:49:57.447734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:58.618 qpair failed and we were unable to recover it. 00:21:58.618 [2024-04-16 12:49:57.457568] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:58.618 [2024-04-16 12:49:57.457715] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:58.618 [2024-04-16 12:49:57.457743] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:58.618 [2024-04-16 12:49:57.457758] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:58.618 [2024-04-16 12:49:57.457771] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:21:58.618 [2024-04-16 12:49:57.457800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:58.618 qpair failed and we were unable to recover it. 00:21:58.618 [2024-04-16 12:49:57.467539] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:58.618 [2024-04-16 12:49:57.467687] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:58.618 [2024-04-16 12:49:57.467715] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:58.618 [2024-04-16 12:49:57.467730] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:58.618 [2024-04-16 12:49:57.467743] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:21:58.618 [2024-04-16 12:49:57.467772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:58.618 qpair failed and we were unable to recover it. 
00:21:58.618 [2024-04-16 12:49:57.477622] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:58.618 [2024-04-16 12:49:57.477747] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:58.618 [2024-04-16 12:49:57.477774] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:58.618 [2024-04-16 12:49:57.477795] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:58.618 [2024-04-16 12:49:57.477809] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:21:58.618 [2024-04-16 12:49:57.477838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:58.618 qpair failed and we were unable to recover it. 00:21:58.618 [2024-04-16 12:49:57.487766] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:58.618 [2024-04-16 12:49:57.487916] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:58.618 [2024-04-16 12:49:57.487942] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:58.618 [2024-04-16 12:49:57.487957] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:58.618 [2024-04-16 12:49:57.487976] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:21:58.618 [2024-04-16 12:49:57.488004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:58.618 qpair failed and we were unable to recover it. 00:21:58.618 [2024-04-16 12:49:57.497636] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:58.618 [2024-04-16 12:49:57.497757] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:58.618 [2024-04-16 12:49:57.497784] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:58.618 [2024-04-16 12:49:57.497799] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:58.618 [2024-04-16 12:49:57.497812] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:21:58.618 [2024-04-16 12:49:57.497841] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:58.618 qpair failed and we were unable to recover it. 
00:21:58.618 [2024-04-16 12:49:57.507758] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:58.618 [2024-04-16 12:49:57.507908] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:58.618 [2024-04-16 12:49:57.507934] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:58.618 [2024-04-16 12:49:57.507950] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:58.618 [2024-04-16 12:49:57.507962] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:21:58.618 [2024-04-16 12:49:57.508000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:58.618 qpair failed and we were unable to recover it. 00:21:58.618 [2024-04-16 12:49:57.517768] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:58.618 [2024-04-16 12:49:57.517908] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:58.618 [2024-04-16 12:49:57.517934] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:58.618 [2024-04-16 12:49:57.517949] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:58.618 [2024-04-16 12:49:57.517962] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:21:58.618 [2024-04-16 12:49:57.517998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:58.618 qpair failed and we were unable to recover it. 00:21:58.618 [2024-04-16 12:49:57.527727] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:58.618 [2024-04-16 12:49:57.527873] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:58.618 [2024-04-16 12:49:57.527899] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:58.618 [2024-04-16 12:49:57.527914] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:58.618 [2024-04-16 12:49:57.527927] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:21:58.618 [2024-04-16 12:49:57.527954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:58.618 qpair failed and we were unable to recover it. 
00:21:58.618 [2024-04-16 12:49:57.537780] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:58.618 [2024-04-16 12:49:57.537916] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:58.618 [2024-04-16 12:49:57.537942] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:58.618 [2024-04-16 12:49:57.537958] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:58.618 [2024-04-16 12:49:57.537970] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:21:58.618 [2024-04-16 12:49:57.537997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:58.618 qpair failed and we were unable to recover it. 00:21:58.618 [2024-04-16 12:49:57.547892] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:58.618 [2024-04-16 12:49:57.548036] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:58.618 [2024-04-16 12:49:57.548062] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:58.618 [2024-04-16 12:49:57.548077] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:58.618 [2024-04-16 12:49:57.548091] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:21:58.618 [2024-04-16 12:49:57.548118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:58.618 qpair failed and we were unable to recover it. 00:21:58.618 [2024-04-16 12:49:57.557837] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:58.618 [2024-04-16 12:49:57.558009] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:58.618 [2024-04-16 12:49:57.558034] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:58.618 [2024-04-16 12:49:57.558049] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:58.619 [2024-04-16 12:49:57.558061] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:21:58.619 [2024-04-16 12:49:57.558088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:58.619 qpair failed and we were unable to recover it. 
00:21:58.619 [2024-04-16 12:49:57.567919] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:58.619 [2024-04-16 12:49:57.568042] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:58.619 [2024-04-16 12:49:57.568067] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:58.619 [2024-04-16 12:49:57.568087] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:58.619 [2024-04-16 12:49:57.568101] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:21:58.619 [2024-04-16 12:49:57.568128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:58.619 qpair failed and we were unable to recover it. 00:21:58.619 [2024-04-16 12:49:57.577895] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:58.619 [2024-04-16 12:49:57.578017] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:58.619 [2024-04-16 12:49:57.578043] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:58.619 [2024-04-16 12:49:57.578059] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:58.619 [2024-04-16 12:49:57.578071] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:21:58.619 [2024-04-16 12:49:57.578100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:58.619 qpair failed and we were unable to recover it. 00:21:58.619 [2024-04-16 12:49:57.587917] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:58.619 [2024-04-16 12:49:57.588087] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:58.619 [2024-04-16 12:49:57.588113] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:58.619 [2024-04-16 12:49:57.588128] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:58.619 [2024-04-16 12:49:57.588150] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:21:58.619 [2024-04-16 12:49:57.588178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:58.619 qpair failed and we were unable to recover it. 
00:21:58.619 [2024-04-16 12:49:57.597943] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:58.619 [2024-04-16 12:49:57.598070] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:58.619 [2024-04-16 12:49:57.598096] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:58.619 [2024-04-16 12:49:57.598111] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:58.619 [2024-04-16 12:49:57.598124] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:58.619 [2024-04-16 12:49:57.598151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:58.619 qpair failed and we were unable to recover it.
00:21:58.619 [2024-04-16 12:49:57.607975] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:58.619 [2024-04-16 12:49:57.608106] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:58.619 [2024-04-16 12:49:57.608132] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:58.619 [2024-04-16 12:49:57.608147] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:58.619 [2024-04-16 12:49:57.608159] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:58.619 [2024-04-16 12:49:57.608187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:58.619 qpair failed and we were unable to recover it.
00:21:58.619 [2024-04-16 12:49:57.618022] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:58.619 [2024-04-16 12:49:57.618154] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:58.619 [2024-04-16 12:49:57.618180] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:58.619 [2024-04-16 12:49:57.618195] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:58.619 [2024-04-16 12:49:57.618207] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:58.619 [2024-04-16 12:49:57.618246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:58.619 qpair failed and we were unable to recover it.
00:21:58.619 [2024-04-16 12:49:57.628156] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:58.619 [2024-04-16 12:49:57.628297] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:58.619 [2024-04-16 12:49:57.628323] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:58.619 [2024-04-16 12:49:57.628337] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:58.619 [2024-04-16 12:49:57.628349] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:58.619 [2024-04-16 12:49:57.628376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:58.619 qpair failed and we were unable to recover it.
00:21:58.619 [2024-04-16 12:49:57.638082] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:58.619 [2024-04-16 12:49:57.638203] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:58.619 [2024-04-16 12:49:57.638227] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:58.619 [2024-04-16 12:49:57.638241] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:58.619 [2024-04-16 12:49:57.638254] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:58.619 [2024-04-16 12:49:57.638281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:58.619 qpair failed and we were unable to recover it.
00:21:58.619 [2024-04-16 12:49:57.648157] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:58.619 [2024-04-16 12:49:57.648302] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:58.619 [2024-04-16 12:49:57.648329] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:58.619 [2024-04-16 12:49:57.648345] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:58.619 [2024-04-16 12:49:57.648357] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:58.619 [2024-04-16 12:49:57.648392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:58.619 qpair failed and we were unable to recover it.
00:21:58.619 [2024-04-16 12:49:57.658212] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:58.619 [2024-04-16 12:49:57.658370] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:58.619 [2024-04-16 12:49:57.658397] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:58.619 [2024-04-16 12:49:57.658417] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:58.619 [2024-04-16 12:49:57.658434] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:58.619 [2024-04-16 12:49:57.658461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:58.619 qpair failed and we were unable to recover it.
00:21:58.619 [2024-04-16 12:49:57.668171] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:58.619 [2024-04-16 12:49:57.668301] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:58.619 [2024-04-16 12:49:57.668327] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:58.619 [2024-04-16 12:49:57.668341] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:58.619 [2024-04-16 12:49:57.668354] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:58.619 [2024-04-16 12:49:57.668392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:58.619 qpair failed and we were unable to recover it.
00:21:58.619 [2024-04-16 12:49:57.678183] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:58.619 [2024-04-16 12:49:57.678310] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:58.619 [2024-04-16 12:49:57.678336] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:58.619 [2024-04-16 12:49:57.678351] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:58.619 [2024-04-16 12:49:57.678364] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:58.619 [2024-04-16 12:49:57.678391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:58.619 qpair failed and we were unable to recover it.
00:21:58.877 [2024-04-16 12:49:57.688264] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:58.877 [2024-04-16 12:49:57.688418] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:58.877 [2024-04-16 12:49:57.688445] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:58.877 [2024-04-16 12:49:57.688460] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:58.877 [2024-04-16 12:49:57.688473] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:58.877 [2024-04-16 12:49:57.688512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:58.877 qpair failed and we were unable to recover it.
00:21:58.877 [2024-04-16 12:49:57.698223] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:58.877 [2024-04-16 12:49:57.698344] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:58.877 [2024-04-16 12:49:57.698370] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:58.877 [2024-04-16 12:49:57.698384] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:58.877 [2024-04-16 12:49:57.698397] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:58.877 [2024-04-16 12:49:57.698424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:58.877 qpair failed and we were unable to recover it.
00:21:58.877 [2024-04-16 12:49:57.708274] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:58.877 [2024-04-16 12:49:57.708407] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:58.877 [2024-04-16 12:49:57.708433] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:58.877 [2024-04-16 12:49:57.708448] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:58.877 [2024-04-16 12:49:57.708460] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:58.877 [2024-04-16 12:49:57.708496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:58.877 qpair failed and we were unable to recover it.
00:21:58.877 [2024-04-16 12:49:57.718283] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:58.877 [2024-04-16 12:49:57.718408] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:58.877 [2024-04-16 12:49:57.718434] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:58.877 [2024-04-16 12:49:57.718449] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:58.877 [2024-04-16 12:49:57.718462] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:58.877 [2024-04-16 12:49:57.718489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:58.877 qpair failed and we were unable to recover it.
00:21:58.877 [2024-04-16 12:49:57.728376] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:58.877 [2024-04-16 12:49:57.728502] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:58.877 [2024-04-16 12:49:57.728529] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:58.877 [2024-04-16 12:49:57.728559] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:58.877 [2024-04-16 12:49:57.728597] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:58.877 [2024-04-16 12:49:57.728627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:58.877 qpair failed and we were unable to recover it.
00:21:58.877 [2024-04-16 12:49:57.738394] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:58.877 [2024-04-16 12:49:57.738512] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:58.877 [2024-04-16 12:49:57.738538] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:58.877 [2024-04-16 12:49:57.738579] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:58.877 [2024-04-16 12:49:57.738593] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:58.877 [2024-04-16 12:49:57.738633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:58.877 qpair failed and we were unable to recover it.
00:21:58.877 [2024-04-16 12:49:57.748392] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:58.877 [2024-04-16 12:49:57.748522] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:58.877 [2024-04-16 12:49:57.748577] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:58.877 [2024-04-16 12:49:57.748596] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:58.877 [2024-04-16 12:49:57.748609] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:58.877 [2024-04-16 12:49:57.748641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:58.877 qpair failed and we were unable to recover it.
00:21:58.877 [2024-04-16 12:49:57.758456] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:58.877 [2024-04-16 12:49:57.758591] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:58.877 [2024-04-16 12:49:57.758618] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:58.877 [2024-04-16 12:49:57.758634] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:58.877 [2024-04-16 12:49:57.758646] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:58.877 [2024-04-16 12:49:57.758675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:58.877 qpair failed and we were unable to recover it.
00:21:58.877 [2024-04-16 12:49:57.768483] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:58.877 [2024-04-16 12:49:57.768634] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:58.877 [2024-04-16 12:49:57.768661] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:58.877 [2024-04-16 12:49:57.768677] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:58.877 [2024-04-16 12:49:57.768690] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:58.877 [2024-04-16 12:49:57.768724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:58.877 qpair failed and we were unable to recover it.
00:21:58.877 [2024-04-16 12:49:57.778455] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:58.877 [2024-04-16 12:49:57.778592] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:58.877 [2024-04-16 12:49:57.778620] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:58.877 [2024-04-16 12:49:57.778636] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:58.877 [2024-04-16 12:49:57.778649] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:58.877 [2024-04-16 12:49:57.778690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:58.877 qpair failed and we were unable to recover it.
00:21:58.877 [2024-04-16 12:49:57.788506] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:58.877 [2024-04-16 12:49:57.788664] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:58.877 [2024-04-16 12:49:57.788692] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:58.877 [2024-04-16 12:49:57.788713] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:58.877 [2024-04-16 12:49:57.788726] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:58.877 [2024-04-16 12:49:57.788760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:58.877 qpair failed and we were unable to recover it.
00:21:58.877 [2024-04-16 12:49:57.798485] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:58.877 [2024-04-16 12:49:57.798628] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:58.877 [2024-04-16 12:49:57.798655] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:58.877 [2024-04-16 12:49:57.798671] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:58.877 [2024-04-16 12:49:57.798684] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:58.877 [2024-04-16 12:49:57.798713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:58.877 qpair failed and we were unable to recover it.
00:21:58.877 [2024-04-16 12:49:57.808588] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:58.877 [2024-04-16 12:49:57.808729] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:58.877 [2024-04-16 12:49:57.808756] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:58.877 [2024-04-16 12:49:57.808772] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:58.877 [2024-04-16 12:49:57.808786] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:58.877 [2024-04-16 12:49:57.808815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:58.877 qpair failed and we were unable to recover it.
00:21:58.877 [2024-04-16 12:49:57.818600] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:58.877 [2024-04-16 12:49:57.818725] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:58.877 [2024-04-16 12:49:57.818752] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:58.877 [2024-04-16 12:49:57.818768] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:58.877 [2024-04-16 12:49:57.818780] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:58.877 [2024-04-16 12:49:57.818809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:58.877 qpair failed and we were unable to recover it.
00:21:58.877 [2024-04-16 12:49:57.828667] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:58.877 [2024-04-16 12:49:57.828794] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:58.877 [2024-04-16 12:49:57.828821] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:58.877 [2024-04-16 12:49:57.828851] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:58.877 [2024-04-16 12:49:57.828876] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:58.877 [2024-04-16 12:49:57.828904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:58.877 qpair failed and we were unable to recover it.
00:21:58.877 [2024-04-16 12:49:57.838645] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:58.877 [2024-04-16 12:49:57.838771] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:58.877 [2024-04-16 12:49:57.838804] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:58.877 [2024-04-16 12:49:57.838821] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:58.877 [2024-04-16 12:49:57.838834] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:58.877 [2024-04-16 12:49:57.838888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:58.877 qpair failed and we were unable to recover it.
00:21:58.878 [2024-04-16 12:49:57.848688] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:58.878 [2024-04-16 12:49:57.848812] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:58.878 [2024-04-16 12:49:57.848839] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:58.878 [2024-04-16 12:49:57.848869] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:58.878 [2024-04-16 12:49:57.848881] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:58.878 [2024-04-16 12:49:57.848910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:58.878 qpair failed and we were unable to recover it.
00:21:58.878 [2024-04-16 12:49:57.858714] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:58.878 [2024-04-16 12:49:57.858865] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:58.878 [2024-04-16 12:49:57.858892] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:58.878 [2024-04-16 12:49:57.858907] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:58.878 [2024-04-16 12:49:57.858919] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:58.878 [2024-04-16 12:49:57.858955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:58.878 qpair failed and we were unable to recover it.
00:21:58.878 [2024-04-16 12:49:57.868751] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:58.878 [2024-04-16 12:49:57.868892] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:58.878 [2024-04-16 12:49:57.868917] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:58.878 [2024-04-16 12:49:57.868932] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:58.878 [2024-04-16 12:49:57.868944] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:58.878 [2024-04-16 12:49:57.868972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:58.878 qpair failed and we were unable to recover it.
00:21:58.878 [2024-04-16 12:49:57.878754] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:58.878 [2024-04-16 12:49:57.878938] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:58.878 [2024-04-16 12:49:57.878965] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:58.878 [2024-04-16 12:49:57.878986] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:58.878 [2024-04-16 12:49:57.878999] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:58.878 [2024-04-16 12:49:57.879034] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:58.878 qpair failed and we were unable to recover it.
00:21:58.878 [2024-04-16 12:49:57.888760] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:58.878 [2024-04-16 12:49:57.888882] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:58.878 [2024-04-16 12:49:57.888909] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:58.878 [2024-04-16 12:49:57.888924] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:58.878 [2024-04-16 12:49:57.888937] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:58.878 [2024-04-16 12:49:57.888965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:58.878 qpair failed and we were unable to recover it.
00:21:58.878 [2024-04-16 12:49:57.898792] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:58.878 [2024-04-16 12:49:57.898920] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:58.878 [2024-04-16 12:49:57.898946] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:58.878 [2024-04-16 12:49:57.898961] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:58.878 [2024-04-16 12:49:57.898973] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:58.878 [2024-04-16 12:49:57.899001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:58.878 qpair failed and we were unable to recover it.
00:21:58.878 [2024-04-16 12:49:57.908824] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:58.878 [2024-04-16 12:49:57.908985] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:58.878 [2024-04-16 12:49:57.909011] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:58.878 [2024-04-16 12:49:57.909026] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:58.878 [2024-04-16 12:49:57.909039] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:58.878 [2024-04-16 12:49:57.909077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:58.878 qpair failed and we were unable to recover it.
00:21:58.878 [2024-04-16 12:49:57.918852] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:58.878 [2024-04-16 12:49:57.919000] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:58.878 [2024-04-16 12:49:57.919027] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:58.878 [2024-04-16 12:49:57.919042] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:58.878 [2024-04-16 12:49:57.919054] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:58.878 [2024-04-16 12:49:57.919092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:58.878 qpair failed and we were unable to recover it.
00:21:58.878 [2024-04-16 12:49:57.928929] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:58.878 [2024-04-16 12:49:57.929062] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:58.878 [2024-04-16 12:49:57.929094] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:58.878 [2024-04-16 12:49:57.929110] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:58.878 [2024-04-16 12:49:57.929122] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:58.878 [2024-04-16 12:49:57.929150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:58.878 qpair failed and we were unable to recover it.
00:21:58.878 [2024-04-16 12:49:57.938922] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:58.878 [2024-04-16 12:49:57.939041] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:58.878 [2024-04-16 12:49:57.939067] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:58.878 [2024-04-16 12:49:57.939082] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:58.878 [2024-04-16 12:49:57.939095] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:58.878 [2024-04-16 12:49:57.939122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:58.878 qpair failed and we were unable to recover it.
00:21:59.135 [2024-04-16 12:49:57.948934] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:59.135 [2024-04-16 12:49:57.949076] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:59.135 [2024-04-16 12:49:57.949103] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:59.135 [2024-04-16 12:49:57.949117] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:59.135 [2024-04-16 12:49:57.949130] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:59.135 [2024-04-16 12:49:57.949166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:59.135 qpair failed and we were unable to recover it.
00:21:59.135 [2024-04-16 12:49:57.959067] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:59.136 [2024-04-16 12:49:57.959209] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:59.136 [2024-04-16 12:49:57.959233] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:59.136 [2024-04-16 12:49:57.959247] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:59.136 [2024-04-16 12:49:57.959259] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:59.136 [2024-04-16 12:49:57.959286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:59.136 qpair failed and we were unable to recover it.
00:21:59.136 [2024-04-16 12:49:57.969015] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:59.136 [2024-04-16 12:49:57.969142] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:59.136 [2024-04-16 12:49:57.969168] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:59.136 [2024-04-16 12:49:57.969183] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:59.136 [2024-04-16 12:49:57.969195] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:59.136 [2024-04-16 12:49:57.969227] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:59.136 qpair failed and we were unable to recover it.
00:21:59.136 [2024-04-16 12:49:57.979123] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:59.136 [2024-04-16 12:49:57.979239] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:59.136 [2024-04-16 12:49:57.979265] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:59.136 [2024-04-16 12:49:57.979280] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:59.136 [2024-04-16 12:49:57.979292] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:59.136 [2024-04-16 12:49:57.979319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:59.136 qpair failed and we were unable to recover it.
00:21:59.136 [2024-04-16 12:49:57.989041] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:59.136 [2024-04-16 12:49:57.989224] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:59.136 [2024-04-16 12:49:57.989251] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:59.136 [2024-04-16 12:49:57.989266] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:59.136 [2024-04-16 12:49:57.989278] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:59.136 [2024-04-16 12:49:57.989306] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:59.136 qpair failed and we were unable to recover it.
00:21:59.136 [2024-04-16 12:49:57.999069] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:59.136 [2024-04-16 12:49:57.999224] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:59.136 [2024-04-16 12:49:57.999249] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:59.136 [2024-04-16 12:49:57.999264] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:59.136 [2024-04-16 12:49:57.999276] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:59.136 [2024-04-16 12:49:57.999304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:59.136 qpair failed and we were unable to recover it.
00:21:59.136 [2024-04-16 12:49:58.009166] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:59.136 [2024-04-16 12:49:58.009308] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:59.136 [2024-04-16 12:49:58.009334] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:59.136 [2024-04-16 12:49:58.009349] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:59.136 [2024-04-16 12:49:58.009362] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:59.136 [2024-04-16 12:49:58.009390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:59.136 qpair failed and we were unable to recover it.
00:21:59.136 [2024-04-16 12:49:58.019176] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:59.136 [2024-04-16 12:49:58.019296] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:59.136 [2024-04-16 12:49:58.019327] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:59.136 [2024-04-16 12:49:58.019343] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:59.136 [2024-04-16 12:49:58.019355] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:59.136 [2024-04-16 12:49:58.019382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:59.136 qpair failed and we were unable to recover it.
00:21:59.136 [2024-04-16 12:49:58.029194] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:59.136 [2024-04-16 12:49:58.029384] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:59.136 [2024-04-16 12:49:58.029409] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:59.136 [2024-04-16 12:49:58.029424] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:59.136 [2024-04-16 12:49:58.029438] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:59.136 [2024-04-16 12:49:58.029466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:59.136 qpair failed and we were unable to recover it.
00:21:59.136 [2024-04-16 12:49:58.039185] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:59.136 [2024-04-16 12:49:58.039306] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:59.136 [2024-04-16 12:49:58.039330] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:59.136 [2024-04-16 12:49:58.039345] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:59.136 [2024-04-16 12:49:58.039357] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:59.136 [2024-04-16 12:49:58.039384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:59.136 qpair failed and we were unable to recover it.
00:21:59.136 [2024-04-16 12:49:58.049299] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:59.136 [2024-04-16 12:49:58.049422] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:59.136 [2024-04-16 12:49:58.049447] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:59.136 [2024-04-16 12:49:58.049462] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:59.136 [2024-04-16 12:49:58.049475] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:59.136 [2024-04-16 12:49:58.049505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:59.136 qpair failed and we were unable to recover it.
00:21:59.136 [2024-04-16 12:49:58.059248] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:59.136 [2024-04-16 12:49:58.059433] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:59.136 [2024-04-16 12:49:58.059458] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:59.136 [2024-04-16 12:49:58.059472] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:59.136 [2024-04-16 12:49:58.059491] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:59.136 [2024-04-16 12:49:58.059520] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:59.136 qpair failed and we were unable to recover it.
00:21:59.136 [2024-04-16 12:49:58.069308] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:59.136 [2024-04-16 12:49:58.069463] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:59.136 [2024-04-16 12:49:58.069502] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:59.136 [2024-04-16 12:49:58.069518] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:59.136 [2024-04-16 12:49:58.069531] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:59.136 [2024-04-16 12:49:58.069583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:59.136 qpair failed and we were unable to recover it.
00:21:59.136 [2024-04-16 12:49:58.079313] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:59.136 [2024-04-16 12:49:58.079454] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:59.136 [2024-04-16 12:49:58.079480] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:59.137 [2024-04-16 12:49:58.079495] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:59.137 [2024-04-16 12:49:58.079512] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:59.137 [2024-04-16 12:49:58.079540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:59.137 qpair failed and we were unable to recover it.
00:21:59.137 [2024-04-16 12:49:58.089350] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:59.137 [2024-04-16 12:49:58.089470] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:59.137 [2024-04-16 12:49:58.089495] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:59.137 [2024-04-16 12:49:58.089510] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:59.137 [2024-04-16 12:49:58.089523] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:59.137 [2024-04-16 12:49:58.089575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:59.137 qpair failed and we were unable to recover it.
00:21:59.137 [2024-04-16 12:49:58.099333] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:59.137 [2024-04-16 12:49:58.099457] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:59.137 [2024-04-16 12:49:58.099481] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:59.137 [2024-04-16 12:49:58.099496] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:59.137 [2024-04-16 12:49:58.099509] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:59.137 [2024-04-16 12:49:58.099537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:59.137 qpair failed and we were unable to recover it.
00:21:59.137 [2024-04-16 12:49:58.109432] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:59.137 [2024-04-16 12:49:58.109580] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:59.137 [2024-04-16 12:49:58.109606] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:59.137 [2024-04-16 12:49:58.109621] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:59.137 [2024-04-16 12:49:58.109633] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:59.137 [2024-04-16 12:49:58.109662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:59.137 qpair failed and we were unable to recover it.
00:21:59.137 [2024-04-16 12:49:58.119412] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:59.137 [2024-04-16 12:49:58.119550] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:59.137 [2024-04-16 12:49:58.119610] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:59.137 [2024-04-16 12:49:58.119627] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:59.137 [2024-04-16 12:49:58.119640] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:59.137 [2024-04-16 12:49:58.119669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:59.137 qpair failed and we were unable to recover it.
00:21:59.137 [2024-04-16 12:49:58.129419] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:59.137 [2024-04-16 12:49:58.129539] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:59.137 [2024-04-16 12:49:58.129592] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:59.137 [2024-04-16 12:49:58.129608] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:59.137 [2024-04-16 12:49:58.129621] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:59.137 [2024-04-16 12:49:58.129650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:59.137 qpair failed and we were unable to recover it.
00:21:59.137 [2024-04-16 12:49:58.139476] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:59.137 [2024-04-16 12:49:58.139622] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:59.137 [2024-04-16 12:49:58.139649] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:59.137 [2024-04-16 12:49:58.139674] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:59.137 [2024-04-16 12:49:58.139687] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:59.137 [2024-04-16 12:49:58.139716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:59.137 qpair failed and we were unable to recover it.
00:21:59.137 [2024-04-16 12:49:58.149511] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:59.137 [2024-04-16 12:49:58.149682] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:59.137 [2024-04-16 12:49:58.149708] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:59.137 [2024-04-16 12:49:58.149723] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:59.137 [2024-04-16 12:49:58.149741] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:59.137 [2024-04-16 12:49:58.149771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:59.137 qpair failed and we were unable to recover it.
00:21:59.137 [2024-04-16 12:49:58.159525] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:59.137 [2024-04-16 12:49:58.159701] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:59.137 [2024-04-16 12:49:58.159728] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:59.137 [2024-04-16 12:49:58.159744] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:59.137 [2024-04-16 12:49:58.159757] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:59.137 [2024-04-16 12:49:58.159786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:59.137 qpair failed and we were unable to recover it.
00:21:59.137 [2024-04-16 12:49:58.169529] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:59.137 [2024-04-16 12:49:58.169662] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:59.137 [2024-04-16 12:49:58.169688] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:59.137 [2024-04-16 12:49:58.169703] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:59.137 [2024-04-16 12:49:58.169715] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:59.137 [2024-04-16 12:49:58.169743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:59.137 qpair failed and we were unable to recover it.
00:21:59.137 [2024-04-16 12:49:58.179557] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:59.137 [2024-04-16 12:49:58.179750] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:59.137 [2024-04-16 12:49:58.179775] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:59.137 [2024-04-16 12:49:58.179790] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:59.137 [2024-04-16 12:49:58.179802] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:59.137 [2024-04-16 12:49:58.179830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:59.137 qpair failed and we were unable to recover it.
00:21:59.137 [2024-04-16 12:49:58.189648] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:59.137 [2024-04-16 12:49:58.189775] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:59.137 [2024-04-16 12:49:58.189802] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:59.137 [2024-04-16 12:49:58.189818] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:59.137 [2024-04-16 12:49:58.189830] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:59.137 [2024-04-16 12:49:58.189882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:59.137 qpair failed and we were unable to recover it.
00:21:59.137 [2024-04-16 12:49:58.199670] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:59.137 [2024-04-16 12:49:58.199829] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:59.137 [2024-04-16 12:49:58.199857] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:59.137 [2024-04-16 12:49:58.199873] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:59.137 [2024-04-16 12:49:58.199901] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:59.137 [2024-04-16 12:49:58.199931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:59.137 qpair failed and we were unable to recover it.
00:21:59.396 [2024-04-16 12:49:58.209708] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:59.396 [2024-04-16 12:49:58.209837] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:59.396 [2024-04-16 12:49:58.209863] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:59.396 [2024-04-16 12:49:58.209893] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:59.396 [2024-04-16 12:49:58.209906] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:59.396 [2024-04-16 12:49:58.209934] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:59.396 qpair failed and we were unable to recover it.
00:21:59.396 [2024-04-16 12:49:58.219755] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:59.396 [2024-04-16 12:49:58.219893] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:59.396 [2024-04-16 12:49:58.219919] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:59.396 [2024-04-16 12:49:58.219935] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:59.396 [2024-04-16 12:49:58.219952] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:59.396 [2024-04-16 12:49:58.219979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:59.396 qpair failed and we were unable to recover it.
00:21:59.396 [2024-04-16 12:49:58.229786] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:59.396 [2024-04-16 12:49:58.229924] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:59.396 [2024-04-16 12:49:58.229950] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:59.396 [2024-04-16 12:49:58.229964] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:59.396 [2024-04-16 12:49:58.229980] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:59.396 [2024-04-16 12:49:58.230008] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:59.396 qpair failed and we were unable to recover it.
00:21:59.396 [2024-04-16 12:49:58.239791] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:59.396 [2024-04-16 12:49:58.239934] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:59.396 [2024-04-16 12:49:58.239960] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:59.396 [2024-04-16 12:49:58.239981] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:59.396 [2024-04-16 12:49:58.239994] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:59.396 [2024-04-16 12:49:58.240022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:59.396 qpair failed and we were unable to recover it.
00:21:59.396 [2024-04-16 12:49:58.249776] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:59.396 [2024-04-16 12:49:58.249903] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:59.396 [2024-04-16 12:49:58.249930] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:59.396 [2024-04-16 12:49:58.249945] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:59.396 [2024-04-16 12:49:58.249957] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:59.396 [2024-04-16 12:49:58.249985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:59.396 qpair failed and we were unable to recover it.
00:21:59.396 [2024-04-16 12:49:58.259905] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:59.396 [2024-04-16 12:49:58.260035] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:59.396 [2024-04-16 12:49:58.260060] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:59.396 [2024-04-16 12:49:58.260075] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:59.396 [2024-04-16 12:49:58.260087] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:59.396 [2024-04-16 12:49:58.260115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:59.396 qpair failed and we were unable to recover it.
00:21:59.396 [2024-04-16 12:49:58.269867] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:59.396 [2024-04-16 12:49:58.269993] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:59.396 [2024-04-16 12:49:58.270017] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:59.396 [2024-04-16 12:49:58.270032] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:59.396 [2024-04-16 12:49:58.270045] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:59.396 [2024-04-16 12:49:58.270073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:59.396 qpair failed and we were unable to recover it.
00:21:59.396 [2024-04-16 12:49:58.279936] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:59.396 [2024-04-16 12:49:58.280062] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:59.396 [2024-04-16 12:49:58.280087] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:59.396 [2024-04-16 12:49:58.280101] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:59.396 [2024-04-16 12:49:58.280113] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:59.396 [2024-04-16 12:49:58.280141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:59.396 qpair failed and we were unable to recover it.
00:21:59.396 [2024-04-16 12:49:58.289894] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:59.396 [2024-04-16 12:49:58.290018] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:59.396 [2024-04-16 12:49:58.290042] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:59.396 [2024-04-16 12:49:58.290057] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:59.396 [2024-04-16 12:49:58.290070] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:59.396 [2024-04-16 12:49:58.290097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:59.396 qpair failed and we were unable to recover it.
00:21:59.396 [2024-04-16 12:49:58.300004] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:59.396 [2024-04-16 12:49:58.300136] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:59.396 [2024-04-16 12:49:58.300160] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:59.396 [2024-04-16 12:49:58.300175] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:59.396 [2024-04-16 12:49:58.300187] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:59.396 [2024-04-16 12:49:58.300215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:59.396 qpair failed and we were unable to recover it.
00:21:59.396 [2024-04-16 12:49:58.309997] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:59.397 [2024-04-16 12:49:58.310117] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:59.397 [2024-04-16 12:49:58.310143] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:59.397 [2024-04-16 12:49:58.310158] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:59.397 [2024-04-16 12:49:58.310171] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:59.397 [2024-04-16 12:49:58.310208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:59.397 qpair failed and we were unable to recover it.
00:21:59.397 [2024-04-16 12:49:58.319997] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:59.397 [2024-04-16 12:49:58.320166] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:59.397 [2024-04-16 12:49:58.320193] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:59.397 [2024-04-16 12:49:58.320208] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:59.397 [2024-04-16 12:49:58.320221] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:59.397 [2024-04-16 12:49:58.320249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:59.397 qpair failed and we were unable to recover it.
00:21:59.397 [2024-04-16 12:49:58.330015] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:59.397 [2024-04-16 12:49:58.330133] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:59.397 [2024-04-16 12:49:58.330157] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:59.397 [2024-04-16 12:49:58.330177] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:59.397 [2024-04-16 12:49:58.330190] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:59.397 [2024-04-16 12:49:58.330218] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:59.397 qpair failed and we were unable to recover it.
00:21:59.397 [2024-04-16 12:49:58.340048] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:59.397 [2024-04-16 12:49:58.340161] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:59.397 [2024-04-16 12:49:58.340185] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:59.397 [2024-04-16 12:49:58.340200] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:59.397 [2024-04-16 12:49:58.340213] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:59.397 [2024-04-16 12:49:58.340240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:59.397 qpair failed and we were unable to recover it.
00:21:59.397 [2024-04-16 12:49:58.350111] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:59.397 [2024-04-16 12:49:58.350248] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:59.397 [2024-04-16 12:49:58.350272] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:59.397 [2024-04-16 12:49:58.350288] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:59.397 [2024-04-16 12:49:58.350300] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:59.397 [2024-04-16 12:49:58.350328] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:59.397 qpair failed and we were unable to recover it.
00:21:59.397 [2024-04-16 12:49:58.360074] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:59.397 [2024-04-16 12:49:58.360207] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:59.397 [2024-04-16 12:49:58.360231] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:59.397 [2024-04-16 12:49:58.360247] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:59.397 [2024-04-16 12:49:58.360259] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:59.397 [2024-04-16 12:49:58.360286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:59.397 qpair failed and we were unable to recover it.
00:21:59.397 [2024-04-16 12:49:58.370174] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:59.397 [2024-04-16 12:49:58.370303] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:59.397 [2024-04-16 12:49:58.370328] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:59.397 [2024-04-16 12:49:58.370343] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:59.397 [2024-04-16 12:49:58.370356] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:59.397 [2024-04-16 12:49:58.370390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:59.397 qpair failed and we were unable to recover it.
00:21:59.397 [2024-04-16 12:49:58.380187] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:59.397 [2024-04-16 12:49:58.380334] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:59.397 [2024-04-16 12:49:58.380359] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:59.397 [2024-04-16 12:49:58.380373] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:59.397 [2024-04-16 12:49:58.380386] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:59.397 [2024-04-16 12:49:58.380413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:59.397 qpair failed and we were unable to recover it.
00:21:59.397 [2024-04-16 12:49:58.390295] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:59.397 [2024-04-16 12:49:58.390469] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:59.397 [2024-04-16 12:49:58.390508] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:59.397 [2024-04-16 12:49:58.390523] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:59.397 [2024-04-16 12:49:58.390536] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:59.397 [2024-04-16 12:49:58.390573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:59.397 qpair failed and we were unable to recover it.
00:21:59.397 [2024-04-16 12:49:58.400174] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:59.397 [2024-04-16 12:49:58.400294] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:59.397 [2024-04-16 12:49:58.400319] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:59.397 [2024-04-16 12:49:58.400334] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:59.397 [2024-04-16 12:49:58.400346] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:59.397 [2024-04-16 12:49:58.400375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:59.397 qpair failed and we were unable to recover it.
00:21:59.397 [2024-04-16 12:49:58.410280] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:59.397 [2024-04-16 12:49:58.410414] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:59.397 [2024-04-16 12:49:58.410438] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:59.397 [2024-04-16 12:49:58.410452] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:59.397 [2024-04-16 12:49:58.410464] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:59.397 [2024-04-16 12:49:58.410492] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:59.397 qpair failed and we were unable to recover it.
00:21:59.397 [2024-04-16 12:49:58.420277] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:59.397 [2024-04-16 12:49:58.420403] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:59.397 [2024-04-16 12:49:58.420427] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:59.397 [2024-04-16 12:49:58.420448] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:59.397 [2024-04-16 12:49:58.420461] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:59.397 [2024-04-16 12:49:58.420488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:59.397 qpair failed and we were unable to recover it.
00:21:59.397 [2024-04-16 12:49:58.430369] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:59.397 [2024-04-16 12:49:58.430503] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:59.397 [2024-04-16 12:49:58.430527] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:59.397 [2024-04-16 12:49:58.430542] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:59.397 [2024-04-16 12:49:58.430587] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:59.397 [2024-04-16 12:49:58.430617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:59.397 qpair failed and we were unable to recover it.
00:21:59.397 [2024-04-16 12:49:58.440341] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:59.397 [2024-04-16 12:49:58.440479] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:59.397 [2024-04-16 12:49:58.440503] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:59.397 [2024-04-16 12:49:58.440517] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:59.397 [2024-04-16 12:49:58.440529] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:59.397 [2024-04-16 12:49:58.440581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:59.397 qpair failed and we were unable to recover it.
00:21:59.397 [2024-04-16 12:49:58.450367] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:59.397 [2024-04-16 12:49:58.450502] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:59.397 [2024-04-16 12:49:58.450526] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:59.397 [2024-04-16 12:49:58.450540] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:59.397 [2024-04-16 12:49:58.450553] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:59.397 [2024-04-16 12:49:58.450606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:59.397 qpair failed and we were unable to recover it.
00:21:59.397 [2024-04-16 12:49:58.460445] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:59.397 [2024-04-16 12:49:58.460671] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:59.397 [2024-04-16 12:49:58.460699] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:59.397 [2024-04-16 12:49:58.460715] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:59.397 [2024-04-16 12:49:58.460728] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:59.397 [2024-04-16 12:49:58.460758] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:59.397 qpair failed and we were unable to recover it.
00:21:59.656 [2024-04-16 12:49:58.470428] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:59.656 [2024-04-16 12:49:58.470617] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:59.656 [2024-04-16 12:49:58.470643] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:59.656 [2024-04-16 12:49:58.470658] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:59.656 [2024-04-16 12:49:58.470671] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:59.656 [2024-04-16 12:49:58.470701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:59.656 qpair failed and we were unable to recover it.
00:21:59.656 [2024-04-16 12:49:58.480462] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:59.656 [2024-04-16 12:49:58.480694] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:59.656 [2024-04-16 12:49:58.480720] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:59.656 [2024-04-16 12:49:58.480735] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:59.656 [2024-04-16 12:49:58.480749] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:59.656 [2024-04-16 12:49:58.480778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:59.656 qpair failed and we were unable to recover it.
00:21:59.656 [2024-04-16 12:49:58.490497] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:59.656 [2024-04-16 12:49:58.490628] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:59.656 [2024-04-16 12:49:58.490653] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:59.656 [2024-04-16 12:49:58.490669] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:59.656 [2024-04-16 12:49:58.490682] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:59.656 [2024-04-16 12:49:58.490710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:59.656 qpair failed and we were unable to recover it.
00:21:59.656 [2024-04-16 12:49:58.500496] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:59.656 [2024-04-16 12:49:58.500668] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:59.656 [2024-04-16 12:49:58.500694] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:59.656 [2024-04-16 12:49:58.500709] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:59.656 [2024-04-16 12:49:58.500721] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:59.656 [2024-04-16 12:49:58.500750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:59.656 qpair failed and we were unable to recover it.
00:21:59.656 [2024-04-16 12:49:58.510540] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:59.656 [2024-04-16 12:49:58.510700] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:59.656 [2024-04-16 12:49:58.510733] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:59.656 [2024-04-16 12:49:58.510750] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:59.656 [2024-04-16 12:49:58.510763] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:59.656 [2024-04-16 12:49:58.510792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:59.656 qpair failed and we were unable to recover it.
00:21:59.656 [2024-04-16 12:49:58.520518] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:59.656 [2024-04-16 12:49:58.520679] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:59.656 [2024-04-16 12:49:58.520706] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:59.656 [2024-04-16 12:49:58.520722] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:59.656 [2024-04-16 12:49:58.520735] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:59.656 [2024-04-16 12:49:58.520763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:59.656 qpair failed and we were unable to recover it.
00:21:59.656 [2024-04-16 12:49:58.530591] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:59.656 [2024-04-16 12:49:58.530729] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:59.656 [2024-04-16 12:49:58.530755] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:59.656 [2024-04-16 12:49:58.530770] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:59.656 [2024-04-16 12:49:58.530783] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:59.656 [2024-04-16 12:49:58.530812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:59.656 qpair failed and we were unable to recover it.
00:21:59.656 [2024-04-16 12:49:58.540603] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:59.656 [2024-04-16 12:49:58.540753] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:59.656 [2024-04-16 12:49:58.540780] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:59.656 [2024-04-16 12:49:58.540795] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:59.656 [2024-04-16 12:49:58.540809] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:59.656 [2024-04-16 12:49:58.540837] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:59.656 qpair failed and we were unable to recover it.
00:21:59.656 [2024-04-16 12:49:58.550640] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:59.656 [2024-04-16 12:49:58.550771] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:59.656 [2024-04-16 12:49:58.550797] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:59.656 [2024-04-16 12:49:58.550813] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:59.656 [2024-04-16 12:49:58.550826] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:59.656 [2024-04-16 12:49:58.550874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:59.656 qpair failed and we were unable to recover it.
00:21:59.656 [2024-04-16 12:49:58.560636] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:59.656 [2024-04-16 12:49:58.560759] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:59.656 [2024-04-16 12:49:58.560786] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:59.656 [2024-04-16 12:49:58.560802] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:59.656 [2024-04-16 12:49:58.560814] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:59.656 [2024-04-16 12:49:58.560842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:59.656 qpair failed and we were unable to recover it.
00:21:59.656 [2024-04-16 12:49:58.570676] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:59.656 [2024-04-16 12:49:58.570806] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:59.656 [2024-04-16 12:49:58.570833] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:59.656 [2024-04-16 12:49:58.570848] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:59.656 [2024-04-16 12:49:58.570860] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:59.656 [2024-04-16 12:49:58.570904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:59.656 qpair failed and we were unable to recover it.
00:21:59.656 [2024-04-16 12:49:58.580742] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:59.656 [2024-04-16 12:49:58.580970] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:59.656 [2024-04-16 12:49:58.580996] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:59.656 [2024-04-16 12:49:58.581012] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:59.656 [2024-04-16 12:49:58.581025] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:59.656 [2024-04-16 12:49:58.581054] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:59.656 qpair failed and we were unable to recover it.
00:21:59.656 [2024-04-16 12:49:58.590754] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:59.656 [2024-04-16 12:49:58.590880] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:59.656 [2024-04-16 12:49:58.590907] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:59.656 [2024-04-16 12:49:58.590922] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:59.656 [2024-04-16 12:49:58.590935] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:59.656 [2024-04-16 12:49:58.590978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:59.656 qpair failed and we were unable to recover it.
00:21:59.656 [2024-04-16 12:49:58.600776] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:59.656 [2024-04-16 12:49:58.600956] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:59.656 [2024-04-16 12:49:58.600987] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:59.656 [2024-04-16 12:49:58.601002] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:59.656 [2024-04-16 12:49:58.601015] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:59.656 [2024-04-16 12:49:58.601043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:59.656 qpair failed and we were unable to recover it.
00:21:59.656 [2024-04-16 12:49:58.610831] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:59.656 [2024-04-16 12:49:58.610963] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:59.656 [2024-04-16 12:49:58.610989] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:59.656 [2024-04-16 12:49:58.611004] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:59.656 [2024-04-16 12:49:58.611017] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:59.656 [2024-04-16 12:49:58.611044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:59.656 qpair failed and we were unable to recover it.
00:21:59.656 [2024-04-16 12:49:58.620967] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:59.656 [2024-04-16 12:49:58.621087] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:59.656 [2024-04-16 12:49:58.621113] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:59.656 [2024-04-16 12:49:58.621128] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:59.656 [2024-04-16 12:49:58.621140] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:59.656 [2024-04-16 12:49:58.621167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:59.656 qpair failed and we were unable to recover it.
00:21:59.656 [2024-04-16 12:49:58.630913] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:59.656 [2024-04-16 12:49:58.631034] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:59.656 [2024-04-16 12:49:58.631060] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:59.656 [2024-04-16 12:49:58.631075] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:59.656 [2024-04-16 12:49:58.631088] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:59.656 [2024-04-16 12:49:58.631115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:59.656 qpair failed and we were unable to recover it.
00:21:59.656 [2024-04-16 12:49:58.640890] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:59.656 [2024-04-16 12:49:58.641030] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:59.656 [2024-04-16 12:49:58.641054] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:59.656 [2024-04-16 12:49:58.641068] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:59.656 [2024-04-16 12:49:58.641089] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:59.656 [2024-04-16 12:49:58.641126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:59.656 qpair failed and we were unable to recover it.
00:21:59.656 [2024-04-16 12:49:58.650938] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:59.656 [2024-04-16 12:49:58.651054] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:59.656 [2024-04-16 12:49:58.651079] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:59.656 [2024-04-16 12:49:58.651094] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:59.656 [2024-04-16 12:49:58.651106] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:59.656 [2024-04-16 12:49:58.651134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:59.656 qpair failed and we were unable to recover it.
00:21:59.656 [2024-04-16 12:49:58.660925] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:59.656 [2024-04-16 12:49:58.661101] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:59.656 [2024-04-16 12:49:58.661127] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:59.656 [2024-04-16 12:49:58.661142] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:59.656 [2024-04-16 12:49:58.661154] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:59.656 [2024-04-16 12:49:58.661181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:59.656 qpair failed and we were unable to recover it.
00:21:59.656 [2024-04-16 12:49:58.670977] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:59.656 [2024-04-16 12:49:58.671101] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:59.656 [2024-04-16 12:49:58.671127] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:59.656 [2024-04-16 12:49:58.671142] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:59.656 [2024-04-16 12:49:58.671154] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:59.656 [2024-04-16 12:49:58.671182] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:59.656 qpair failed and we were unable to recover it.
00:21:59.656 [2024-04-16 12:49:58.681066] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:59.656 [2024-04-16 12:49:58.681190] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:59.656 [2024-04-16 12:49:58.681215] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:59.656 [2024-04-16 12:49:58.681230] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:59.656 [2024-04-16 12:49:58.681242] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:59.657 [2024-04-16 12:49:58.681269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:59.657 qpair failed and we were unable to recover it.
00:21:59.657 [2024-04-16 12:49:58.691050] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:59.657 [2024-04-16 12:49:58.691169] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:59.657 [2024-04-16 12:49:58.691198] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:59.657 [2024-04-16 12:49:58.691214] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:59.657 [2024-04-16 12:49:58.691226] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:59.657 [2024-04-16 12:49:58.691253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:59.657 qpair failed and we were unable to recover it.
00:21:59.657 [2024-04-16 12:49:58.701125] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:59.657 [2024-04-16 12:49:58.701253] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:59.657 [2024-04-16 12:49:58.701276] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:59.657 [2024-04-16 12:49:58.701291] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:59.657 [2024-04-16 12:49:58.701303] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:59.657 [2024-04-16 12:49:58.701329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:59.657 qpair failed and we were unable to recover it.
00:21:59.657 [2024-04-16 12:49:58.711121] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:59.657 [2024-04-16 12:49:58.711245] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:59.657 [2024-04-16 12:49:58.711270] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:59.657 [2024-04-16 12:49:58.711284] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:59.657 [2024-04-16 12:49:58.711297] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:59.657 [2024-04-16 12:49:58.711323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:59.657 qpair failed and we were unable to recover it.
00:21:59.657 [2024-04-16 12:49:58.721100] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:59.657 [2024-04-16 12:49:58.721264] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:59.657 [2024-04-16 12:49:58.721289] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:59.657 [2024-04-16 12:49:58.721303] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:59.657 [2024-04-16 12:49:58.721315] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:59.657 [2024-04-16 12:49:58.721343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:59.657 qpair failed and we were unable to recover it.
00:21:59.915 [2024-04-16 12:49:58.731144] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:59.915 [2024-04-16 12:49:58.731304] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:59.915 [2024-04-16 12:49:58.731328] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:59.915 [2024-04-16 12:49:58.731342] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:59.915 [2024-04-16 12:49:58.731354] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:59.915 [2024-04-16 12:49:58.731387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:59.915 qpair failed and we were unable to recover it.
00:21:59.915 [2024-04-16 12:49:58.741164] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:59.915 [2024-04-16 12:49:58.741284] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:59.915 [2024-04-16 12:49:58.741310] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:59.915 [2024-04-16 12:49:58.741324] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:59.915 [2024-04-16 12:49:58.741337] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:59.915 [2024-04-16 12:49:58.741364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:59.915 qpair failed and we were unable to recover it.
00:21:59.915 [2024-04-16 12:49:58.751208] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:59.915 [2024-04-16 12:49:58.751328] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:59.915 [2024-04-16 12:49:58.751353] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:59.915 [2024-04-16 12:49:58.751369] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:59.915 [2024-04-16 12:49:58.751381] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:59.915 [2024-04-16 12:49:58.751408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:59.915 qpair failed and we were unable to recover it.
00:21:59.915 [2024-04-16 12:49:58.761248] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:59.915 [2024-04-16 12:49:58.761372] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:59.915 [2024-04-16 12:49:58.761396] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:59.915 [2024-04-16 12:49:58.761411] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:59.915 [2024-04-16 12:49:58.761423] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:59.915 [2024-04-16 12:49:58.761450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:59.915 qpair failed and we were unable to recover it.
00:21:59.915 [2024-04-16 12:49:58.771263] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:59.915 [2024-04-16 12:49:58.771428] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:59.915 [2024-04-16 12:49:58.771465] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:59.915 [2024-04-16 12:49:58.771480] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:59.915 [2024-04-16 12:49:58.771493] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:59.915 [2024-04-16 12:49:58.771521] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:59.915 qpair failed and we were unable to recover it.
00:21:59.915 [2024-04-16 12:49:58.781277] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:59.915 [2024-04-16 12:49:58.781391] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:59.915 [2024-04-16 12:49:58.781421] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:59.915 [2024-04-16 12:49:58.781436] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:59.915 [2024-04-16 12:49:58.781448] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:59.915 [2024-04-16 12:49:58.781475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:59.915 qpair failed and we were unable to recover it.
00:21:59.915 [2024-04-16 12:49:58.791363] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:59.915 [2024-04-16 12:49:58.791496] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:59.915 [2024-04-16 12:49:58.791522] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:59.915 [2024-04-16 12:49:58.791537] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:59.915 [2024-04-16 12:49:58.791576] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:59.915 [2024-04-16 12:49:58.791607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:59.915 qpair failed and we were unable to recover it.
00:21:59.915 [2024-04-16 12:49:58.801369] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:59.916 [2024-04-16 12:49:58.801492] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:59.916 [2024-04-16 12:49:58.801516] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:59.916 [2024-04-16 12:49:58.801530] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:59.916 [2024-04-16 12:49:58.801558] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:59.916 [2024-04-16 12:49:58.801598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:59.916 qpair failed and we were unable to recover it.
00:21:59.916 [2024-04-16 12:49:58.811391] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:59.916 [2024-04-16 12:49:58.811512] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:59.916 [2024-04-16 12:49:58.811536] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:59.916 [2024-04-16 12:49:58.811574] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:59.916 [2024-04-16 12:49:58.811589] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:59.916 [2024-04-16 12:49:58.811619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:59.916 qpair failed and we were unable to recover it.
00:21:59.916 [2024-04-16 12:49:58.821400] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:59.916 [2024-04-16 12:49:58.821519] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:59.916 [2024-04-16 12:49:58.821543] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:59.916 [2024-04-16 12:49:58.821557] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:59.916 [2024-04-16 12:49:58.821600] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:59.916 [2024-04-16 12:49:58.821637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:59.916 qpair failed and we were unable to recover it.
00:21:59.916 [2024-04-16 12:49:58.831538] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:59.916 [2024-04-16 12:49:58.831688] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:59.916 [2024-04-16 12:49:58.831712] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:59.916 [2024-04-16 12:49:58.831728] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:59.916 [2024-04-16 12:49:58.831741] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:59.916 [2024-04-16 12:49:58.831770] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:59.916 qpair failed and we were unable to recover it.
00:21:59.916 [2024-04-16 12:49:58.841465] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:59.916 [2024-04-16 12:49:58.841606] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:59.916 [2024-04-16 12:49:58.841633] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:59.916 [2024-04-16 12:49:58.841648] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:59.916 [2024-04-16 12:49:58.841664] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:59.916 [2024-04-16 12:49:58.841692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:59.916 qpair failed and we were unable to recover it.
00:21:59.916 [2024-04-16 12:49:58.851619] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:59.916 [2024-04-16 12:49:58.851754] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:59.916 [2024-04-16 12:49:58.851779] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:59.916 [2024-04-16 12:49:58.851794] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:59.916 [2024-04-16 12:49:58.851807] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:59.916 [2024-04-16 12:49:58.851836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:59.916 qpair failed and we were unable to recover it.
00:21:59.916 [2024-04-16 12:49:58.861603] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:59.916 [2024-04-16 12:49:58.861768] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:59.916 [2024-04-16 12:49:58.861793] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:59.916 [2024-04-16 12:49:58.861808] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:59.916 [2024-04-16 12:49:58.861820] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:59.916 [2024-04-16 12:49:58.861848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:59.916 qpair failed and we were unable to recover it.
00:21:59.916 [2024-04-16 12:49:58.871642] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:59.916 [2024-04-16 12:49:58.871785] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:59.916 [2024-04-16 12:49:58.871809] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:59.916 [2024-04-16 12:49:58.871824] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:59.916 [2024-04-16 12:49:58.871837] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:59.916 [2024-04-16 12:49:58.871866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:59.916 qpair failed and we were unable to recover it.
00:21:59.916 [2024-04-16 12:49:58.881622] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:59.916 [2024-04-16 12:49:58.881776] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:59.916 [2024-04-16 12:49:58.881801] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:59.916 [2024-04-16 12:49:58.881816] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:59.916 [2024-04-16 12:49:58.881828] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:59.916 [2024-04-16 12:49:58.881857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:59.916 qpair failed and we were unable to recover it.
00:21:59.916 [2024-04-16 12:49:58.891639] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:59.916 [2024-04-16 12:49:58.891846] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:59.916 [2024-04-16 12:49:58.891887] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:59.916 [2024-04-16 12:49:58.891903] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:59.916 [2024-04-16 12:49:58.891917] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:59.916 [2024-04-16 12:49:58.891945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:59.916 qpair failed and we were unable to recover it.
00:21:59.916 [2024-04-16 12:49:58.901669] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:59.916 [2024-04-16 12:49:58.901810] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:59.916 [2024-04-16 12:49:58.901835] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:59.916 [2024-04-16 12:49:58.901864] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:59.916 [2024-04-16 12:49:58.901877] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:59.916 [2024-04-16 12:49:58.901905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:59.916 qpair failed and we were unable to recover it.
00:21:59.916 [2024-04-16 12:49:58.911680] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:59.916 [2024-04-16 12:49:58.911809] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:59.916 [2024-04-16 12:49:58.911837] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:59.916 [2024-04-16 12:49:58.911852] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:59.916 [2024-04-16 12:49:58.911885] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:59.916 [2024-04-16 12:49:58.911914] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:59.916 qpair failed and we were unable to recover it.
00:21:59.916 [2024-04-16 12:49:58.921710] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:59.916 [2024-04-16 12:49:58.921891] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:59.916 [2024-04-16 12:49:58.921917] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:59.916 [2024-04-16 12:49:58.921932] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:59.916 [2024-04-16 12:49:58.921944] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:59.916 [2024-04-16 12:49:58.921972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:59.916 qpair failed and we were unable to recover it.
00:21:59.916 [2024-04-16 12:49:58.931731] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:59.916 [2024-04-16 12:49:58.931885] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:59.916 [2024-04-16 12:49:58.931911] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:59.916 [2024-04-16 12:49:58.931926] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:59.916 [2024-04-16 12:49:58.931938] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:59.916 [2024-04-16 12:49:58.931967] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:59.916 qpair failed and we were unable to recover it.
00:21:59.916 [2024-04-16 12:49:58.941792] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:59.916 [2024-04-16 12:49:58.941929] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:59.916 [2024-04-16 12:49:58.941955] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:59.916 [2024-04-16 12:49:58.941971] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:59.916 [2024-04-16 12:49:58.941983] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:59.916 [2024-04-16 12:49:58.942011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:59.916 qpair failed and we were unable to recover it.
00:21:59.916 [2024-04-16 12:49:58.951795] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:59.916 [2024-04-16 12:49:58.951936] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:59.916 [2024-04-16 12:49:58.951967] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:59.916 [2024-04-16 12:49:58.951982] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:59.916 [2024-04-16 12:49:58.951995] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:59.916 [2024-04-16 12:49:58.952023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:59.916 qpair failed and we were unable to recover it.
00:21:59.916 [2024-04-16 12:49:58.961821] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:59.916 [2024-04-16 12:49:58.961996] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:59.916 [2024-04-16 12:49:58.962033] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:59.916 [2024-04-16 12:49:58.962049] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:59.916 [2024-04-16 12:49:58.962062] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:59.916 [2024-04-16 12:49:58.962090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:59.916 qpair failed and we were unable to recover it.
00:21:59.916 [2024-04-16 12:49:58.971922] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:59.916 [2024-04-16 12:49:58.972077] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:59.916 [2024-04-16 12:49:58.972102] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:59.916 [2024-04-16 12:49:58.972117] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:59.916 [2024-04-16 12:49:58.972130] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:59.916 [2024-04-16 12:49:58.972157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:59.916 qpair failed and we were unable to recover it.
00:21:59.916 [2024-04-16 12:49:58.981900] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:59.916 [2024-04-16 12:49:58.982018] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:59.916 [2024-04-16 12:49:58.982044] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:59.916 [2024-04-16 12:49:58.982060] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:59.916 [2024-04-16 12:49:58.982072] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:21:59.916 [2024-04-16 12:49:58.982100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:59.916 qpair failed and we were unable to recover it.
00:22:00.175 [2024-04-16 12:49:58.991927] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:00.175 [2024-04-16 12:49:58.992050] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:00.175 [2024-04-16 12:49:58.992077] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:00.175 [2024-04-16 12:49:58.992091] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:00.175 [2024-04-16 12:49:58.992103] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:22:00.175 [2024-04-16 12:49:58.992131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:00.175 qpair failed and we were unable to recover it.
00:22:00.175 [2024-04-16 12:49:59.001925] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:00.175 [2024-04-16 12:49:59.002051] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:00.175 [2024-04-16 12:49:59.002077] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:00.175 [2024-04-16 12:49:59.002097] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:00.175 [2024-04-16 12:49:59.002110] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:22:00.175 [2024-04-16 12:49:59.002137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:00.175 qpair failed and we were unable to recover it.
00:22:00.175 [2024-04-16 12:49:59.012055] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:00.175 [2024-04-16 12:49:59.012190] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:00.175 [2024-04-16 12:49:59.012215] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:00.175 [2024-04-16 12:49:59.012229] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:00.175 [2024-04-16 12:49:59.012242] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:22:00.175 [2024-04-16 12:49:59.012270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:00.175 qpair failed and we were unable to recover it.
00:22:00.175 [2024-04-16 12:49:59.022002] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:00.175 [2024-04-16 12:49:59.022114] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:00.175 [2024-04-16 12:49:59.022140] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:00.175 [2024-04-16 12:49:59.022155] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:00.175 [2024-04-16 12:49:59.022167] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:22:00.175 [2024-04-16 12:49:59.022194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:00.175 qpair failed and we were unable to recover it.
00:22:00.175 [2024-04-16 12:49:59.032023] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:00.175 [2024-04-16 12:49:59.032182] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:00.175 [2024-04-16 12:49:59.032207] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:00.175 [2024-04-16 12:49:59.032222] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:00.175 [2024-04-16 12:49:59.032234] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:22:00.175 [2024-04-16 12:49:59.032261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:00.175 qpair failed and we were unable to recover it.
00:22:00.175 [2024-04-16 12:49:59.042019] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:00.175 [2024-04-16 12:49:59.042141] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:00.175 [2024-04-16 12:49:59.042167] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:00.175 [2024-04-16 12:49:59.042182] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:00.175 [2024-04-16 12:49:59.042194] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:22:00.175 [2024-04-16 12:49:59.042221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:00.175 qpair failed and we were unable to recover it.
00:22:00.175 [2024-04-16 12:49:59.052062] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:00.175 [2024-04-16 12:49:59.052231] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:00.175 [2024-04-16 12:49:59.052256] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:00.175 [2024-04-16 12:49:59.052271] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:00.175 [2024-04-16 12:49:59.052283] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:22:00.176 [2024-04-16 12:49:59.052309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:00.176 qpair failed and we were unable to recover it.
00:22:00.176 [2024-04-16 12:49:59.062097] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:00.176 [2024-04-16 12:49:59.062232] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:00.176 [2024-04-16 12:49:59.062257] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:00.176 [2024-04-16 12:49:59.062272] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:00.176 [2024-04-16 12:49:59.062283] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:22:00.176 [2024-04-16 12:49:59.062310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:00.176 qpair failed and we were unable to recover it.
00:22:00.176 [2024-04-16 12:49:59.072140] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:00.176 [2024-04-16 12:49:59.072269] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:00.176 [2024-04-16 12:49:59.072294] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:00.176 [2024-04-16 12:49:59.072309] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:00.176 [2024-04-16 12:49:59.072321] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:22:00.176 [2024-04-16 12:49:59.072348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:00.176 qpair failed and we were unable to recover it.
00:22:00.176 [2024-04-16 12:49:59.082169] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:00.176 [2024-04-16 12:49:59.082327] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:00.176 [2024-04-16 12:49:59.082352] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:00.176 [2024-04-16 12:49:59.082366] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:00.176 [2024-04-16 12:49:59.082379] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:22:00.176 [2024-04-16 12:49:59.082406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:00.176 qpair failed and we were unable to recover it.
00:22:00.176 [2024-04-16 12:49:59.092165] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:00.176 [2024-04-16 12:49:59.092284] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:00.176 [2024-04-16 12:49:59.092308] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:00.176 [2024-04-16 12:49:59.092328] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:00.176 [2024-04-16 12:49:59.092342] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:22:00.176 [2024-04-16 12:49:59.092369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:00.176 qpair failed and we were unable to recover it.
00:22:00.176 [2024-04-16 12:49:59.102231] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:00.176 [2024-04-16 12:49:59.102364] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:00.176 [2024-04-16 12:49:59.102389] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:00.176 [2024-04-16 12:49:59.102404] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:00.176 [2024-04-16 12:49:59.102416] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:22:00.176 [2024-04-16 12:49:59.102444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:00.176 qpair failed and we were unable to recover it.
00:22:00.176 [2024-04-16 12:49:59.112222] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:00.176 [2024-04-16 12:49:59.112389] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:00.176 [2024-04-16 12:49:59.112415] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:00.176 [2024-04-16 12:49:59.112430] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:00.176 [2024-04-16 12:49:59.112442] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:22:00.176 [2024-04-16 12:49:59.112469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:00.176 qpair failed and we were unable to recover it.
00:22:00.176 [2024-04-16 12:49:59.122228] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:00.176 [2024-04-16 12:49:59.122359] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:00.176 [2024-04-16 12:49:59.122385] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:00.176 [2024-04-16 12:49:59.122400] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:00.176 [2024-04-16 12:49:59.122412] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:22:00.176 [2024-04-16 12:49:59.122440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:00.176 qpair failed and we were unable to recover it.
00:22:00.176 [2024-04-16 12:49:59.132319] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:00.176 [2024-04-16 12:49:59.132480] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:00.176 [2024-04-16 12:49:59.132505] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:00.176 [2024-04-16 12:49:59.132520] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:00.176 [2024-04-16 12:49:59.132533] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:22:00.176 [2024-04-16 12:49:59.132588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:00.176 qpair failed and we were unable to recover it.
00:22:00.176 [2024-04-16 12:49:59.142290] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:00.176 [2024-04-16 12:49:59.142404] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:00.176 [2024-04-16 12:49:59.142430] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:00.176 [2024-04-16 12:49:59.142445] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:00.176 [2024-04-16 12:49:59.142458] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:22:00.176 [2024-04-16 12:49:59.142485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:00.176 qpair failed and we were unable to recover it.
00:22:00.176 [2024-04-16 12:49:59.152363] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:00.176 [2024-04-16 12:49:59.152487] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:00.176 [2024-04-16 12:49:59.152513] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:00.176 [2024-04-16 12:49:59.152528] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:00.176 [2024-04-16 12:49:59.152540] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:22:00.176 [2024-04-16 12:49:59.152592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:00.176 qpair failed and we were unable to recover it.
00:22:00.176 [2024-04-16 12:49:59.162343] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:00.176 [2024-04-16 12:49:59.162466] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:00.176 [2024-04-16 12:49:59.162492] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:00.176 [2024-04-16 12:49:59.162506] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:00.176 [2024-04-16 12:49:59.162519] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:22:00.176 [2024-04-16 12:49:59.162561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:00.176 qpair failed and we were unable to recover it.
00:22:00.176 [2024-04-16 12:49:59.172376] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:00.176 [2024-04-16 12:49:59.172497] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:00.176 [2024-04-16 12:49:59.172523] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:00.176 [2024-04-16 12:49:59.172538] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:00.176 [2024-04-16 12:49:59.172574] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:22:00.176 [2024-04-16 12:49:59.172605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:00.176 qpair failed and we were unable to recover it.
00:22:00.176 [2024-04-16 12:49:59.182431] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:00.176 [2024-04-16 12:49:59.182684] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:00.176 [2024-04-16 12:49:59.182712] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:00.176 [2024-04-16 12:49:59.182733] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:00.176 [2024-04-16 12:49:59.182747] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:22:00.176 [2024-04-16 12:49:59.182776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:00.177 qpair failed and we were unable to recover it.
00:22:00.177 [2024-04-16 12:49:59.192526] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:00.177 [2024-04-16 12:49:59.192676] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:00.177 [2024-04-16 12:49:59.192703] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:00.177 [2024-04-16 12:49:59.192718] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:00.177 [2024-04-16 12:49:59.192731] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:22:00.177 [2024-04-16 12:49:59.192759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:00.177 qpair failed and we were unable to recover it.
00:22:00.177 [2024-04-16 12:49:59.202455] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:00.177 [2024-04-16 12:49:59.202597] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:00.177 [2024-04-16 12:49:59.202624] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:00.177 [2024-04-16 12:49:59.202639] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:00.177 [2024-04-16 12:49:59.202652] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:22:00.177 [2024-04-16 12:49:59.202680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:00.177 qpair failed and we were unable to recover it.
00:22:00.177 [2024-04-16 12:49:59.212498] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:00.177 [2024-04-16 12:49:59.212638] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:00.177 [2024-04-16 12:49:59.212665] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:00.177 [2024-04-16 12:49:59.212681] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:00.177 [2024-04-16 12:49:59.212693] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:22:00.177 [2024-04-16 12:49:59.212721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:00.177 qpair failed and we were unable to recover it.
00:22:00.177 [2024-04-16 12:49:59.222520] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:00.177 [2024-04-16 12:49:59.222663] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:00.177 [2024-04-16 12:49:59.222689] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:00.177 [2024-04-16 12:49:59.222705] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:00.177 [2024-04-16 12:49:59.222717] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:22:00.177 [2024-04-16 12:49:59.222746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:00.177 qpair failed and we were unable to recover it.
00:22:00.177 [2024-04-16 12:49:59.232601] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:00.177 [2024-04-16 12:49:59.232774] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:00.177 [2024-04-16 12:49:59.232800] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:00.177 [2024-04-16 12:49:59.232816] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:00.177 [2024-04-16 12:49:59.232829] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:22:00.177 [2024-04-16 12:49:59.232871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:00.177 qpair failed and we were unable to recover it.
00:22:00.177 [2024-04-16 12:49:59.242606] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:00.177 [2024-04-16 12:49:59.242733] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:00.177 [2024-04-16 12:49:59.242759] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:00.177 [2024-04-16 12:49:59.242774] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:00.177 [2024-04-16 12:49:59.242787] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:22:00.177 [2024-04-16 12:49:59.242815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:00.177 qpair failed and we were unable to recover it.
00:22:00.436 [2024-04-16 12:49:59.252660] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:00.436 [2024-04-16 12:49:59.252819] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:00.436 [2024-04-16 12:49:59.252845] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:00.436 [2024-04-16 12:49:59.252876] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:00.436 [2024-04-16 12:49:59.252888] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:22:00.436 [2024-04-16 12:49:59.252916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:00.436 qpair failed and we were unable to recover it.
00:22:00.436 [2024-04-16 12:49:59.262641] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:00.436 [2024-04-16 12:49:59.262772] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:00.436 [2024-04-16 12:49:59.262799] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:00.436 [2024-04-16 12:49:59.262814] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:00.436 [2024-04-16 12:49:59.262826] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:22:00.436 [2024-04-16 12:49:59.262855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:00.436 qpair failed and we were unable to recover it.
00:22:00.436 [2024-04-16 12:49:59.272699] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:00.436 [2024-04-16 12:49:59.272825] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:00.436 [2024-04-16 12:49:59.272856] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:00.436 [2024-04-16 12:49:59.272888] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:00.436 [2024-04-16 12:49:59.272901] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:22:00.436 [2024-04-16 12:49:59.272928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:00.436 qpair failed and we were unable to recover it.
00:22:00.436 [2024-04-16 12:49:59.282704] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:00.436 [2024-04-16 12:49:59.282832] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:00.436 [2024-04-16 12:49:59.282858] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:00.436 [2024-04-16 12:49:59.282888] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:00.436 [2024-04-16 12:49:59.282901] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:22:00.436 [2024-04-16 12:49:59.282929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:00.436 qpair failed and we were unable to recover it.
00:22:00.436 [2024-04-16 12:49:59.292738] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:00.436 [2024-04-16 12:49:59.292879] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:00.436 [2024-04-16 12:49:59.292904] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:00.436 [2024-04-16 12:49:59.292919] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:00.436 [2024-04-16 12:49:59.292932] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:22:00.436 [2024-04-16 12:49:59.292959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:00.436 qpair failed and we were unable to recover it.
00:22:00.436 [2024-04-16 12:49:59.302774] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:00.436 [2024-04-16 12:49:59.302931] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:00.436 [2024-04-16 12:49:59.302956] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:00.436 [2024-04-16 12:49:59.302971] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:00.436 [2024-04-16 12:49:59.302983] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050
00:22:00.436 [2024-04-16 12:49:59.303010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:00.436 qpair failed and we were unable to recover it.
00:22:00.436 [2024-04-16 12:49:59.312824] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:00.436 [2024-04-16 12:49:59.312980] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:00.436 [2024-04-16 12:49:59.313005] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:00.436 [2024-04-16 12:49:59.313020] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:00.436 [2024-04-16 12:49:59.313032] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:22:00.436 [2024-04-16 12:49:59.313059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:00.436 qpair failed and we were unable to recover it. 00:22:00.436 [2024-04-16 12:49:59.322834] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:00.436 [2024-04-16 12:49:59.322998] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:00.436 [2024-04-16 12:49:59.323023] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:00.436 [2024-04-16 12:49:59.323038] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:00.436 [2024-04-16 12:49:59.323051] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:22:00.436 [2024-04-16 12:49:59.323078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:00.436 qpair failed and we were unable to recover it. 00:22:00.436 [2024-04-16 12:49:59.332909] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:00.436 [2024-04-16 12:49:59.333029] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:00.436 [2024-04-16 12:49:59.333054] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:00.436 [2024-04-16 12:49:59.333069] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:00.437 [2024-04-16 12:49:59.333081] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:22:00.437 [2024-04-16 12:49:59.333109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:00.437 qpair failed and we were unable to recover it. 
00:22:00.437 [2024-04-16 12:49:59.342958] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:00.437 [2024-04-16 12:49:59.343083] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:00.437 [2024-04-16 12:49:59.343108] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:00.437 [2024-04-16 12:49:59.343122] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:00.437 [2024-04-16 12:49:59.343135] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:22:00.437 [2024-04-16 12:49:59.343162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:00.437 qpair failed and we were unable to recover it. 00:22:00.437 [2024-04-16 12:49:59.352930] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:00.437 [2024-04-16 12:49:59.353084] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:00.437 [2024-04-16 12:49:59.353110] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:00.437 [2024-04-16 12:49:59.353125] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:00.437 [2024-04-16 12:49:59.353138] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:22:00.437 [2024-04-16 12:49:59.353166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:00.437 qpair failed and we were unable to recover it. 00:22:00.437 [2024-04-16 12:49:59.362943] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:00.437 [2024-04-16 12:49:59.363064] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:00.437 [2024-04-16 12:49:59.363094] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:00.437 [2024-04-16 12:49:59.363110] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:00.437 [2024-04-16 12:49:59.363122] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:22:00.437 [2024-04-16 12:49:59.363149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:00.437 qpair failed and we were unable to recover it. 
00:22:00.437 [2024-04-16 12:49:59.372992] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:00.437 [2024-04-16 12:49:59.373159] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:00.437 [2024-04-16 12:49:59.373184] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:00.437 [2024-04-16 12:49:59.373199] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:00.437 [2024-04-16 12:49:59.373211] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:22:00.437 [2024-04-16 12:49:59.373238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:00.437 qpair failed and we were unable to recover it. 00:22:00.437 [2024-04-16 12:49:59.383009] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:00.437 [2024-04-16 12:49:59.383123] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:00.437 [2024-04-16 12:49:59.383149] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:00.437 [2024-04-16 12:49:59.383164] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:00.437 [2024-04-16 12:49:59.383175] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:22:00.437 [2024-04-16 12:49:59.383202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:00.437 qpair failed and we were unable to recover it. 00:22:00.437 [2024-04-16 12:49:59.393013] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:00.437 [2024-04-16 12:49:59.393157] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:00.437 [2024-04-16 12:49:59.393183] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:00.437 [2024-04-16 12:49:59.393198] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:00.437 [2024-04-16 12:49:59.393210] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:22:00.437 [2024-04-16 12:49:59.393237] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:00.437 qpair failed and we were unable to recover it. 
00:22:00.437 [2024-04-16 12:49:59.403115] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:00.437 [2024-04-16 12:49:59.403234] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:00.437 [2024-04-16 12:49:59.403259] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:00.437 [2024-04-16 12:49:59.403273] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:00.437 [2024-04-16 12:49:59.403286] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:22:00.437 [2024-04-16 12:49:59.403320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:00.437 qpair failed and we were unable to recover it. 00:22:00.437 [2024-04-16 12:49:59.413057] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:00.437 [2024-04-16 12:49:59.413237] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:00.437 [2024-04-16 12:49:59.413263] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:00.437 [2024-04-16 12:49:59.413278] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:00.437 [2024-04-16 12:49:59.413290] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:22:00.437 [2024-04-16 12:49:59.413318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:00.437 qpair failed and we were unable to recover it. 00:22:00.437 [2024-04-16 12:49:59.423093] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:00.437 [2024-04-16 12:49:59.423232] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:00.437 [2024-04-16 12:49:59.423258] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:00.437 [2024-04-16 12:49:59.423273] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:00.437 [2024-04-16 12:49:59.423285] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:22:00.437 [2024-04-16 12:49:59.423312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:00.437 qpair failed and we were unable to recover it. 
00:22:00.437 [2024-04-16 12:49:59.433135] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:00.437 [2024-04-16 12:49:59.433270] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:00.437 [2024-04-16 12:49:59.433296] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:00.437 [2024-04-16 12:49:59.433312] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:00.437 [2024-04-16 12:49:59.433324] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:22:00.437 [2024-04-16 12:49:59.433353] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:00.437 qpair failed and we were unable to recover it. 00:22:00.437 [2024-04-16 12:49:59.443153] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:00.437 [2024-04-16 12:49:59.443271] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:00.437 [2024-04-16 12:49:59.443296] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:00.437 [2024-04-16 12:49:59.443310] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:00.437 [2024-04-16 12:49:59.443322] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:22:00.437 [2024-04-16 12:49:59.443349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:00.437 qpair failed and we were unable to recover it. 00:22:00.437 [2024-04-16 12:49:59.453169] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:00.437 [2024-04-16 12:49:59.453299] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:00.437 [2024-04-16 12:49:59.453330] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:00.437 [2024-04-16 12:49:59.453345] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:00.437 [2024-04-16 12:49:59.453357] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:22:00.437 [2024-04-16 12:49:59.453385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:00.437 qpair failed and we were unable to recover it. 
00:22:00.437 [2024-04-16 12:49:59.463279] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:00.437 [2024-04-16 12:49:59.463399] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:00.437 [2024-04-16 12:49:59.463424] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:00.437 [2024-04-16 12:49:59.463439] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:00.437 [2024-04-16 12:49:59.463451] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:22:00.437 [2024-04-16 12:49:59.463478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:00.437 qpair failed and we were unable to recover it. 00:22:00.437 [2024-04-16 12:49:59.473247] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:00.438 [2024-04-16 12:49:59.473401] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:00.438 [2024-04-16 12:49:59.473427] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:00.438 [2024-04-16 12:49:59.473442] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:00.438 [2024-04-16 12:49:59.473454] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:22:00.438 [2024-04-16 12:49:59.473481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:00.438 qpair failed and we were unable to recover it. 00:22:00.438 [2024-04-16 12:49:59.483253] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:00.438 [2024-04-16 12:49:59.483373] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:00.438 [2024-04-16 12:49:59.483399] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:00.438 [2024-04-16 12:49:59.483414] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:00.438 [2024-04-16 12:49:59.483426] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:22:00.438 [2024-04-16 12:49:59.483453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:00.438 qpair failed and we were unable to recover it. 
00:22:00.438 [2024-04-16 12:49:59.493310] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:00.438 [2024-04-16 12:49:59.493432] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:00.438 [2024-04-16 12:49:59.493457] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:00.438 [2024-04-16 12:49:59.493472] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:00.438 [2024-04-16 12:49:59.493485] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:22:00.438 [2024-04-16 12:49:59.493517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:00.438 qpair failed and we were unable to recover it. 00:22:00.438 [2024-04-16 12:49:59.503324] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:00.438 [2024-04-16 12:49:59.503444] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:00.438 [2024-04-16 12:49:59.503470] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:00.438 [2024-04-16 12:49:59.503485] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:00.438 [2024-04-16 12:49:59.503497] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:22:00.438 [2024-04-16 12:49:59.503525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:00.438 qpair failed and we were unable to recover it. 00:22:00.697 [2024-04-16 12:49:59.513354] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:00.697 [2024-04-16 12:49:59.513488] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:00.697 [2024-04-16 12:49:59.513513] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:00.697 [2024-04-16 12:49:59.513528] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:00.697 [2024-04-16 12:49:59.513540] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:22:00.697 [2024-04-16 12:49:59.513594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:00.697 qpair failed and we were unable to recover it. 
00:22:00.697 [2024-04-16 12:49:59.523390] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:00.697 [2024-04-16 12:49:59.523503] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:00.697 [2024-04-16 12:49:59.523529] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:00.697 [2024-04-16 12:49:59.523558] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:00.697 [2024-04-16 12:49:59.523581] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:22:00.697 [2024-04-16 12:49:59.523610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:00.697 qpair failed and we were unable to recover it. 00:22:00.697 [2024-04-16 12:49:59.533411] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:00.697 [2024-04-16 12:49:59.533589] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:00.697 [2024-04-16 12:49:59.533616] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:00.697 [2024-04-16 12:49:59.533632] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:00.697 [2024-04-16 12:49:59.533645] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:22:00.697 [2024-04-16 12:49:59.533673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:00.697 qpair failed and we were unable to recover it. 00:22:00.697 [2024-04-16 12:49:59.543421] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:00.697 [2024-04-16 12:49:59.543545] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:00.697 [2024-04-16 12:49:59.543608] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:00.697 [2024-04-16 12:49:59.543627] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:00.697 [2024-04-16 12:49:59.543639] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:22:00.697 [2024-04-16 12:49:59.543669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:00.697 qpair failed and we were unable to recover it. 
00:22:00.697 [2024-04-16 12:49:59.553534] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:00.697 [2024-04-16 12:49:59.553728] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:00.697 [2024-04-16 12:49:59.553755] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:00.697 [2024-04-16 12:49:59.553771] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:00.697 [2024-04-16 12:49:59.553783] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:22:00.697 [2024-04-16 12:49:59.553811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:00.697 qpair failed and we were unable to recover it. 00:22:00.697 [2024-04-16 12:49:59.563529] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:00.697 [2024-04-16 12:49:59.563696] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:00.697 [2024-04-16 12:49:59.563722] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:00.697 [2024-04-16 12:49:59.563738] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:00.697 [2024-04-16 12:49:59.563750] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:22:00.697 [2024-04-16 12:49:59.563779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:00.697 qpair failed and we were unable to recover it. 00:22:00.697 [2024-04-16 12:49:59.573509] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:00.697 [2024-04-16 12:49:59.573653] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:00.697 [2024-04-16 12:49:59.573680] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:00.697 [2024-04-16 12:49:59.573696] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:00.697 [2024-04-16 12:49:59.573709] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:22:00.697 [2024-04-16 12:49:59.573737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:00.697 qpair failed and we were unable to recover it. 
00:22:00.697 [2024-04-16 12:49:59.583528] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:00.697 [2024-04-16 12:49:59.583682] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:00.697 [2024-04-16 12:49:59.583708] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:00.697 [2024-04-16 12:49:59.583724] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:00.697 [2024-04-16 12:49:59.583742] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:22:00.697 [2024-04-16 12:49:59.583772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:00.697 qpair failed and we were unable to recover it. 00:22:00.697 [2024-04-16 12:49:59.593636] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:00.697 [2024-04-16 12:49:59.593785] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:00.697 [2024-04-16 12:49:59.593812] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:00.697 [2024-04-16 12:49:59.593827] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:00.697 [2024-04-16 12:49:59.593840] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:22:00.697 [2024-04-16 12:49:59.593883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:00.697 qpair failed and we were unable to recover it. 00:22:00.697 [2024-04-16 12:49:59.603630] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:00.697 [2024-04-16 12:49:59.603760] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:00.697 [2024-04-16 12:49:59.603787] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:00.697 [2024-04-16 12:49:59.603803] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:00.697 [2024-04-16 12:49:59.603815] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:22:00.697 [2024-04-16 12:49:59.603858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:00.697 qpair failed and we were unable to recover it. 
00:22:00.697 [2024-04-16 12:49:59.613690] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:00.697 [2024-04-16 12:49:59.613865] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:00.697 [2024-04-16 12:49:59.613890] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:00.697 [2024-04-16 12:49:59.613905] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:00.697 [2024-04-16 12:49:59.613917] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:22:00.697 [2024-04-16 12:49:59.613945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:00.697 qpair failed and we were unable to recover it. 00:22:00.697 [2024-04-16 12:49:59.623680] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:00.697 [2024-04-16 12:49:59.623818] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:00.697 [2024-04-16 12:49:59.623844] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:00.697 [2024-04-16 12:49:59.623875] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:00.697 [2024-04-16 12:49:59.623888] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:22:00.697 [2024-04-16 12:49:59.623915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:00.697 qpair failed and we were unable to recover it. 00:22:00.697 [2024-04-16 12:49:59.633702] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:00.698 [2024-04-16 12:49:59.633858] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:00.698 [2024-04-16 12:49:59.633899] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:00.698 [2024-04-16 12:49:59.633914] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:00.698 [2024-04-16 12:49:59.633926] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:22:00.698 [2024-04-16 12:49:59.633954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:00.698 qpair failed and we were unable to recover it. 
00:22:00.698 [2024-04-16 12:49:59.643728] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:00.698 [2024-04-16 12:49:59.643877] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:00.698 [2024-04-16 12:49:59.643902] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:00.698 [2024-04-16 12:49:59.643916] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:00.698 [2024-04-16 12:49:59.643928] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:22:00.698 [2024-04-16 12:49:59.643956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:00.698 qpair failed and we were unable to recover it. 00:22:00.698 [2024-04-16 12:49:59.653781] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:00.698 [2024-04-16 12:49:59.653953] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:00.698 [2024-04-16 12:49:59.653979] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:00.698 [2024-04-16 12:49:59.653994] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:00.698 [2024-04-16 12:49:59.654006] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:22:00.698 [2024-04-16 12:49:59.654033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:00.698 qpair failed and we were unable to recover it. 00:22:00.698 [2024-04-16 12:49:59.663894] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:00.698 [2024-04-16 12:49:59.664022] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:00.698 [2024-04-16 12:49:59.664048] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:00.698 [2024-04-16 12:49:59.664063] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:00.698 [2024-04-16 12:49:59.664075] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:22:00.698 [2024-04-16 12:49:59.664103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:00.698 qpair failed and we were unable to recover it. 
00:22:00.698 [2024-04-16 12:49:59.673889] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:00.698 [2024-04-16 12:49:59.674034] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:00.698 [2024-04-16 12:49:59.674060] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:00.698 [2024-04-16 12:49:59.674075] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:00.698 [2024-04-16 12:49:59.674092] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:22:00.698 [2024-04-16 12:49:59.674121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:00.698 qpair failed and we were unable to recover it. 00:22:00.698 [2024-04-16 12:49:59.683902] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:00.698 [2024-04-16 12:49:59.684022] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:00.698 [2024-04-16 12:49:59.684048] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:00.698 [2024-04-16 12:49:59.684063] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:00.698 [2024-04-16 12:49:59.684075] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:22:00.698 [2024-04-16 12:49:59.684102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:00.698 qpair failed and we were unable to recover it. 00:22:00.698 [2024-04-16 12:49:59.693881] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:00.698 [2024-04-16 12:49:59.694008] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:00.698 [2024-04-16 12:49:59.694034] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:00.698 [2024-04-16 12:49:59.694049] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:00.698 [2024-04-16 12:49:59.694061] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:22:00.698 [2024-04-16 12:49:59.694105] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:00.698 qpair failed and we were unable to recover it. 
00:22:00.698 [2024-04-16 12:49:59.703914] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:00.698 [2024-04-16 12:49:59.704040] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:00.698 [2024-04-16 12:49:59.704065] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:00.698 [2024-04-16 12:49:59.704080] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:00.698 [2024-04-16 12:49:59.704092] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:22:00.698 [2024-04-16 12:49:59.704130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:00.698 qpair failed and we were unable to recover it. 00:22:00.698 [2024-04-16 12:49:59.713966] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:00.698 [2024-04-16 12:49:59.714094] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:00.698 [2024-04-16 12:49:59.714119] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:00.698 [2024-04-16 12:49:59.714134] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:00.698 [2024-04-16 12:49:59.714146] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:22:00.698 [2024-04-16 12:49:59.714172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:00.698 qpair failed and we were unable to recover it. 00:22:00.698 [2024-04-16 12:49:59.723973] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:00.698 [2024-04-16 12:49:59.724102] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:00.698 [2024-04-16 12:49:59.724127] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:00.698 [2024-04-16 12:49:59.724143] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:00.698 [2024-04-16 12:49:59.724154] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:22:00.698 [2024-04-16 12:49:59.724181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:00.698 qpair failed and we were unable to recover it. 
00:22:00.698 [2024-04-16 12:49:59.734072] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:00.698 [2024-04-16 12:49:59.734193] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:00.698 [2024-04-16 12:49:59.734219] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:00.698 [2024-04-16 12:49:59.734233] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:00.698 [2024-04-16 12:49:59.734246] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:22:00.698 [2024-04-16 12:49:59.734274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:00.698 qpair failed and we were unable to recover it. 00:22:00.698 [2024-04-16 12:49:59.744064] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:00.698 [2024-04-16 12:49:59.744179] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:00.698 [2024-04-16 12:49:59.744204] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:00.698 [2024-04-16 12:49:59.744219] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:00.698 [2024-04-16 12:49:59.744230] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:22:00.698 [2024-04-16 12:49:59.744258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:00.698 qpair failed and we were unable to recover it. 00:22:00.698 [2024-04-16 12:49:59.754080] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:00.698 [2024-04-16 12:49:59.754205] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:00.698 [2024-04-16 12:49:59.754231] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:00.698 [2024-04-16 12:49:59.754246] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:00.698 [2024-04-16 12:49:59.754259] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:22:00.698 [2024-04-16 12:49:59.754286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:00.698 qpair failed and we were unable to recover it. 
00:22:00.698 [2024-04-16 12:49:59.764201] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:00.698 [2024-04-16 12:49:59.764335] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:00.699 [2024-04-16 12:49:59.764362] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:00.699 [2024-04-16 12:49:59.764377] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:00.699 [2024-04-16 12:49:59.764396] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:22:00.699 [2024-04-16 12:49:59.764425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:00.699 qpair failed and we were unable to recover it. 00:22:00.957 [2024-04-16 12:49:59.774156] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:00.957 [2024-04-16 12:49:59.774302] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:00.957 [2024-04-16 12:49:59.774328] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:00.957 [2024-04-16 12:49:59.774342] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:00.957 [2024-04-16 12:49:59.774354] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:22:00.957 [2024-04-16 12:49:59.774382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:00.957 qpair failed and we were unable to recover it. 00:22:00.957 [2024-04-16 12:49:59.784136] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:00.957 [2024-04-16 12:49:59.784261] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:00.957 [2024-04-16 12:49:59.784287] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:00.957 [2024-04-16 12:49:59.784302] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:00.957 [2024-04-16 12:49:59.784314] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:22:00.957 [2024-04-16 12:49:59.784341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:00.957 qpair failed and we were unable to recover it. 
00:22:00.957 [2024-04-16 12:49:59.794218] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:00.957 [2024-04-16 12:49:59.794354] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:00.957 [2024-04-16 12:49:59.794380] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:00.957 [2024-04-16 12:49:59.794395] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:00.957 [2024-04-16 12:49:59.794413] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:22:00.957 [2024-04-16 12:49:59.794440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:00.957 qpair failed and we were unable to recover it. 00:22:00.957 [2024-04-16 12:49:59.804240] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:00.958 [2024-04-16 12:49:59.804363] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:00.958 [2024-04-16 12:49:59.804389] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:00.958 [2024-04-16 12:49:59.804403] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:00.958 [2024-04-16 12:49:59.804415] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:22:00.958 [2024-04-16 12:49:59.804443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:00.958 qpair failed and we were unable to recover it. 00:22:00.958 [2024-04-16 12:49:59.814254] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:00.958 [2024-04-16 12:49:59.814379] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:00.958 [2024-04-16 12:49:59.814404] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:00.958 [2024-04-16 12:49:59.814419] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:00.958 [2024-04-16 12:49:59.814432] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:22:00.958 [2024-04-16 12:49:59.814459] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:00.958 qpair failed and we were unable to recover it. 
00:22:00.958 [2024-04-16 12:49:59.824333] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:00.958 [2024-04-16 12:49:59.824446] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:00.958 [2024-04-16 12:49:59.824472] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:00.958 [2024-04-16 12:49:59.824487] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:00.958 [2024-04-16 12:49:59.824499] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:22:00.958 [2024-04-16 12:49:59.824525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:00.958 qpair failed and we were unable to recover it. 00:22:00.958 [2024-04-16 12:49:59.834323] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:00.958 [2024-04-16 12:49:59.834445] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:00.958 [2024-04-16 12:49:59.834469] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:00.958 [2024-04-16 12:49:59.834484] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:00.958 [2024-04-16 12:49:59.834496] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:22:00.958 [2024-04-16 12:49:59.834524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:00.958 qpair failed and we were unable to recover it. 00:22:00.958 [2024-04-16 12:49:59.844360] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:00.958 [2024-04-16 12:49:59.844481] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:00.958 [2024-04-16 12:49:59.844505] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:00.958 [2024-04-16 12:49:59.844519] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:00.958 [2024-04-16 12:49:59.844531] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:22:00.958 [2024-04-16 12:49:59.844584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:00.958 qpair failed and we were unable to recover it. 
00:22:01.481 [2024-04-16 12:50:00.516297] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:01.481 [2024-04-16 12:50:00.516507] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:01.481 [2024-04-16 12:50:00.516535] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:01.481 [2024-04-16 12:50:00.516550] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:01.481 [2024-04-16 12:50:00.516571] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:22:01.481 [2024-04-16 12:50:00.516603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:01.481 qpair failed and we were unable to recover it. 00:22:01.481 [2024-04-16 12:50:00.526391] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:01.481 [2024-04-16 12:50:00.526518] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:01.481 [2024-04-16 12:50:00.526544] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:01.481 [2024-04-16 12:50:00.526559] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:01.481 [2024-04-16 12:50:00.526590] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:22:01.481 [2024-04-16 12:50:00.526620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:01.481 qpair failed and we were unable to recover it. 00:22:01.481 [2024-04-16 12:50:00.536397] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:01.481 [2024-04-16 12:50:00.536539] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:01.481 [2024-04-16 12:50:00.536572] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:01.481 [2024-04-16 12:50:00.536589] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:01.481 [2024-04-16 12:50:00.536602] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:22:01.481 [2024-04-16 12:50:00.536630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:01.481 qpair failed and we were unable to recover it. 
00:22:01.481 [2024-04-16 12:50:00.546386] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:01.481 [2024-04-16 12:50:00.546526] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:01.481 [2024-04-16 12:50:00.546553] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:01.481 [2024-04-16 12:50:00.546578] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:01.481 [2024-04-16 12:50:00.546592] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:22:01.481 [2024-04-16 12:50:00.546620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:01.481 qpair failed and we were unable to recover it. 00:22:01.740 [2024-04-16 12:50:00.556408] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:01.740 [2024-04-16 12:50:00.556531] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:01.740 [2024-04-16 12:50:00.556558] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:01.740 [2024-04-16 12:50:00.556583] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:01.740 [2024-04-16 12:50:00.556596] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:22:01.740 [2024-04-16 12:50:00.556625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:01.740 qpair failed and we were unable to recover it. 00:22:01.740 [2024-04-16 12:50:00.566437] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:01.740 [2024-04-16 12:50:00.566583] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:01.740 [2024-04-16 12:50:00.566610] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:01.740 [2024-04-16 12:50:00.566625] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:01.740 [2024-04-16 12:50:00.566638] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:22:01.740 [2024-04-16 12:50:00.566668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:01.740 qpair failed and we were unable to recover it. 
00:22:01.740 [2024-04-16 12:50:00.576448] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:01.740 [2024-04-16 12:50:00.576617] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:01.740 [2024-04-16 12:50:00.576644] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:01.740 [2024-04-16 12:50:00.576659] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:01.740 [2024-04-16 12:50:00.576671] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:22:01.740 [2024-04-16 12:50:00.576699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:01.740 qpair failed and we were unable to recover it. 00:22:01.740 [2024-04-16 12:50:00.586512] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:01.740 [2024-04-16 12:50:00.586704] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:01.740 [2024-04-16 12:50:00.586732] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:01.740 [2024-04-16 12:50:00.586747] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:01.740 [2024-04-16 12:50:00.586759] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:22:01.740 [2024-04-16 12:50:00.586788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:01.740 qpair failed and we were unable to recover it. 00:22:01.740 [2024-04-16 12:50:00.596531] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:01.740 [2024-04-16 12:50:00.596760] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:01.740 [2024-04-16 12:50:00.596787] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:01.740 [2024-04-16 12:50:00.596802] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:01.740 [2024-04-16 12:50:00.596815] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:22:01.741 [2024-04-16 12:50:00.596844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:01.741 qpair failed and we were unable to recover it. 
00:22:01.741 [2024-04-16 12:50:00.606498] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:01.741 [2024-04-16 12:50:00.606654] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:01.741 [2024-04-16 12:50:00.606681] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:01.741 [2024-04-16 12:50:00.606697] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:01.741 [2024-04-16 12:50:00.606710] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:22:01.741 [2024-04-16 12:50:00.606751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:01.741 qpair failed and we were unable to recover it. 00:22:01.741 [2024-04-16 12:50:00.616583] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:01.741 [2024-04-16 12:50:00.616706] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:01.741 [2024-04-16 12:50:00.616734] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:01.741 [2024-04-16 12:50:00.616755] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:01.741 [2024-04-16 12:50:00.616768] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:22:01.741 [2024-04-16 12:50:00.616797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:01.741 qpair failed and we were unable to recover it. 00:22:01.741 [2024-04-16 12:50:00.626636] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:01.741 [2024-04-16 12:50:00.626757] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:01.741 [2024-04-16 12:50:00.626783] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:01.741 [2024-04-16 12:50:00.626799] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:01.741 [2024-04-16 12:50:00.626811] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:22:01.741 [2024-04-16 12:50:00.626839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:01.741 qpair failed and we were unable to recover it. 
00:22:01.741 [2024-04-16 12:50:00.636686] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:01.741 [2024-04-16 12:50:00.636856] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:01.741 [2024-04-16 12:50:00.636883] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:01.741 [2024-04-16 12:50:00.636897] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:01.741 [2024-04-16 12:50:00.636909] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:22:01.741 [2024-04-16 12:50:00.636938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:01.741 qpair failed and we were unable to recover it. 00:22:01.741 [2024-04-16 12:50:00.646740] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:01.741 [2024-04-16 12:50:00.646868] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:01.741 [2024-04-16 12:50:00.646893] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:01.741 [2024-04-16 12:50:00.646908] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:01.741 [2024-04-16 12:50:00.646920] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:22:01.741 [2024-04-16 12:50:00.646948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:01.741 qpair failed and we were unable to recover it. 00:22:01.741 [2024-04-16 12:50:00.656751] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:01.741 [2024-04-16 12:50:00.656874] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:01.741 [2024-04-16 12:50:00.656900] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:01.741 [2024-04-16 12:50:00.656915] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:01.741 [2024-04-16 12:50:00.656927] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:22:01.741 [2024-04-16 12:50:00.656956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:01.741 qpair failed and we were unable to recover it. 
00:22:01.741 [2024-04-16 12:50:00.666798] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:01.741 [2024-04-16 12:50:00.666926] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:01.741 [2024-04-16 12:50:00.666953] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:01.741 [2024-04-16 12:50:00.666969] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:01.741 [2024-04-16 12:50:00.666981] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:22:01.741 [2024-04-16 12:50:00.667009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:01.741 qpair failed and we were unable to recover it. 00:22:01.741 [2024-04-16 12:50:00.676736] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:01.741 [2024-04-16 12:50:00.676881] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:01.741 [2024-04-16 12:50:00.676916] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:01.741 [2024-04-16 12:50:00.676935] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:01.741 [2024-04-16 12:50:00.676948] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:22:01.741 [2024-04-16 12:50:00.676977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:01.741 qpair failed and we were unable to recover it. 00:22:01.741 [2024-04-16 12:50:00.686783] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:01.741 [2024-04-16 12:50:00.686908] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:01.741 [2024-04-16 12:50:00.686935] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:01.741 [2024-04-16 12:50:00.686950] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:01.741 [2024-04-16 12:50:00.686962] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:22:01.741 [2024-04-16 12:50:00.686991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:01.741 qpair failed and we were unable to recover it. 
00:22:01.741 [2024-04-16 12:50:00.696829] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:01.741 [2024-04-16 12:50:00.696956] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:01.741 [2024-04-16 12:50:00.696983] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:01.741 [2024-04-16 12:50:00.696998] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:01.741 [2024-04-16 12:50:00.697011] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:22:01.741 [2024-04-16 12:50:00.697039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:01.741 qpair failed and we were unable to recover it. 00:22:01.741 [2024-04-16 12:50:00.706844] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:01.741 [2024-04-16 12:50:00.706996] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:01.741 [2024-04-16 12:50:00.707022] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:01.741 [2024-04-16 12:50:00.707043] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:01.741 [2024-04-16 12:50:00.707062] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:22:01.741 [2024-04-16 12:50:00.707103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:01.741 qpair failed and we were unable to recover it. 00:22:01.741 [2024-04-16 12:50:00.716945] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:01.741 [2024-04-16 12:50:00.717074] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:01.741 [2024-04-16 12:50:00.717101] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:01.741 [2024-04-16 12:50:00.717115] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:01.742 [2024-04-16 12:50:00.717128] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:22:01.742 [2024-04-16 12:50:00.717157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:01.742 qpair failed and we were unable to recover it. 
00:22:01.742 [2024-04-16 12:50:00.726867] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:01.742 [2024-04-16 12:50:00.726987] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:01.742 [2024-04-16 12:50:00.727013] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:01.742 [2024-04-16 12:50:00.727029] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:01.742 [2024-04-16 12:50:00.727041] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:22:01.742 [2024-04-16 12:50:00.727069] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:01.742 qpair failed and we were unable to recover it. 00:22:01.742 [2024-04-16 12:50:00.736963] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:01.742 [2024-04-16 12:50:00.737147] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:01.742 [2024-04-16 12:50:00.737173] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:01.742 [2024-04-16 12:50:00.737188] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:01.742 [2024-04-16 12:50:00.737200] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:22:01.742 [2024-04-16 12:50:00.737229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:01.742 qpair failed and we were unable to recover it. 00:22:01.742 [2024-04-16 12:50:00.746956] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:01.742 [2024-04-16 12:50:00.747093] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:01.742 [2024-04-16 12:50:00.747120] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:01.742 [2024-04-16 12:50:00.747135] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:01.742 [2024-04-16 12:50:00.747156] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:22:01.742 [2024-04-16 12:50:00.747197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:01.742 qpair failed and we were unable to recover it. 
00:22:01.742 [2024-04-16 12:50:00.757003] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:01.742 [2024-04-16 12:50:00.757123] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:01.742 [2024-04-16 12:50:00.757150] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:01.742 [2024-04-16 12:50:00.757165] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:01.742 [2024-04-16 12:50:00.757178] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:22:01.742 [2024-04-16 12:50:00.757208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:01.742 qpair failed and we were unable to recover it. 00:22:01.742 [2024-04-16 12:50:00.767018] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:01.742 [2024-04-16 12:50:00.767156] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:01.742 [2024-04-16 12:50:00.767182] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:01.742 [2024-04-16 12:50:00.767198] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:01.742 [2024-04-16 12:50:00.767210] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:22:01.742 [2024-04-16 12:50:00.767238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:01.742 qpair failed and we were unable to recover it. 00:22:01.742 [2024-04-16 12:50:00.777031] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:01.742 [2024-04-16 12:50:00.777172] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:01.742 [2024-04-16 12:50:00.777199] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:01.742 [2024-04-16 12:50:00.777214] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:01.742 [2024-04-16 12:50:00.777227] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:22:01.742 [2024-04-16 12:50:00.777255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:01.742 qpair failed and we were unable to recover it. 
00:22:01.742 [2024-04-16 12:50:00.787082] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:01.742 [2024-04-16 12:50:00.787313] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:01.742 [2024-04-16 12:50:00.787340] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:01.742 [2024-04-16 12:50:00.787356] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:01.742 [2024-04-16 12:50:00.787368] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:22:01.742 [2024-04-16 12:50:00.787397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:01.742 qpair failed and we were unable to recover it. 00:22:01.742 [2024-04-16 12:50:00.797156] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:01.742 [2024-04-16 12:50:00.797323] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:01.742 [2024-04-16 12:50:00.797350] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:01.742 [2024-04-16 12:50:00.797371] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:01.742 [2024-04-16 12:50:00.797384] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:22:01.742 [2024-04-16 12:50:00.797413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:01.742 qpair failed and we were unable to recover it. 00:22:01.742 [2024-04-16 12:50:00.807094] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:01.742 [2024-04-16 12:50:00.807232] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:01.742 [2024-04-16 12:50:00.807259] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:01.742 [2024-04-16 12:50:00.807274] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:01.742 [2024-04-16 12:50:00.807286] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:22:01.742 [2024-04-16 12:50:00.807314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:01.742 qpair failed and we were unable to recover it. 
00:22:02.001 [2024-04-16 12:50:00.817146] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:02.001 [2024-04-16 12:50:00.817331] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:02.001 [2024-04-16 12:50:00.817362] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:02.001 [2024-04-16 12:50:00.817379] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:02.001 [2024-04-16 12:50:00.817391] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:22:02.001 [2024-04-16 12:50:00.817420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:02.001 qpair failed and we were unable to recover it. 00:22:02.001 [2024-04-16 12:50:00.827161] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:02.001 [2024-04-16 12:50:00.827287] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:02.001 [2024-04-16 12:50:00.827315] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:02.001 [2024-04-16 12:50:00.827330] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:02.001 [2024-04-16 12:50:00.827343] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:22:02.001 [2024-04-16 12:50:00.827371] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:02.001 qpair failed and we were unable to recover it. 00:22:02.001 [2024-04-16 12:50:00.837199] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:02.001 [2024-04-16 12:50:00.837341] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:02.001 [2024-04-16 12:50:00.837368] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:02.001 [2024-04-16 12:50:00.837383] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:02.001 [2024-04-16 12:50:00.837396] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:22:02.001 [2024-04-16 12:50:00.837423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:02.001 qpair failed and we were unable to recover it. 
00:22:02.001 [2024-04-16 12:50:00.847224] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:02.001 [2024-04-16 12:50:00.847363] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:02.001 [2024-04-16 12:50:00.847390] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:02.001 [2024-04-16 12:50:00.847406] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:02.001 [2024-04-16 12:50:00.847418] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:22:02.001 [2024-04-16 12:50:00.847447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:02.001 qpair failed and we were unable to recover it. 00:22:02.001 [2024-04-16 12:50:00.857254] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:02.001 [2024-04-16 12:50:00.857406] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:02.001 [2024-04-16 12:50:00.857447] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:02.001 [2024-04-16 12:50:00.857463] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:02.001 [2024-04-16 12:50:00.857476] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:22:02.001 [2024-04-16 12:50:00.857508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:02.001 qpair failed and we were unable to recover it. 00:22:02.001 [2024-04-16 12:50:00.867294] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:02.001 [2024-04-16 12:50:00.867418] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:02.002 [2024-04-16 12:50:00.867445] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:02.002 [2024-04-16 12:50:00.867460] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:02.002 [2024-04-16 12:50:00.867472] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:22:02.002 [2024-04-16 12:50:00.867504] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:02.002 qpair failed and we were unable to recover it. 
00:22:02.002 [2024-04-16 12:50:00.877314] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:02.002 [2024-04-16 12:50:00.877451] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:02.002 [2024-04-16 12:50:00.877478] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:02.002 [2024-04-16 12:50:00.877493] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:02.002 [2024-04-16 12:50:00.877505] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:22:02.002 [2024-04-16 12:50:00.877534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:02.002 qpair failed and we were unable to recover it. 00:22:02.002 [2024-04-16 12:50:00.887331] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:02.002 [2024-04-16 12:50:00.887465] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:02.002 [2024-04-16 12:50:00.887504] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:02.002 [2024-04-16 12:50:00.887527] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:02.002 [2024-04-16 12:50:00.887541] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:22:02.002 [2024-04-16 12:50:00.887578] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:02.002 qpair failed and we were unable to recover it. 00:22:02.002 [2024-04-16 12:50:00.897347] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:02.002 [2024-04-16 12:50:00.897488] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:02.002 [2024-04-16 12:50:00.897515] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:02.002 [2024-04-16 12:50:00.897531] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:02.002 [2024-04-16 12:50:00.897543] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:22:02.002 [2024-04-16 12:50:00.897580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:02.002 qpair failed and we were unable to recover it. 
00:22:02.002 [2024-04-16 12:50:00.907408] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:02.002 [2024-04-16 12:50:00.907581] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:02.002 [2024-04-16 12:50:00.907608] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:02.002 [2024-04-16 12:50:00.907624] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:02.002 [2024-04-16 12:50:00.907636] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:22:02.002 [2024-04-16 12:50:00.907664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:02.002 qpair failed and we were unable to recover it. 00:22:02.002 [2024-04-16 12:50:00.917411] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:02.002 [2024-04-16 12:50:00.917548] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:02.002 [2024-04-16 12:50:00.917583] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:02.002 [2024-04-16 12:50:00.917599] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:02.002 [2024-04-16 12:50:00.917612] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:22:02.002 [2024-04-16 12:50:00.917640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:02.002 qpair failed and we were unable to recover it. 00:22:02.002 [2024-04-16 12:50:00.927434] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:02.002 [2024-04-16 12:50:00.927582] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:02.002 [2024-04-16 12:50:00.927610] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:02.002 [2024-04-16 12:50:00.927625] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:02.002 [2024-04-16 12:50:00.927637] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:22:02.002 [2024-04-16 12:50:00.927670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:02.002 qpair failed and we were unable to recover it. 
00:22:02.002 [2024-04-16 12:50:00.937472] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:02.002 [2024-04-16 12:50:00.937616] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:02.002 [2024-04-16 12:50:00.937643] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:02.002 [2024-04-16 12:50:00.937658] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:02.002 [2024-04-16 12:50:00.937670] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:22:02.002 [2024-04-16 12:50:00.937699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:02.002 qpair failed and we were unable to recover it. 00:22:02.002 [2024-04-16 12:50:00.947594] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:02.002 [2024-04-16 12:50:00.947717] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:02.002 [2024-04-16 12:50:00.947743] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:02.002 [2024-04-16 12:50:00.947759] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:02.002 [2024-04-16 12:50:00.947771] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:22:02.002 [2024-04-16 12:50:00.947799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:02.002 qpair failed and we were unable to recover it. 00:22:02.002 [2024-04-16 12:50:00.957522] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:02.002 [2024-04-16 12:50:00.957712] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:02.002 [2024-04-16 12:50:00.957737] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:02.002 [2024-04-16 12:50:00.957752] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:02.002 [2024-04-16 12:50:00.957764] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:22:02.002 [2024-04-16 12:50:00.957792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:02.002 qpair failed and we were unable to recover it. 
00:22:02.002 [2024-04-16 12:50:00.967624] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:02.002 [2024-04-16 12:50:00.967754] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:02.002 [2024-04-16 12:50:00.967780] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:02.002 [2024-04-16 12:50:00.967794] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:02.002 [2024-04-16 12:50:00.967807] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:22:02.002 [2024-04-16 12:50:00.967835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:02.002 qpair failed and we were unable to recover it. 00:22:02.002 [2024-04-16 12:50:00.977583] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:02.002 [2024-04-16 12:50:00.977736] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:02.002 [2024-04-16 12:50:00.977767] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:02.002 [2024-04-16 12:50:00.977783] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:02.002 [2024-04-16 12:50:00.977795] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:22:02.002 [2024-04-16 12:50:00.977822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:02.002 qpair failed and we were unable to recover it. 00:22:02.002 [2024-04-16 12:50:00.987619] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:02.003 [2024-04-16 12:50:00.987744] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:02.003 [2024-04-16 12:50:00.987770] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:02.003 [2024-04-16 12:50:00.987785] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:02.003 [2024-04-16 12:50:00.987797] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:22:02.003 [2024-04-16 12:50:00.987835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:02.003 qpair failed and we were unable to recover it. 
00:22:02.784 [2024-04-16 12:50:01.629420] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:02.784 [2024-04-16 12:50:01.629555] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:02.784 [2024-04-16 12:50:01.629587] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:02.784 [2024-04-16 12:50:01.629603] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:02.784 [2024-04-16 12:50:01.629615] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:22:02.784 [2024-04-16 12:50:01.629642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:02.784 qpair failed and we were unable to recover it. 00:22:02.785 [2024-04-16 12:50:01.639485] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:02.785 [2024-04-16 12:50:01.639635] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:02.785 [2024-04-16 12:50:01.639661] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:02.785 [2024-04-16 12:50:01.639676] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:02.785 [2024-04-16 12:50:01.639688] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:22:02.785 [2024-04-16 12:50:01.639716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:02.785 qpair failed and we were unable to recover it. 00:22:02.785 [2024-04-16 12:50:01.649442] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:02.785 [2024-04-16 12:50:01.649574] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:02.785 [2024-04-16 12:50:01.649607] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:02.785 [2024-04-16 12:50:01.649623] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:02.785 [2024-04-16 12:50:01.649635] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:22:02.785 [2024-04-16 12:50:01.649663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:02.785 qpair failed and we were unable to recover it. 
00:22:02.785 [2024-04-16 12:50:01.659459] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:02.785 [2024-04-16 12:50:01.659685] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:02.785 [2024-04-16 12:50:01.659711] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:02.785 [2024-04-16 12:50:01.659726] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:02.785 [2024-04-16 12:50:01.659738] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:22:02.785 [2024-04-16 12:50:01.659767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:02.785 qpair failed and we were unable to recover it. 00:22:02.785 [2024-04-16 12:50:01.669517] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:02.785 [2024-04-16 12:50:01.669659] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:02.785 [2024-04-16 12:50:01.669685] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:02.785 [2024-04-16 12:50:01.669700] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:02.785 [2024-04-16 12:50:01.669713] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:22:02.785 [2024-04-16 12:50:01.669741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:02.785 qpair failed and we were unable to recover it. 00:22:02.785 [2024-04-16 12:50:01.679592] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:02.785 [2024-04-16 12:50:01.679714] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:02.785 [2024-04-16 12:50:01.679739] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:02.785 [2024-04-16 12:50:01.679754] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:02.785 [2024-04-16 12:50:01.679766] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:22:02.785 [2024-04-16 12:50:01.679794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:02.785 qpair failed and we were unable to recover it. 
00:22:02.785 [2024-04-16 12:50:01.689555] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:02.785 [2024-04-16 12:50:01.689701] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:02.785 [2024-04-16 12:50:01.689726] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:02.785 [2024-04-16 12:50:01.689741] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:02.785 [2024-04-16 12:50:01.689753] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:22:02.785 [2024-04-16 12:50:01.689781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:02.785 qpair failed and we were unable to recover it. 00:22:02.785 [2024-04-16 12:50:01.699663] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:02.785 [2024-04-16 12:50:01.699792] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:02.785 [2024-04-16 12:50:01.699818] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:02.785 [2024-04-16 12:50:01.699833] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:02.785 [2024-04-16 12:50:01.699845] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:22:02.785 [2024-04-16 12:50:01.699873] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:02.785 qpair failed and we were unable to recover it. 00:22:02.785 [2024-04-16 12:50:01.709650] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:02.785 [2024-04-16 12:50:01.709775] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:02.785 [2024-04-16 12:50:01.709801] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:02.785 [2024-04-16 12:50:01.709816] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:02.785 [2024-04-16 12:50:01.709828] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:22:02.785 [2024-04-16 12:50:01.709856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:02.785 qpair failed and we were unable to recover it. 
00:22:02.785 [2024-04-16 12:50:01.719697] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:02.785 [2024-04-16 12:50:01.719834] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:02.785 [2024-04-16 12:50:01.719859] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:02.785 [2024-04-16 12:50:01.719874] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:02.785 [2024-04-16 12:50:01.719886] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:22:02.785 [2024-04-16 12:50:01.719914] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:02.785 qpair failed and we were unable to recover it. 00:22:02.785 [2024-04-16 12:50:01.729716] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:02.785 [2024-04-16 12:50:01.729839] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:02.785 [2024-04-16 12:50:01.729865] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:02.785 [2024-04-16 12:50:01.729880] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:02.785 [2024-04-16 12:50:01.729892] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:22:02.785 [2024-04-16 12:50:01.729920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:02.785 qpair failed and we were unable to recover it. 00:22:02.785 [2024-04-16 12:50:01.739746] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:02.785 [2024-04-16 12:50:01.739870] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:02.785 [2024-04-16 12:50:01.739901] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:02.785 [2024-04-16 12:50:01.739917] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:02.785 [2024-04-16 12:50:01.739929] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:22:02.785 [2024-04-16 12:50:01.739957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:02.785 qpair failed and we were unable to recover it. 
00:22:02.785 [2024-04-16 12:50:01.749768] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:02.785 [2024-04-16 12:50:01.749930] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:02.785 [2024-04-16 12:50:01.749956] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:02.785 [2024-04-16 12:50:01.749971] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:02.785 [2024-04-16 12:50:01.749983] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:22:02.785 [2024-04-16 12:50:01.750011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:02.785 qpair failed and we were unable to recover it. 00:22:02.785 [2024-04-16 12:50:01.759877] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:02.785 [2024-04-16 12:50:01.760037] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:02.785 [2024-04-16 12:50:01.760062] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:02.785 [2024-04-16 12:50:01.760077] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:02.786 [2024-04-16 12:50:01.760089] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:22:02.786 [2024-04-16 12:50:01.760118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:02.786 qpair failed and we were unable to recover it. 00:22:02.786 [2024-04-16 12:50:01.769826] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:02.786 [2024-04-16 12:50:01.769952] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:02.786 [2024-04-16 12:50:01.769978] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:02.786 [2024-04-16 12:50:01.769993] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:02.786 [2024-04-16 12:50:01.770004] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:22:02.786 [2024-04-16 12:50:01.770032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:02.786 qpair failed and we were unable to recover it. 
00:22:02.786 [2024-04-16 12:50:01.779881] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:02.786 [2024-04-16 12:50:01.780018] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:02.786 [2024-04-16 12:50:01.780044] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:02.786 [2024-04-16 12:50:01.780058] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:02.786 [2024-04-16 12:50:01.780070] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:22:02.786 [2024-04-16 12:50:01.780104] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:02.786 qpair failed and we were unable to recover it. 00:22:02.786 [2024-04-16 12:50:01.789913] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:02.786 [2024-04-16 12:50:01.790048] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:02.786 [2024-04-16 12:50:01.790073] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:02.786 [2024-04-16 12:50:01.790088] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:02.786 [2024-04-16 12:50:01.790100] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:22:02.786 [2024-04-16 12:50:01.790128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:02.786 qpair failed and we were unable to recover it. 00:22:02.786 [2024-04-16 12:50:01.799925] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:02.786 [2024-04-16 12:50:01.800073] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:02.786 [2024-04-16 12:50:01.800099] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:02.786 [2024-04-16 12:50:01.800114] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:02.786 [2024-04-16 12:50:01.800126] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:22:02.786 [2024-04-16 12:50:01.800154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:02.786 qpair failed and we were unable to recover it. 
00:22:02.786 [2024-04-16 12:50:01.809989] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:02.786 [2024-04-16 12:50:01.810124] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:02.786 [2024-04-16 12:50:01.810150] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:02.786 [2024-04-16 12:50:01.810165] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:02.786 [2024-04-16 12:50:01.810176] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:22:02.786 [2024-04-16 12:50:01.810203] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:02.786 qpair failed and we were unable to recover it. 00:22:02.786 [2024-04-16 12:50:01.819960] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:02.786 [2024-04-16 12:50:01.820110] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:02.786 [2024-04-16 12:50:01.820135] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:02.786 [2024-04-16 12:50:01.820150] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:02.786 [2024-04-16 12:50:01.820163] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:22:02.786 [2024-04-16 12:50:01.820190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:02.786 qpair failed and we were unable to recover it. 00:22:02.786 [2024-04-16 12:50:01.829974] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:02.786 [2024-04-16 12:50:01.830115] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:02.786 [2024-04-16 12:50:01.830146] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:02.786 [2024-04-16 12:50:01.830162] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:02.786 [2024-04-16 12:50:01.830174] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:22:02.786 [2024-04-16 12:50:01.830202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:02.786 qpair failed and we were unable to recover it. 
00:22:02.786 [2024-04-16 12:50:01.840117] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:02.786 [2024-04-16 12:50:01.840258] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:02.786 [2024-04-16 12:50:01.840284] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:02.786 [2024-04-16 12:50:01.840298] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:02.786 [2024-04-16 12:50:01.840310] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:22:02.786 [2024-04-16 12:50:01.840339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:02.786 qpair failed and we were unable to recover it. 00:22:02.786 [2024-04-16 12:50:01.850088] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:02.786 [2024-04-16 12:50:01.850209] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:02.786 [2024-04-16 12:50:01.850235] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:02.786 [2024-04-16 12:50:01.850249] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:02.786 [2024-04-16 12:50:01.850262] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:22:02.786 [2024-04-16 12:50:01.850290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:02.786 qpair failed and we were unable to recover it. 00:22:03.046 [2024-04-16 12:50:01.860066] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:03.046 [2024-04-16 12:50:01.860206] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:03.046 [2024-04-16 12:50:01.860233] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:03.046 [2024-04-16 12:50:01.860248] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:03.046 [2024-04-16 12:50:01.860260] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:22:03.046 [2024-04-16 12:50:01.860289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:03.046 qpair failed and we were unable to recover it. 
00:22:03.046 [2024-04-16 12:50:01.870100] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:03.046 [2024-04-16 12:50:01.870273] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:03.046 [2024-04-16 12:50:01.870299] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:03.046 [2024-04-16 12:50:01.870314] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:03.046 [2024-04-16 12:50:01.870326] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:22:03.046 [2024-04-16 12:50:01.870360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:03.046 qpair failed and we were unable to recover it. 00:22:03.046 [2024-04-16 12:50:01.880144] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:03.046 [2024-04-16 12:50:01.880291] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:03.046 [2024-04-16 12:50:01.880317] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:03.046 [2024-04-16 12:50:01.880332] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:03.046 [2024-04-16 12:50:01.880344] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:22:03.046 [2024-04-16 12:50:01.880372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:03.046 qpair failed and we were unable to recover it. 00:22:03.046 [2024-04-16 12:50:01.890208] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:03.046 [2024-04-16 12:50:01.890391] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:03.046 [2024-04-16 12:50:01.890417] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:03.046 [2024-04-16 12:50:01.890431] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:03.046 [2024-04-16 12:50:01.890444] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:22:03.046 [2024-04-16 12:50:01.890471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:03.046 qpair failed and we were unable to recover it. 
00:22:03.046 [2024-04-16 12:50:01.900196] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:03.046 [2024-04-16 12:50:01.900335] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:03.046 [2024-04-16 12:50:01.900361] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:03.046 [2024-04-16 12:50:01.900375] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:03.046 [2024-04-16 12:50:01.900387] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ff5050 00:22:03.046 [2024-04-16 12:50:01.900415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:03.046 qpair failed and we were unable to recover it. 00:22:03.046 [2024-04-16 12:50:01.900673] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002b60 is same with the state(5) to be set 00:22:03.046 [2024-04-16 12:50:01.910214] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:03.046 [2024-04-16 12:50:01.910349] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:03.046 [2024-04-16 12:50:01.910381] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:03.046 [2024-04-16 12:50:01.910398] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:03.046 [2024-04-16 12:50:01.910411] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7bb0000b90 00:22:03.046 [2024-04-16 12:50:01.910443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:03.046 qpair failed and we were unable to recover it. 00:22:03.046 [2024-04-16 12:50:01.920256] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:03.046 [2024-04-16 12:50:01.920383] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:03.046 [2024-04-16 12:50:01.920411] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:03.046 [2024-04-16 12:50:01.920426] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:03.046 [2024-04-16 12:50:01.920438] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7bb0000b90 00:22:03.046 [2024-04-16 12:50:01.920468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:03.046 qpair failed and we were unable to recover it. 
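[editor's note] The tail of the storm changes shape here: one qpair (0x2002b60) dies with a recv-state error and, a little further down, a bad file descriptor on flush, while the remaining rejections (here and below) hit tqpair addresses of the form 0x7f7b..000b90 on qpair ids 1, 2 and 4 — presumably the per-core I/O queues of the four-worker initiator whose startup is logged below. On the host side this whole pattern surfaces through SPDK's public API as spdk_nvme_qpair_process_completions() returning -ENXIO. A minimal poll-and-reconnect sketch (assumes a qpair already allocated with spdk_nvme_ctrlr_alloc_io_qpair(); this is not the test's actual code):

    #include <errno.h>
    #include "spdk/nvme.h"

    /* Poll an I/O qpair; on a transport drop (-ENXIO, the "-6" in the log),
     * try to reconnect it, which issues a fresh fabrics CONNECT. */
    static void poll_io_qpair(struct spdk_nvme_qpair *qpair)
    {
        for (;;) {
            /* max_completions == 0 means reap without limit. */
            int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0);

            if (rc == -ENXIO) {
                if (spdk_nvme_ctrlr_reconnect_io_qpair(qpair) != 0) {
                    return; /* reconnect failed; give up on this qpair */
                }
            } else if (rc < 0) {
                return; /* some other fatal error */
            }
        }
    }

In this run the reconnect can never succeed, because the target still has no controller with the requested CNTLID until the test re-creates it — hence the repeated "qpair failed and we were unable to recover it".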
00:22:03.046 [2024-04-16 12:50:01.930261] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:03.046 [2024-04-16 12:50:01.930389] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:03.046 [2024-04-16 12:50:01.930421] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:03.046 [2024-04-16 12:50:01.930438] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:03.046 [2024-04-16 12:50:01.930451] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:22:03.046 [2024-04-16 12:50:01.930482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:03.046 qpair failed and we were unable to recover it. 00:22:03.046 [2024-04-16 12:50:01.940299] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:03.046 [2024-04-16 12:50:01.940467] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:03.046 [2024-04-16 12:50:01.940494] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:03.046 [2024-04-16 12:50:01.940510] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:03.046 [2024-04-16 12:50:01.940523] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:22:03.046 [2024-04-16 12:50:01.940553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:03.046 qpair failed and we were unable to recover it. 00:22:03.046 [2024-04-16 12:50:01.950311] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:03.046 [2024-04-16 12:50:01.950437] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:03.047 [2024-04-16 12:50:01.950470] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:03.047 [2024-04-16 12:50:01.950487] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:03.047 [2024-04-16 12:50:01.950500] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7bb8000b90 00:22:03.047 [2024-04-16 12:50:01.950532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:22:03.047 qpair failed and we were unable to recover it. 
00:22:03.047 [2024-04-16 12:50:01.960347] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:03.047 [2024-04-16 12:50:01.960485] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:03.047 [2024-04-16 12:50:01.960513] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:03.047 [2024-04-16 12:50:01.960535] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:03.047 [2024-04-16 12:50:01.960549] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7bb8000b90 00:22:03.047 [2024-04-16 12:50:01.960587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:22:03.047 qpair failed and we were unable to recover it. 00:22:03.047 [2024-04-16 12:50:01.960812] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002b60 (9): Bad file descriptor 00:22:03.047 Initializing NVMe Controllers 00:22:03.047 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:03.047 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:03.047 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:22:03.047 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:22:03.047 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:22:03.047 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:22:03.047 Initialization complete. Launching workers. 
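[editor's note] For the target side of the exchange: every group in the storm began with ctrlr.c's "Unknown controller ID 0x1". An I/O-queue CONNECT must carry the CNTLID handed out by the earlier admin-queue CONNECT, and the target looks that ID up among the subsystem's live controllers; this disconnect test destroys the controller on purpose, so the lookup misses and the CONNECT is rejected. A simplified, self-contained sketch of that check (all type and helper names here are hypothetical stand-ins, not SPDK's internal API):

    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical, minimal stand-ins for the target's real structures. */
    struct ctrlr     { uint16_t cntlid; struct ctrlr *next; };
    struct subsystem { struct ctrlr *ctrlrs; };
    struct status    { uint8_t sct, sc; };

    static struct ctrlr *subsystem_find_ctrlr(struct subsystem *s, uint16_t cntlid)
    {
        for (struct ctrlr *c = s->ctrlrs; c != NULL; c = c->next) {
            if (c->cntlid == cntlid) {
                return c;
            }
        }
        return NULL;
    }

    /* Returns 0 and would bind the qpair on success; fills the CONNECT
     * status and returns -1 when the controller ID is unknown (the case
     * logged above as "Unknown controller ID 0x1"). */
    static int add_io_qpair(struct subsystem *s, uint16_t cntlid, struct status *st)
    {
        if (subsystem_find_ctrlr(s, cntlid) == NULL) {
            st->sct = 1;    /* command specific */
            st->sc  = 0x82; /* Connect Invalid Parameters -> "sct 1, sc 130" */
            return -1;
        }
        st->sct = 0;
        st->sc  = 0;
        return 0;
    }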
00:22:03.047 Starting thread on core 1 00:22:03.047 Starting thread on core 2 00:22:03.047 Starting thread on core 3 00:22:03.047 Starting thread on core 0 00:22:03.047 12:50:01 -- host/target_disconnect.sh@59 -- # sync 00:22:03.047 00:22:03.047 real 0m11.554s 00:22:03.047 user 0m20.793s 00:22:03.047 sys 0m5.695s 00:22:03.047 12:50:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:03.047 12:50:01 -- common/autotest_common.sh@10 -- # set +x 00:22:03.047 ************************************ 00:22:03.047 END TEST nvmf_target_disconnect_tc2 00:22:03.047 ************************************ 00:22:03.047 12:50:01 -- host/target_disconnect.sh@80 -- # '[' -n '' ']' 00:22:03.047 12:50:01 -- host/target_disconnect.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:22:03.047 12:50:01 -- host/target_disconnect.sh@85 -- # nvmftestfini 00:22:03.047 12:50:01 -- nvmf/common.sh@477 -- # nvmfcleanup 00:22:03.047 12:50:01 -- nvmf/common.sh@117 -- # sync 00:22:03.047 12:50:01 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:03.047 12:50:01 -- nvmf/common.sh@120 -- # set +e 00:22:03.047 12:50:01 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:03.047 12:50:01 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:03.047 rmmod nvme_tcp 00:22:03.047 rmmod nvme_fabrics 00:22:03.047 rmmod nvme_keyring 00:22:03.047 12:50:02 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:03.047 12:50:02 -- nvmf/common.sh@124 -- # set -e 00:22:03.047 12:50:02 -- nvmf/common.sh@125 -- # return 0 00:22:03.047 12:50:02 -- nvmf/common.sh@478 -- # '[' -n 1270391 ']' 00:22:03.047 12:50:02 -- nvmf/common.sh@479 -- # killprocess 1270391 00:22:03.047 12:50:02 -- common/autotest_common.sh@936 -- # '[' -z 1270391 ']' 00:22:03.047 12:50:02 -- common/autotest_common.sh@940 -- # kill -0 1270391 00:22:03.047 12:50:02 -- common/autotest_common.sh@941 -- # uname 00:22:03.047 12:50:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:03.047 12:50:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1270391 00:22:03.047 12:50:02 -- common/autotest_common.sh@942 -- # process_name=reactor_4 00:22:03.047 12:50:02 -- common/autotest_common.sh@946 -- # '[' reactor_4 = sudo ']' 00:22:03.047 12:50:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1270391' 00:22:03.047 killing process with pid 1270391 00:22:03.047 12:50:02 -- common/autotest_common.sh@955 -- # kill 1270391 00:22:03.047 12:50:02 -- common/autotest_common.sh@960 -- # wait 1270391 00:22:03.613 12:50:02 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:22:03.613 12:50:02 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:22:03.613 12:50:02 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:22:03.613 12:50:02 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:03.613 12:50:02 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:03.613 12:50:02 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:03.613 12:50:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:03.613 12:50:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:05.515 12:50:04 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:05.515 00:22:05.515 real 0m16.990s 00:22:05.515 user 0m47.538s 00:22:05.515 sys 0m7.989s 00:22:05.515 12:50:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:05.515 12:50:04 -- common/autotest_common.sh@10 -- # set +x 00:22:05.515 ************************************ 00:22:05.515 END TEST nvmf_target_disconnect 00:22:05.515 
************************************ 00:22:05.515 12:50:04 -- nvmf/nvmf.sh@123 -- # timing_exit host 00:22:05.515 12:50:04 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:05.515 12:50:04 -- common/autotest_common.sh@10 -- # set +x 00:22:05.515 12:50:04 -- nvmf/nvmf.sh@125 -- # trap - SIGINT SIGTERM EXIT 00:22:05.515 00:22:05.515 real 16m16.714s 00:22:05.515 user 37m11.440s 00:22:05.515 sys 4m39.394s 00:22:05.515 12:50:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:05.515 12:50:04 -- common/autotest_common.sh@10 -- # set +x 00:22:05.515 ************************************ 00:22:05.515 END TEST nvmf_tcp 00:22:05.515 ************************************ 00:22:05.515 12:50:04 -- spdk/autotest.sh@286 -- # [[ 0 -eq 0 ]] 00:22:05.515 12:50:04 -- spdk/autotest.sh@287 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:22:05.515 12:50:04 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:05.515 12:50:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:05.515 12:50:04 -- common/autotest_common.sh@10 -- # set +x 00:22:05.773 ************************************ 00:22:05.773 START TEST spdkcli_nvmf_tcp 00:22:05.773 ************************************ 00:22:05.773 12:50:04 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:22:05.773 * Looking for test storage... 00:22:05.773 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:22:05.773 12:50:04 -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:22:05.773 12:50:04 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:22:05.773 12:50:04 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:22:05.773 12:50:04 -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:05.773 12:50:04 -- nvmf/common.sh@7 -- # uname -s 00:22:05.773 12:50:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:05.773 12:50:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:05.773 12:50:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:05.773 12:50:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:05.773 12:50:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:05.773 12:50:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:05.773 12:50:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:05.773 12:50:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:05.773 12:50:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:05.773 12:50:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:05.773 12:50:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:22:05.773 12:50:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:22:05.773 12:50:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:05.773 12:50:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:05.773 12:50:04 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:05.773 12:50:04 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:05.773 12:50:04 -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:05.774 12:50:04 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:05.774 12:50:04 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:05.774 12:50:04 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:05.774 12:50:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:05.774 12:50:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:05.774 12:50:04 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:05.774 12:50:04 -- paths/export.sh@5 -- # export PATH 00:22:05.774 12:50:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:05.774 12:50:04 -- nvmf/common.sh@47 -- # : 0 00:22:05.774 12:50:04 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:05.774 12:50:04 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:05.774 12:50:04 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:05.774 12:50:04 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:05.774 12:50:04 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:05.774 12:50:04 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:05.774 12:50:04 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:05.774 12:50:04 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:05.774 12:50:04 -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:22:05.774 12:50:04 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:22:05.774 12:50:04 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:22:05.774 12:50:04 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:22:05.774 12:50:04 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:05.774 12:50:04 -- common/autotest_common.sh@10 -- # set +x 00:22:05.774 12:50:04 -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:22:05.774 12:50:04 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1271611 00:22:05.774 12:50:04 -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:22:05.774 12:50:04 -- spdkcli/common.sh@34 -- # 
waitforlisten 1271611 00:22:05.774 12:50:04 -- common/autotest_common.sh@817 -- # '[' -z 1271611 ']' 00:22:05.774 12:50:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:05.774 12:50:04 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:05.774 12:50:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:05.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:05.774 12:50:04 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:05.774 12:50:04 -- common/autotest_common.sh@10 -- # set +x 00:22:05.774 [2024-04-16 12:50:04.711137] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 00:22:05.774 [2024-04-16 12:50:04.711217] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1271611 ] 00:22:05.774 EAL: No free 2048 kB hugepages reported on node 1 00:22:05.774 [2024-04-16 12:50:04.779598] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:06.032 [2024-04-16 12:50:04.888588] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:06.032 [2024-04-16 12:50:04.888597] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:06.032 12:50:04 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:06.032 12:50:04 -- common/autotest_common.sh@850 -- # return 0 00:22:06.032 12:50:04 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:22:06.032 12:50:04 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:06.032 12:50:04 -- common/autotest_common.sh@10 -- # set +x 00:22:06.032 12:50:05 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:22:06.032 12:50:05 -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:22:06.032 12:50:05 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:22:06.032 12:50:05 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:06.032 12:50:05 -- common/autotest_common.sh@10 -- # set +x 00:22:06.033 12:50:05 -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:22:06.033 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:22:06.033 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:22:06.033 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:22:06.033 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:22:06.033 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:22:06.033 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:22:06.033 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:22:06.033 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:22:06.033 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:22:06.033 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:22:06.033 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 
True 00:22:06.033 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:22:06.033 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:22:06.033 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:22:06.033 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:22:06.033 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:22:06.033 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:22:06.033 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:22:06.033 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:22:06.033 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:22:06.033 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:22:06.033 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:22:06.033 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:22:06.033 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:22:06.033 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:22:06.033 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:22:06.033 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:22:06.033 ' 00:22:06.599 [2024-04-16 12:50:05.382333] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:22:08.500 [2024-04-16 12:50:07.541396] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:09.871 [2024-04-16 12:50:08.765812] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:22:12.396 [2024-04-16 12:50:11.029018] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:22:14.326 [2024-04-16 12:50:12.967239] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:22:15.713 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:22:15.713 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:22:15.713 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:22:15.713 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:22:15.713 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:22:15.713 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:22:15.713 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:22:15.713 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW 
max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:22:15.713 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:22:15.713 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:22:15.713 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:22:15.713 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:22:15.713 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:22:15.713 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:22:15.713 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:22:15.713 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:22:15.713 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:22:15.713 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:22:15.713 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:22:15.713 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:22:15.713 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:22:15.713 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:22:15.713 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:22:15.714 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:22:15.714 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:22:15.714 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:22:15.714 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:22:15.714 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:22:15.714 12:50:14 -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:22:15.714 12:50:14 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:15.714 12:50:14 -- common/autotest_common.sh@10 -- # set +x 00:22:15.714 12:50:14 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:22:15.714 12:50:14 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:15.714 12:50:14 -- common/autotest_common.sh@10 -- # set +x 00:22:15.714 12:50:14 -- spdkcli/nvmf.sh@69 -- # check_match 00:22:15.714 12:50:14 -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:22:15.972 12:50:14 
-- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:22:15.972 12:50:15 -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:22:15.972 12:50:15 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:22:15.972 12:50:15 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:15.972 12:50:15 -- common/autotest_common.sh@10 -- # set +x 00:22:16.230 12:50:15 -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:22:16.230 12:50:15 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:16.230 12:50:15 -- common/autotest_common.sh@10 -- # set +x 00:22:16.230 12:50:15 -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:22:16.230 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:22:16.230 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:22:16.230 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:22:16.230 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:22:16.230 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:22:16.230 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:22:16.230 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:22:16.230 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:22:16.230 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:22:16.230 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:22:16.230 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:22:16.230 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:22:16.230 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:22:16.230 ' 00:22:21.494 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:22:21.494 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:22:21.494 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:22:21.494 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:22:21.494 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:22:21.494 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:22:21.494 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:22:21.494 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:22:21.494 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:22:21.494 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:22:21.494 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:22:21.494 Executing 
command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:22:21.494 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:22:21.495 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:22:21.495 12:50:20 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:22:21.495 12:50:20 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:21.495 12:50:20 -- common/autotest_common.sh@10 -- # set +x 00:22:21.495 12:50:20 -- spdkcli/nvmf.sh@90 -- # killprocess 1271611 00:22:21.495 12:50:20 -- common/autotest_common.sh@936 -- # '[' -z 1271611 ']' 00:22:21.495 12:50:20 -- common/autotest_common.sh@940 -- # kill -0 1271611 00:22:21.495 12:50:20 -- common/autotest_common.sh@941 -- # uname 00:22:21.495 12:50:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:21.495 12:50:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1271611 00:22:21.495 12:50:20 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:21.495 12:50:20 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:21.495 12:50:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1271611' 00:22:21.495 killing process with pid 1271611 00:22:21.495 12:50:20 -- common/autotest_common.sh@955 -- # kill 1271611 00:22:21.495 [2024-04-16 12:50:20.383662] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:22:21.495 12:50:20 -- common/autotest_common.sh@960 -- # wait 1271611 00:22:21.753 12:50:20 -- spdkcli/nvmf.sh@1 -- # cleanup 00:22:21.753 12:50:20 -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:22:21.753 12:50:20 -- spdkcli/common.sh@13 -- # '[' -n 1271611 ']' 00:22:21.753 12:50:20 -- spdkcli/common.sh@14 -- # killprocess 1271611 00:22:21.753 12:50:20 -- common/autotest_common.sh@936 -- # '[' -z 1271611 ']' 00:22:21.753 12:50:20 -- common/autotest_common.sh@940 -- # kill -0 1271611 00:22:21.753 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (1271611) - No such process 00:22:21.753 12:50:20 -- common/autotest_common.sh@963 -- # echo 'Process with pid 1271611 is not found' 00:22:21.753 Process with pid 1271611 is not found 00:22:21.753 12:50:20 -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:22:21.753 12:50:20 -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:22:21.753 12:50:20 -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:22:21.753 00:22:21.753 real 0m16.058s 00:22:21.753 user 0m33.808s 00:22:21.753 sys 0m0.839s 00:22:21.753 12:50:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:21.753 12:50:20 -- common/autotest_common.sh@10 -- # set +x 00:22:21.753 ************************************ 00:22:21.753 END TEST spdkcli_nvmf_tcp 00:22:21.753 ************************************ 00:22:21.753 12:50:20 -- spdk/autotest.sh@288 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:22:21.753 12:50:20 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:21.753 12:50:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:21.753 12:50:20 -- common/autotest_common.sh@10 -- # set +x 00:22:21.753 ************************************ 00:22:21.753 START TEST 
nvmf_identify_passthru 00:22:21.753 ************************************ 00:22:21.753 12:50:20 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:22:22.012 * Looking for test storage... 00:22:22.012 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:22.012 12:50:20 -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:22.012 12:50:20 -- nvmf/common.sh@7 -- # uname -s 00:22:22.012 12:50:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:22.012 12:50:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:22.012 12:50:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:22.012 12:50:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:22.012 12:50:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:22.012 12:50:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:22.012 12:50:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:22.012 12:50:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:22.012 12:50:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:22.012 12:50:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:22.012 12:50:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:22:22.012 12:50:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:22:22.012 12:50:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:22.012 12:50:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:22.012 12:50:20 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:22.012 12:50:20 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:22.012 12:50:20 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:22.012 12:50:20 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:22.012 12:50:20 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:22.012 12:50:20 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:22.012 12:50:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.012 12:50:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.012 12:50:20 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.012 12:50:20 -- paths/export.sh@5 -- # export PATH 00:22:22.012 12:50:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.012 12:50:20 -- nvmf/common.sh@47 -- # : 0 00:22:22.012 12:50:20 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:22.012 12:50:20 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:22.012 12:50:20 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:22.012 12:50:20 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:22.012 12:50:20 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:22.012 12:50:20 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:22.012 12:50:20 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:22.012 12:50:20 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:22.012 12:50:20 -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:22.012 12:50:20 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:22.012 12:50:20 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:22.012 12:50:20 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:22.012 12:50:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.012 12:50:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.012 12:50:20 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.012 12:50:20 -- paths/export.sh@5 -- # export PATH 00:22:22.012 12:50:20 -- 
paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.012 12:50:20 -- target/identify_passthru.sh@12 -- # nvmftestinit 00:22:22.012 12:50:20 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:22:22.012 12:50:20 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:22.012 12:50:20 -- nvmf/common.sh@437 -- # prepare_net_devs 00:22:22.012 12:50:20 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:22:22.012 12:50:20 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:22:22.012 12:50:20 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:22.012 12:50:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:22:22.012 12:50:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:22.012 12:50:20 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:22:22.012 12:50:20 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:22:22.012 12:50:20 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:22.013 12:50:20 -- common/autotest_common.sh@10 -- # set +x 00:22:24.542 12:50:23 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:24.542 12:50:23 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:24.542 12:50:23 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:24.542 12:50:23 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:24.542 12:50:23 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:24.542 12:50:23 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:24.542 12:50:23 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:24.542 12:50:23 -- nvmf/common.sh@295 -- # net_devs=() 00:22:24.542 12:50:23 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:24.542 12:50:23 -- nvmf/common.sh@296 -- # e810=() 00:22:24.542 12:50:23 -- nvmf/common.sh@296 -- # local -ga e810 00:22:24.542 12:50:23 -- nvmf/common.sh@297 -- # x722=() 00:22:24.542 12:50:23 -- nvmf/common.sh@297 -- # local -ga x722 00:22:24.542 12:50:23 -- nvmf/common.sh@298 -- # mlx=() 00:22:24.542 12:50:23 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:24.542 12:50:23 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:24.542 12:50:23 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:24.542 12:50:23 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:24.542 12:50:23 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:24.542 12:50:23 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:24.542 12:50:23 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:24.542 12:50:23 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:24.542 12:50:23 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:24.542 12:50:23 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:24.542 12:50:23 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:24.542 12:50:23 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:24.542 12:50:23 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:24.542 12:50:23 -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:24.542 12:50:23 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:24.542 12:50:23 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:24.542 12:50:23 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:24.542 12:50:23 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:24.543 12:50:23 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:24.543 12:50:23 -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:22:24.543 Found 0000:82:00.0 (0x8086 - 0x159b) 00:22:24.543 12:50:23 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:24.543 12:50:23 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:24.543 12:50:23 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:24.543 12:50:23 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:24.543 12:50:23 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:24.543 12:50:23 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:24.543 12:50:23 -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:22:24.543 Found 0000:82:00.1 (0x8086 - 0x159b) 00:22:24.543 12:50:23 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:24.543 12:50:23 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:24.543 12:50:23 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:24.543 12:50:23 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:24.543 12:50:23 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:24.543 12:50:23 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:24.543 12:50:23 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:24.543 12:50:23 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:24.543 12:50:23 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:24.543 12:50:23 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:24.543 12:50:23 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:24.543 12:50:23 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:24.543 12:50:23 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:22:24.543 Found net devices under 0000:82:00.0: cvl_0_0 00:22:24.543 12:50:23 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:24.543 12:50:23 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:24.543 12:50:23 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:24.543 12:50:23 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:24.543 12:50:23 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:24.543 12:50:23 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:22:24.543 Found net devices under 0000:82:00.1: cvl_0_1 00:22:24.543 12:50:23 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:24.543 12:50:23 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:22:24.543 12:50:23 -- nvmf/common.sh@403 -- # is_hw=yes 00:22:24.543 12:50:23 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:22:24.543 12:50:23 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:22:24.543 12:50:23 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:22:24.543 12:50:23 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:24.543 12:50:23 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:24.543 12:50:23 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:24.543 12:50:23 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:24.543 12:50:23 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:24.543 12:50:23 -- nvmf/common.sh@237 
-- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:24.543 12:50:23 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:24.543 12:50:23 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:24.543 12:50:23 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:24.543 12:50:23 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:24.543 12:50:23 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:24.543 12:50:23 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:24.543 12:50:23 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:24.543 12:50:23 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:24.543 12:50:23 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:24.543 12:50:23 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:24.543 12:50:23 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:24.543 12:50:23 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:24.543 12:50:23 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:24.543 12:50:23 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:24.543 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:24.543 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.144 ms 00:22:24.543 00:22:24.543 --- 10.0.0.2 ping statistics --- 00:22:24.543 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:24.543 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:22:24.543 12:50:23 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:24.543 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:24.543 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.062 ms 00:22:24.543 00:22:24.543 --- 10.0.0.1 ping statistics --- 00:22:24.543 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:24.543 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:22:24.543 12:50:23 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:24.543 12:50:23 -- nvmf/common.sh@411 -- # return 0 00:22:24.543 12:50:23 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:22:24.543 12:50:23 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:24.543 12:50:23 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:22:24.543 12:50:23 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:22:24.543 12:50:23 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:24.543 12:50:23 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:22:24.543 12:50:23 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:22:24.543 12:50:23 -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:22:24.543 12:50:23 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:24.543 12:50:23 -- common/autotest_common.sh@10 -- # set +x 00:22:24.543 12:50:23 -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:22:24.543 12:50:23 -- common/autotest_common.sh@1510 -- # bdfs=() 00:22:24.543 12:50:23 -- common/autotest_common.sh@1510 -- # local bdfs 00:22:24.543 12:50:23 -- common/autotest_common.sh@1511 -- # bdfs=($(get_nvme_bdfs)) 00:22:24.543 12:50:23 -- common/autotest_common.sh@1511 -- # get_nvme_bdfs 00:22:24.543 12:50:23 -- common/autotest_common.sh@1499 -- # bdfs=() 00:22:24.543 12:50:23 -- common/autotest_common.sh@1499 -- # local bdfs 00:22:24.543 12:50:23 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r 
'.config[].params.traddr')) 00:22:24.543 12:50:23 -- common/autotest_common.sh@1500 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:22:24.543 12:50:23 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:22:24.543 12:50:23 -- common/autotest_common.sh@1501 -- # (( 1 == 0 )) 00:22:24.543 12:50:23 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:81:00.0 00:22:24.543 12:50:23 -- common/autotest_common.sh@1513 -- # echo 0000:81:00.0 00:22:24.543 12:50:23 -- target/identify_passthru.sh@16 -- # bdf=0000:81:00.0 00:22:24.543 12:50:23 -- target/identify_passthru.sh@17 -- # '[' -z 0000:81:00.0 ']' 00:22:24.543 12:50:23 -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:81:00.0' -i 0 00:22:24.543 12:50:23 -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:22:24.543 12:50:23 -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:22:24.801 EAL: No free 2048 kB hugepages reported on node 1 00:22:30.065 12:50:28 -- target/identify_passthru.sh@23 -- # nvme_serial_number=PHLJ951302VM2P0BGN 00:22:30.065 12:50:28 -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:81:00.0' -i 0 00:22:30.065 12:50:28 -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:22:30.065 12:50:28 -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:22:30.065 EAL: No free 2048 kB hugepages reported on node 1 00:22:35.329 12:50:33 -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:22:35.329 12:50:33 -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:22:35.329 12:50:33 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:35.329 12:50:33 -- common/autotest_common.sh@10 -- # set +x 00:22:35.329 12:50:33 -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:22:35.329 12:50:33 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:35.329 12:50:33 -- common/autotest_common.sh@10 -- # set +x 00:22:35.329 12:50:33 -- target/identify_passthru.sh@31 -- # nvmfpid=1276725 00:22:35.329 12:50:33 -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:22:35.329 12:50:33 -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:35.329 12:50:33 -- target/identify_passthru.sh@35 -- # waitforlisten 1276725 00:22:35.329 12:50:33 -- common/autotest_common.sh@817 -- # '[' -z 1276725 ']' 00:22:35.329 12:50:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:35.329 12:50:33 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:35.329 12:50:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:35.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:35.329 12:50:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:35.329 12:50:33 -- common/autotest_common.sh@10 -- # set +x 00:22:35.329 [2024-04-16 12:50:33.805684] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 
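The BDF-discovery step traced above boils down to a short shell pattern: enumerate the local NVMe controllers from gen_nvme.sh's JSON output, keep the first PCI address, and scrape the serial and model numbers out of spdk_nvme_identify. A minimal sketch of that flow, assuming it runs from the SPDK repository root (the grep/awk field positions and the example address 0000:81:00.0 come from the log above):

    #!/usr/bin/env bash
    # Collect NVMe controller PCI addresses from gen_nvme.sh's JSON config
    bdfs=($(scripts/gen_nvme.sh | jq -r '.config[].params.traddr'))
    bdf=${bdfs[0]}    # first controller, e.g. 0000:81:00.0

    # Scrape the serial and model numbers the same way the test does
    serial=$(build/bin/spdk_nvme_identify -r "trtype:PCIe traddr:$bdf" -i 0 |
             grep 'Serial Number:' | awk '{print $3}')
    model=$(build/bin/spdk_nvme_identify -r "trtype:PCIe traddr:$bdf" -i 0 |
            grep 'Model Number:' | awk '{print $3}')
    echo "$bdf serial=$serial model=$model"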
00:22:35.329 [2024-04-16 12:50:33.805776] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:35.329 EAL: No free 2048 kB hugepages reported on node 1 00:22:35.329 [2024-04-16 12:50:33.884954] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:35.329 [2024-04-16 12:50:33.993297] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:35.329 [2024-04-16 12:50:33.993359] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:35.329 [2024-04-16 12:50:33.993388] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:35.329 [2024-04-16 12:50:33.993401] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:35.329 [2024-04-16 12:50:33.993412] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:35.329 [2024-04-16 12:50:33.993481] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:35.329 [2024-04-16 12:50:33.993555] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:35.329 [2024-04-16 12:50:33.993951] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:35.329 [2024-04-16 12:50:33.993954] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:35.896 12:50:34 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:35.896 12:50:34 -- common/autotest_common.sh@850 -- # return 0 00:22:35.896 12:50:34 -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:22:35.896 12:50:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:35.896 12:50:34 -- common/autotest_common.sh@10 -- # set +x 00:22:35.896 INFO: Log level set to 20 00:22:35.896 INFO: Requests: 00:22:35.896 { 00:22:35.896 "jsonrpc": "2.0", 00:22:35.896 "method": "nvmf_set_config", 00:22:35.896 "id": 1, 00:22:35.896 "params": { 00:22:35.896 "admin_cmd_passthru": { 00:22:35.896 "identify_ctrlr": true 00:22:35.896 } 00:22:35.896 } 00:22:35.896 } 00:22:35.896 00:22:35.896 INFO: response: 00:22:35.896 { 00:22:35.896 "jsonrpc": "2.0", 00:22:35.896 "id": 1, 00:22:35.896 "result": true 00:22:35.897 } 00:22:35.897 00:22:35.897 12:50:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:35.897 12:50:34 -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:22:35.897 12:50:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:35.897 12:50:34 -- common/autotest_common.sh@10 -- # set +x 00:22:35.897 INFO: Setting log level to 20 00:22:35.897 INFO: Setting log level to 20 00:22:35.897 INFO: Log level set to 20 00:22:35.897 INFO: Log level set to 20 00:22:35.897 INFO: Requests: 00:22:35.897 { 00:22:35.897 "jsonrpc": "2.0", 00:22:35.897 "method": "framework_start_init", 00:22:35.897 "id": 1 00:22:35.897 } 00:22:35.897 00:22:35.897 INFO: Requests: 00:22:35.897 { 00:22:35.897 "jsonrpc": "2.0", 00:22:35.897 "method": "framework_start_init", 00:22:35.897 "id": 1 00:22:35.897 } 00:22:35.897 00:22:35.897 [2024-04-16 12:50:34.846770] nvmf_tgt.c: 453:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:22:35.897 INFO: response: 00:22:35.897 { 00:22:35.897 "jsonrpc": "2.0", 00:22:35.897 "id": 1, 00:22:35.897 "result": true 00:22:35.897 } 00:22:35.897 00:22:35.897 INFO: response: 00:22:35.897 { 00:22:35.897 
"jsonrpc": "2.0", 00:22:35.897 "id": 1, 00:22:35.897 "result": true 00:22:35.897 } 00:22:35.897 00:22:35.897 12:50:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:35.897 12:50:34 -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:35.897 12:50:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:35.897 12:50:34 -- common/autotest_common.sh@10 -- # set +x 00:22:35.897 INFO: Setting log level to 40 00:22:35.897 INFO: Setting log level to 40 00:22:35.897 INFO: Setting log level to 40 00:22:35.897 [2024-04-16 12:50:34.856617] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:35.897 12:50:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:35.897 12:50:34 -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:22:35.897 12:50:34 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:35.897 12:50:34 -- common/autotest_common.sh@10 -- # set +x 00:22:35.897 12:50:34 -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:81:00.0 00:22:35.897 12:50:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:35.897 12:50:34 -- common/autotest_common.sh@10 -- # set +x 00:22:39.176 Nvme0n1 00:22:39.176 12:50:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:39.176 12:50:37 -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:22:39.176 12:50:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:39.176 12:50:37 -- common/autotest_common.sh@10 -- # set +x 00:22:39.176 12:50:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:39.176 12:50:37 -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:22:39.176 12:50:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:39.176 12:50:37 -- common/autotest_common.sh@10 -- # set +x 00:22:39.176 12:50:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:39.176 12:50:37 -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:39.176 12:50:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:39.176 12:50:37 -- common/autotest_common.sh@10 -- # set +x 00:22:39.176 [2024-04-16 12:50:37.755126] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:39.176 12:50:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:39.176 12:50:37 -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:22:39.176 12:50:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:39.176 12:50:37 -- common/autotest_common.sh@10 -- # set +x 00:22:39.176 [2024-04-16 12:50:37.762826] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:22:39.176 [ 00:22:39.176 { 00:22:39.176 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:39.176 "subtype": "Discovery", 00:22:39.176 "listen_addresses": [], 00:22:39.176 "allow_any_host": true, 00:22:39.176 "hosts": [] 00:22:39.176 }, 00:22:39.176 { 00:22:39.176 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:39.176 "subtype": "NVMe", 00:22:39.176 "listen_addresses": [ 00:22:39.176 { 00:22:39.176 "transport": "TCP", 00:22:39.176 "trtype": "TCP", 00:22:39.176 "adrfam": "IPv4", 00:22:39.176 "traddr": "10.0.0.2", 00:22:39.176 "trsvcid": "4420" 00:22:39.176 } 00:22:39.176 ], 
00:22:39.176 "allow_any_host": true, 00:22:39.176 "hosts": [], 00:22:39.176 "serial_number": "SPDK00000000000001", 00:22:39.176 "model_number": "SPDK bdev Controller", 00:22:39.176 "max_namespaces": 1, 00:22:39.176 "min_cntlid": 1, 00:22:39.176 "max_cntlid": 65519, 00:22:39.176 "namespaces": [ 00:22:39.176 { 00:22:39.176 "nsid": 1, 00:22:39.176 "bdev_name": "Nvme0n1", 00:22:39.176 "name": "Nvme0n1", 00:22:39.176 "nguid": "489C94535EC04A1BB5021411BDCB9F35", 00:22:39.177 "uuid": "489c9453-5ec0-4a1b-b502-1411bdcb9f35" 00:22:39.177 } 00:22:39.177 ] 00:22:39.177 } 00:22:39.177 ] 00:22:39.177 12:50:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:39.177 12:50:37 -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:39.177 12:50:37 -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:22:39.177 12:50:37 -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:22:39.177 EAL: No free 2048 kB hugepages reported on node 1 00:22:39.177 12:50:37 -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ951302VM2P0BGN 00:22:39.177 12:50:37 -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:39.177 12:50:37 -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:22:39.177 12:50:37 -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:22:39.177 EAL: No free 2048 kB hugepages reported on node 1 00:22:39.177 12:50:38 -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:22:39.177 12:50:38 -- target/identify_passthru.sh@63 -- # '[' PHLJ951302VM2P0BGN '!=' PHLJ951302VM2P0BGN ']' 00:22:39.177 12:50:38 -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:22:39.177 12:50:38 -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:39.177 12:50:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:39.177 12:50:38 -- common/autotest_common.sh@10 -- # set +x 00:22:39.177 12:50:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:39.177 12:50:38 -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:22:39.177 12:50:38 -- target/identify_passthru.sh@77 -- # nvmftestfini 00:22:39.177 12:50:38 -- nvmf/common.sh@477 -- # nvmfcleanup 00:22:39.177 12:50:38 -- nvmf/common.sh@117 -- # sync 00:22:39.177 12:50:38 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:39.177 12:50:38 -- nvmf/common.sh@120 -- # set +e 00:22:39.177 12:50:38 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:39.177 12:50:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:39.177 rmmod nvme_tcp 00:22:39.177 rmmod nvme_fabrics 00:22:39.177 rmmod nvme_keyring 00:22:39.177 12:50:38 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:39.177 12:50:38 -- nvmf/common.sh@124 -- # set -e 00:22:39.177 12:50:38 -- nvmf/common.sh@125 -- # return 0 00:22:39.177 12:50:38 -- nvmf/common.sh@478 -- # '[' -n 1276725 ']' 00:22:39.177 12:50:38 -- nvmf/common.sh@479 -- # killprocess 1276725 00:22:39.177 12:50:38 -- common/autotest_common.sh@936 -- # '[' -z 1276725 ']' 00:22:39.177 12:50:38 -- common/autotest_common.sh@940 -- # kill -0 1276725 00:22:39.177 12:50:38 -- common/autotest_common.sh@941 -- # uname 00:22:39.177 12:50:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:39.177 
12:50:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1276725 00:22:39.177 12:50:38 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:39.177 12:50:38 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:39.177 12:50:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1276725' 00:22:39.177 killing process with pid 1276725 00:22:39.177 12:50:38 -- common/autotest_common.sh@955 -- # kill 1276725 00:22:39.177 [2024-04-16 12:50:38.196349] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:22:39.177 12:50:38 -- common/autotest_common.sh@960 -- # wait 1276725 00:22:41.705 12:50:40 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:22:41.705 12:50:40 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:22:41.705 12:50:40 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:22:41.705 12:50:40 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:41.705 12:50:40 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:41.705 12:50:40 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:41.705 12:50:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:22:41.705 12:50:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:43.609 12:50:42 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:43.609 00:22:43.609 real 0m21.901s 00:22:43.609 user 0m34.789s 00:22:43.609 sys 0m2.886s 00:22:43.609 12:50:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:43.609 12:50:42 -- common/autotest_common.sh@10 -- # set +x 00:22:43.609 ************************************ 00:22:43.609 END TEST nvmf_identify_passthru 00:22:43.609 ************************************ 00:22:43.868 12:50:42 -- spdk/autotest.sh@290 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:22:43.868 12:50:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:22:43.868 12:50:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:43.868 12:50:42 -- common/autotest_common.sh@10 -- # set +x 00:22:43.868 ************************************ 00:22:43.868 START TEST nvmf_dif 00:22:43.868 ************************************ 00:22:43.868 12:50:42 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:22:43.868 * Looking for test storage... 
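The pass/fail core of the identify-passthru test that just ended is a pair of string comparisons: the serial and model numbers reported through the NVMe/TCP subsystem (with --passthru-identify-ctrlr enabled on the target) must equal what the PCIe controller reports directly. A condensed sketch of that check, reusing only values visible in the log (10.0.0.2:4420, nqn.2016-06.io.spdk:cnode1, BDF 0000:81:00.0):

    # Identify the same controller twice - once over PCIe, once through the
    # NVMe/TCP passthru subsystem - and require identical serial numbers.
    pcie_serial=$(build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:81:00.0' |
                  grep 'Serial Number:' | awk '{print $3}')
    tcp_serial=$(build/bin/spdk_nvme_identify \
                 -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' |
                 grep 'Serial Number:' | awk '{print $3}')
    if [ "$pcie_serial" != "$tcp_serial" ]; then
        echo "passthru identify mismatch: $pcie_serial vs $tcp_serial" >&2
        exit 1
    fi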
00:22:43.868 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:43.868 12:50:42 -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:43.868 12:50:42 -- nvmf/common.sh@7 -- # uname -s 00:22:43.868 12:50:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:43.868 12:50:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:43.868 12:50:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:43.868 12:50:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:43.868 12:50:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:43.868 12:50:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:43.868 12:50:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:43.868 12:50:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:43.868 12:50:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:43.868 12:50:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:43.868 12:50:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:22:43.868 12:50:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:22:43.868 12:50:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:43.868 12:50:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:43.868 12:50:42 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:43.868 12:50:42 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:43.868 12:50:42 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:43.868 12:50:42 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:43.868 12:50:42 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:43.868 12:50:42 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:43.868 12:50:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.868 12:50:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.868 12:50:42 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.868 12:50:42 -- paths/export.sh@5 -- # export PATH 00:22:43.869 12:50:42 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.869 12:50:42 -- nvmf/common.sh@47 -- # : 0 00:22:43.869 12:50:42 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:43.869 12:50:42 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:43.869 12:50:42 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:43.869 12:50:42 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:43.869 12:50:42 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:43.869 12:50:42 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:43.869 12:50:42 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:43.869 12:50:42 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:43.869 12:50:42 -- target/dif.sh@15 -- # NULL_META=16 00:22:43.869 12:50:42 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:22:43.869 12:50:42 -- target/dif.sh@15 -- # NULL_SIZE=64 00:22:43.869 12:50:42 -- target/dif.sh@15 -- # NULL_DIF=1 00:22:43.869 12:50:42 -- target/dif.sh@135 -- # nvmftestinit 00:22:43.869 12:50:42 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:22:43.869 12:50:42 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:43.869 12:50:42 -- nvmf/common.sh@437 -- # prepare_net_devs 00:22:43.869 12:50:42 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:22:43.869 12:50:42 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:22:43.869 12:50:42 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:43.869 12:50:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:22:43.869 12:50:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:43.869 12:50:42 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:22:43.869 12:50:42 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:22:43.869 12:50:42 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:43.869 12:50:42 -- common/autotest_common.sh@10 -- # set +x 00:22:46.423 12:50:45 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:46.423 12:50:45 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:46.423 12:50:45 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:46.423 12:50:45 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:46.423 12:50:45 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:46.423 12:50:45 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:46.423 12:50:45 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:46.423 12:50:45 -- nvmf/common.sh@295 -- # net_devs=() 00:22:46.423 12:50:45 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:46.423 12:50:45 -- nvmf/common.sh@296 -- # e810=() 00:22:46.423 12:50:45 -- nvmf/common.sh@296 -- # local -ga e810 00:22:46.423 12:50:45 -- nvmf/common.sh@297 -- # x722=() 00:22:46.423 12:50:45 -- nvmf/common.sh@297 -- # local -ga x722 00:22:46.423 12:50:45 -- nvmf/common.sh@298 -- # mlx=() 00:22:46.423 12:50:45 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:46.423 12:50:45 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:46.423 12:50:45 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:46.423 12:50:45 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:46.423 12:50:45 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 
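The device walk that follows resolves each supported PCI function to its kernel net interfaces through sysfs, which is how the harness ends up with cvl_0_0 and cvl_0_1. The same lookup works standalone; a sketch using the 0000:82:00.0 address reported in the log:

    # List the kernel net interfaces backed by one PCI function, mirroring
    # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) in nvmf/common.sh
    pci=0000:82:00.0
    for dev in "/sys/bus/pci/devices/$pci/net/"*; do
        [ -e "$dev" ] && echo "${dev##*/}"
    done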
00:22:46.423 12:50:45 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:46.423 12:50:45 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:46.423 12:50:45 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:46.423 12:50:45 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:46.423 12:50:45 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:46.423 12:50:45 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:46.423 12:50:45 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:46.423 12:50:45 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:46.423 12:50:45 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:46.423 12:50:45 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:46.423 12:50:45 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:46.423 12:50:45 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:46.423 12:50:45 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:46.423 12:50:45 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:46.423 12:50:45 -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:22:46.423 Found 0000:82:00.0 (0x8086 - 0x159b) 00:22:46.423 12:50:45 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:46.423 12:50:45 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:46.423 12:50:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:46.423 12:50:45 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:46.423 12:50:45 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:46.423 12:50:45 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:46.423 12:50:45 -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:22:46.423 Found 0000:82:00.1 (0x8086 - 0x159b) 00:22:46.423 12:50:45 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:46.423 12:50:45 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:46.423 12:50:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:46.423 12:50:45 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:46.423 12:50:45 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:46.423 12:50:45 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:46.423 12:50:45 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:46.423 12:50:45 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:46.423 12:50:45 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:46.423 12:50:45 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:46.423 12:50:45 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:46.423 12:50:45 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:46.423 12:50:45 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:22:46.423 Found net devices under 0000:82:00.0: cvl_0_0 00:22:46.423 12:50:45 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:46.423 12:50:45 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:46.423 12:50:45 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:46.423 12:50:45 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:46.423 12:50:45 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:46.423 12:50:45 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:22:46.423 Found net devices under 0000:82:00.1: cvl_0_1 00:22:46.423 12:50:45 -- nvmf/common.sh@390 -- # 
net_devs+=("${pci_net_devs[@]}") 00:22:46.423 12:50:45 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:22:46.423 12:50:45 -- nvmf/common.sh@403 -- # is_hw=yes 00:22:46.423 12:50:45 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:22:46.423 12:50:45 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:22:46.423 12:50:45 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:22:46.423 12:50:45 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:46.423 12:50:45 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:46.423 12:50:45 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:46.423 12:50:45 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:46.423 12:50:45 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:46.423 12:50:45 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:46.423 12:50:45 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:46.423 12:50:45 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:46.423 12:50:45 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:46.423 12:50:45 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:46.423 12:50:45 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:46.423 12:50:45 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:46.423 12:50:45 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:46.423 12:50:45 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:46.423 12:50:45 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:46.423 12:50:45 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:46.423 12:50:45 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:46.683 12:50:45 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:46.683 12:50:45 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:46.683 12:50:45 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:46.683 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:46.683 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.199 ms 00:22:46.683 00:22:46.683 --- 10.0.0.2 ping statistics --- 00:22:46.683 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:46.683 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:22:46.683 12:50:45 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:46.683 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:46.683 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:22:46.683 00:22:46.683 --- 10.0.0.1 ping statistics --- 00:22:46.683 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:46.683 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:22:46.683 12:50:45 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:46.683 12:50:45 -- nvmf/common.sh@411 -- # return 0 00:22:46.683 12:50:45 -- nvmf/common.sh@439 -- # '[' iso == iso ']' 00:22:46.683 12:50:45 -- nvmf/common.sh@440 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:22:48.063 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:22:48.063 0000:81:00.0 (8086 0a54): Already using the vfio-pci driver 00:22:48.063 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:22:48.063 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:22:48.063 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:22:48.063 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:22:48.063 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:22:48.063 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:22:48.063 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:22:48.063 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:22:48.063 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:22:48.063 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:22:48.063 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:22:48.063 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:22:48.063 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:22:48.063 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:22:48.063 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:22:48.063 12:50:46 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:48.063 12:50:46 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:22:48.063 12:50:46 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:22:48.063 12:50:46 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:48.063 12:50:46 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:22:48.063 12:50:46 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:22:48.063 12:50:46 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:22:48.063 12:50:46 -- target/dif.sh@137 -- # nvmfappstart 00:22:48.063 12:50:46 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:22:48.063 12:50:46 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:48.063 12:50:46 -- common/autotest_common.sh@10 -- # set +x 00:22:48.063 12:50:46 -- nvmf/common.sh@470 -- # nvmfpid=1280650 00:22:48.063 12:50:46 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:22:48.063 12:50:46 -- nvmf/common.sh@471 -- # waitforlisten 1280650 00:22:48.063 12:50:46 -- common/autotest_common.sh@817 -- # '[' -z 1280650 ']' 00:22:48.063 12:50:46 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:48.063 12:50:46 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:48.063 12:50:46 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:48.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
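The target launch traced here runs nvmf_tgt inside the test's network namespace and then blocks until the JSON-RPC socket answers. A simplified stand-in for the harness's nvmfappstart/waitforlisten pair, assuming the namespace from the log and the default /var/tmp/spdk.sock socket; the -r connection-retry flag on rpc.py is an assumption about this SPDK revision:

    # Start the NVMe-oF target in the namespace set up earlier, then poll
    # /var/tmp/spdk.sock until the RPC server is ready to take commands.
    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF &
    nvmfpid=$!
    scripts/rpc.py -r 100 rpc_get_methods > /dev/null
    echo "nvmf_tgt (pid $nvmfpid) is up on /var/tmp/spdk.sock"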
00:22:48.063 12:50:46 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:48.063 12:50:46 -- common/autotest_common.sh@10 -- # set +x 00:22:48.063 [2024-04-16 12:50:47.045689] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 00:22:48.063 [2024-04-16 12:50:47.045768] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:48.063 EAL: No free 2048 kB hugepages reported on node 1 00:22:48.063 [2024-04-16 12:50:47.118065] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:48.321 [2024-04-16 12:50:47.222119] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:48.321 [2024-04-16 12:50:47.222172] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:48.321 [2024-04-16 12:50:47.222201] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:48.321 [2024-04-16 12:50:47.222220] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:48.321 [2024-04-16 12:50:47.222231] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:48.321 [2024-04-16 12:50:47.222259] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:49.256 12:50:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:49.256 12:50:48 -- common/autotest_common.sh@850 -- # return 0 00:22:49.256 12:50:48 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:22:49.256 12:50:48 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:49.256 12:50:48 -- common/autotest_common.sh@10 -- # set +x 00:22:49.256 12:50:48 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:49.256 12:50:48 -- target/dif.sh@139 -- # create_transport 00:22:49.256 12:50:48 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:22:49.256 12:50:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:49.256 12:50:48 -- common/autotest_common.sh@10 -- # set +x 00:22:49.256 [2024-04-16 12:50:48.041777] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:49.256 12:50:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:49.256 12:50:48 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:22:49.256 12:50:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:22:49.256 12:50:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:49.256 12:50:48 -- common/autotest_common.sh@10 -- # set +x 00:22:49.256 ************************************ 00:22:49.256 START TEST fio_dif_1_default 00:22:49.257 ************************************ 00:22:49.257 12:50:48 -- common/autotest_common.sh@1111 -- # fio_dif_1 00:22:49.257 12:50:48 -- target/dif.sh@86 -- # create_subsystems 0 00:22:49.257 12:50:48 -- target/dif.sh@28 -- # local sub 00:22:49.257 12:50:48 -- target/dif.sh@30 -- # for sub in "$@" 00:22:49.257 12:50:48 -- target/dif.sh@31 -- # create_subsystem 0 00:22:49.257 12:50:48 -- target/dif.sh@18 -- # local sub_id=0 00:22:49.257 12:50:48 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:22:49.257 12:50:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:49.257 12:50:48 -- common/autotest_common.sh@10 -- # set +x 00:22:49.257 
bdev_null0 00:22:49.257 12:50:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:49.257 12:50:48 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:22:49.257 12:50:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:49.257 12:50:48 -- common/autotest_common.sh@10 -- # set +x 00:22:49.257 12:50:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:49.257 12:50:48 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:22:49.257 12:50:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:49.257 12:50:48 -- common/autotest_common.sh@10 -- # set +x 00:22:49.257 12:50:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:49.257 12:50:48 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:49.257 12:50:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:49.257 12:50:48 -- common/autotest_common.sh@10 -- # set +x 00:22:49.257 [2024-04-16 12:50:48.178303] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:49.257 12:50:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:49.257 12:50:48 -- target/dif.sh@87 -- # fio /dev/fd/62 00:22:49.257 12:50:48 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:22:49.257 12:50:48 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:22:49.257 12:50:48 -- nvmf/common.sh@521 -- # config=() 00:22:49.257 12:50:48 -- nvmf/common.sh@521 -- # local subsystem config 00:22:49.257 12:50:48 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:22:49.257 12:50:48 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:49.257 12:50:48 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:22:49.257 { 00:22:49.257 "params": { 00:22:49.257 "name": "Nvme$subsystem", 00:22:49.257 "trtype": "$TEST_TRANSPORT", 00:22:49.257 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:49.257 "adrfam": "ipv4", 00:22:49.257 "trsvcid": "$NVMF_PORT", 00:22:49.257 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:49.257 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:49.257 "hdgst": ${hdgst:-false}, 00:22:49.257 "ddgst": ${ddgst:-false} 00:22:49.257 }, 00:22:49.257 "method": "bdev_nvme_attach_controller" 00:22:49.257 } 00:22:49.257 EOF 00:22:49.257 )") 00:22:49.257 12:50:48 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:49.257 12:50:48 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:22:49.257 12:50:48 -- target/dif.sh@82 -- # gen_fio_conf 00:22:49.257 12:50:48 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:49.257 12:50:48 -- common/autotest_common.sh@1325 -- # local sanitizers 00:22:49.257 12:50:48 -- target/dif.sh@54 -- # local file 00:22:49.257 12:50:48 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:22:49.257 12:50:48 -- common/autotest_common.sh@1327 -- # shift 00:22:49.257 12:50:48 -- target/dif.sh@56 -- # cat 00:22:49.257 12:50:48 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:22:49.257 12:50:48 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:22:49.257 12:50:48 -- nvmf/common.sh@543 -- # cat 00:22:49.257 12:50:48 -- common/autotest_common.sh@1331 -- # 
ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:22:49.257 12:50:48 -- common/autotest_common.sh@1331 -- # grep libasan 00:22:49.257 12:50:48 -- target/dif.sh@72 -- # (( file = 1 )) 00:22:49.257 12:50:48 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:22:49.257 12:50:48 -- target/dif.sh@72 -- # (( file <= files )) 00:22:49.257 12:50:48 -- nvmf/common.sh@545 -- # jq . 00:22:49.257 12:50:48 -- nvmf/common.sh@546 -- # IFS=, 00:22:49.257 12:50:48 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:22:49.257 "params": { 00:22:49.257 "name": "Nvme0", 00:22:49.257 "trtype": "tcp", 00:22:49.257 "traddr": "10.0.0.2", 00:22:49.257 "adrfam": "ipv4", 00:22:49.257 "trsvcid": "4420", 00:22:49.257 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:49.257 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:49.257 "hdgst": false, 00:22:49.257 "ddgst": false 00:22:49.257 }, 00:22:49.257 "method": "bdev_nvme_attach_controller" 00:22:49.257 }' 00:22:49.257 12:50:48 -- common/autotest_common.sh@1331 -- # asan_lib= 00:22:49.257 12:50:48 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:22:49.257 12:50:48 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:22:49.257 12:50:48 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:22:49.257 12:50:48 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:22:49.257 12:50:48 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:22:49.257 12:50:48 -- common/autotest_common.sh@1331 -- # asan_lib= 00:22:49.257 12:50:48 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:22:49.257 12:50:48 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:22:49.257 12:50:48 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:49.515 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:22:49.515 fio-3.35 00:22:49.515 Starting 1 thread 00:22:49.515 EAL: No free 2048 kB hugepages reported on node 1 00:22:50.080 [2024-04-16 12:50:48.950021] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:22:50.080 [2024-04-16 12:50:48.950081] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:23:00.051 00:23:00.051 filename0: (groupid=0, jobs=1): err= 0: pid=1280940: Tue Apr 16 12:50:59 2024 00:23:00.051 read: IOPS=96, BW=385KiB/s (394kB/s)(3856KiB/10027msec) 00:23:00.051 slat (nsec): min=4611, max=91000, avg=9526.27, stdev=4762.70 00:23:00.051 clat (usec): min=40816, max=46477, avg=41573.37, stdev=585.68 00:23:00.051 lat (usec): min=40824, max=46503, avg=41582.90, stdev=585.61 00:23:00.051 clat percentiles (usec): 00:23:00.051 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:23:00.051 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41681], 60.00th=[42206], 00:23:00.051 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:23:00.051 | 99.00th=[42206], 99.50th=[42206], 99.90th=[46400], 99.95th=[46400], 00:23:00.051 | 99.99th=[46400] 00:23:00.051 bw ( KiB/s): min= 352, max= 416, per=99.85%, avg=384.00, stdev=10.38, samples=20 00:23:00.051 iops : min= 88, max= 104, avg=96.00, stdev= 2.60, samples=20 00:23:00.051 lat (msec) : 50=100.00% 00:23:00.051 cpu : usr=89.76%, sys=9.96%, ctx=28, majf=0, minf=295 00:23:00.051 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:00.051 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:00.051 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:00.051 issued rwts: total=964,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:00.051 latency : target=0, window=0, percentile=100.00%, depth=4 00:23:00.051 00:23:00.051 Run status group 0 (all jobs): 00:23:00.051 READ: bw=385KiB/s (394kB/s), 385KiB/s-385KiB/s (394kB/s-394kB/s), io=3856KiB (3949kB), run=10027-10027msec 00:23:00.309 12:50:59 -- target/dif.sh@88 -- # destroy_subsystems 0 00:23:00.309 12:50:59 -- target/dif.sh@43 -- # local sub 00:23:00.309 12:50:59 -- target/dif.sh@45 -- # for sub in "$@" 00:23:00.309 12:50:59 -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:00.309 12:50:59 -- target/dif.sh@36 -- # local sub_id=0 00:23:00.309 12:50:59 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:00.309 12:50:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:00.309 12:50:59 -- common/autotest_common.sh@10 -- # set +x 00:23:00.309 12:50:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:00.309 12:50:59 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:00.309 12:50:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:00.309 12:50:59 -- common/autotest_common.sh@10 -- # set +x 00:23:00.309 12:50:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:00.309 00:23:00.309 real 0m11.186s 00:23:00.309 user 0m10.216s 00:23:00.309 sys 0m1.283s 00:23:00.309 12:50:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:00.309 12:50:59 -- common/autotest_common.sh@10 -- # set +x 00:23:00.309 ************************************ 00:23:00.309 END TEST fio_dif_1_default 00:23:00.309 ************************************ 00:23:00.309 12:50:59 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:23:00.309 12:50:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:23:00.309 12:50:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:00.309 12:50:59 -- common/autotest_common.sh@10 -- # set +x 00:23:00.569 ************************************ 00:23:00.569 START TEST fio_dif_1_multi_subsystems 00:23:00.569 
************************************ 00:23:00.569 12:50:59 -- common/autotest_common.sh@1111 -- # fio_dif_1_multi_subsystems 00:23:00.569 12:50:59 -- target/dif.sh@92 -- # local files=1 00:23:00.569 12:50:59 -- target/dif.sh@94 -- # create_subsystems 0 1 00:23:00.569 12:50:59 -- target/dif.sh@28 -- # local sub 00:23:00.569 12:50:59 -- target/dif.sh@30 -- # for sub in "$@" 00:23:00.569 12:50:59 -- target/dif.sh@31 -- # create_subsystem 0 00:23:00.569 12:50:59 -- target/dif.sh@18 -- # local sub_id=0 00:23:00.569 12:50:59 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:23:00.569 12:50:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:00.569 12:50:59 -- common/autotest_common.sh@10 -- # set +x 00:23:00.569 bdev_null0 00:23:00.569 12:50:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:00.569 12:50:59 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:00.569 12:50:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:00.569 12:50:59 -- common/autotest_common.sh@10 -- # set +x 00:23:00.569 12:50:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:00.569 12:50:59 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:00.569 12:50:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:00.569 12:50:59 -- common/autotest_common.sh@10 -- # set +x 00:23:00.569 12:50:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:00.569 12:50:59 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:00.569 12:50:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:00.569 12:50:59 -- common/autotest_common.sh@10 -- # set +x 00:23:00.569 [2024-04-16 12:50:59.491473] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:00.569 12:50:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:00.569 12:50:59 -- target/dif.sh@30 -- # for sub in "$@" 00:23:00.569 12:50:59 -- target/dif.sh@31 -- # create_subsystem 1 00:23:00.569 12:50:59 -- target/dif.sh@18 -- # local sub_id=1 00:23:00.569 12:50:59 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:23:00.569 12:50:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:00.569 12:50:59 -- common/autotest_common.sh@10 -- # set +x 00:23:00.569 bdev_null1 00:23:00.569 12:50:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:00.569 12:50:59 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:23:00.569 12:50:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:00.569 12:50:59 -- common/autotest_common.sh@10 -- # set +x 00:23:00.569 12:50:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:00.569 12:50:59 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:23:00.569 12:50:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:00.569 12:50:59 -- common/autotest_common.sh@10 -- # set +x 00:23:00.569 12:50:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:00.569 12:50:59 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:00.569 12:50:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:00.569 12:50:59 -- common/autotest_common.sh@10 -- # set +x 
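Both subsystems above are provisioned with the same four-step RPC sequence; rpc_cmd is the harness wrapper around the target's JSON-RPC socket (/var/tmp/spdk.sock, per the waitforlisten message earlier). A standalone sketch of the cnode1 sequence via scripts/rpc.py, with the arguments exactly as logged:

# 64 MB null bdev, 512 B blocks plus 16 B of metadata, protection information type 1
scripts/rpc.py bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    --serial-number 53313233-1 --allow-any-host
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1
# expose the namespace on the target-side address, NVMe/TCP port 4420
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420

Because the transport was created with --dif-insert-or-strip, the target side generates and verifies that 16-byte protection metadata on behalf of the initiator; that target-side DIF handling is what these fio jobs exercise.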
00:23:00.569 12:50:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:00.569 12:50:59 -- target/dif.sh@95 -- # fio /dev/fd/62 00:23:00.569 12:50:59 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:23:00.569 12:50:59 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:23:00.569 12:50:59 -- nvmf/common.sh@521 -- # config=() 00:23:00.569 12:50:59 -- nvmf/common.sh@521 -- # local subsystem config 00:23:00.569 12:50:59 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:00.569 12:50:59 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:23:00.569 12:50:59 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:23:00.569 { 00:23:00.569 "params": { 00:23:00.569 "name": "Nvme$subsystem", 00:23:00.569 "trtype": "$TEST_TRANSPORT", 00:23:00.569 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:00.569 "adrfam": "ipv4", 00:23:00.569 "trsvcid": "$NVMF_PORT", 00:23:00.569 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:00.569 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:00.569 "hdgst": ${hdgst:-false}, 00:23:00.569 "ddgst": ${ddgst:-false} 00:23:00.569 }, 00:23:00.569 "method": "bdev_nvme_attach_controller" 00:23:00.569 } 00:23:00.569 EOF 00:23:00.569 )") 00:23:00.569 12:50:59 -- target/dif.sh@82 -- # gen_fio_conf 00:23:00.569 12:50:59 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:00.569 12:50:59 -- target/dif.sh@54 -- # local file 00:23:00.569 12:50:59 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:23:00.569 12:50:59 -- target/dif.sh@56 -- # cat 00:23:00.569 12:50:59 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:00.569 12:50:59 -- common/autotest_common.sh@1325 -- # local sanitizers 00:23:00.569 12:50:59 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:23:00.569 12:50:59 -- common/autotest_common.sh@1327 -- # shift 00:23:00.569 12:50:59 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:23:00.569 12:50:59 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:23:00.569 12:50:59 -- nvmf/common.sh@543 -- # cat 00:23:00.569 12:50:59 -- target/dif.sh@72 -- # (( file = 1 )) 00:23:00.569 12:50:59 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:23:00.569 12:50:59 -- target/dif.sh@72 -- # (( file <= files )) 00:23:00.569 12:50:59 -- target/dif.sh@73 -- # cat 00:23:00.569 12:50:59 -- common/autotest_common.sh@1331 -- # grep libasan 00:23:00.569 12:50:59 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:23:00.569 12:50:59 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:23:00.569 12:50:59 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:23:00.569 { 00:23:00.569 "params": { 00:23:00.569 "name": "Nvme$subsystem", 00:23:00.569 "trtype": "$TEST_TRANSPORT", 00:23:00.569 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:00.569 "adrfam": "ipv4", 00:23:00.569 "trsvcid": "$NVMF_PORT", 00:23:00.569 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:00.569 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:00.569 "hdgst": ${hdgst:-false}, 00:23:00.569 "ddgst": ${ddgst:-false} 00:23:00.569 }, 00:23:00.569 "method": "bdev_nvme_attach_controller" 00:23:00.569 } 00:23:00.569 EOF 00:23:00.569 )") 00:23:00.569 12:50:59 -- target/dif.sh@72 -- # (( file++ )) 00:23:00.569 
12:50:59 -- target/dif.sh@72 -- # (( file <= files )) 00:23:00.569 12:50:59 -- nvmf/common.sh@543 -- # cat 00:23:00.569 12:50:59 -- nvmf/common.sh@545 -- # jq . 00:23:00.569 12:50:59 -- nvmf/common.sh@546 -- # IFS=, 00:23:00.569 12:50:59 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:23:00.569 "params": { 00:23:00.569 "name": "Nvme0", 00:23:00.569 "trtype": "tcp", 00:23:00.569 "traddr": "10.0.0.2", 00:23:00.569 "adrfam": "ipv4", 00:23:00.569 "trsvcid": "4420", 00:23:00.569 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:00.569 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:00.569 "hdgst": false, 00:23:00.569 "ddgst": false 00:23:00.569 }, 00:23:00.569 "method": "bdev_nvme_attach_controller" 00:23:00.569 },{ 00:23:00.569 "params": { 00:23:00.569 "name": "Nvme1", 00:23:00.569 "trtype": "tcp", 00:23:00.569 "traddr": "10.0.0.2", 00:23:00.569 "adrfam": "ipv4", 00:23:00.569 "trsvcid": "4420", 00:23:00.569 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:00.569 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:00.569 "hdgst": false, 00:23:00.569 "ddgst": false 00:23:00.569 }, 00:23:00.569 "method": "bdev_nvme_attach_controller" 00:23:00.569 }' 00:23:00.569 12:50:59 -- common/autotest_common.sh@1331 -- # asan_lib= 00:23:00.569 12:50:59 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:23:00.569 12:50:59 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:23:00.569 12:50:59 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:23:00.569 12:50:59 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:23:00.569 12:50:59 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:23:00.569 12:50:59 -- common/autotest_common.sh@1331 -- # asan_lib= 00:23:00.569 12:50:59 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:23:00.570 12:50:59 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:23:00.570 12:50:59 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:00.838 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:23:00.838 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:23:00.838 fio-3.35 00:23:00.838 Starting 2 threads 00:23:00.838 EAL: No free 2048 kB hugepages reported on node 1 00:23:01.403 [2024-04-16 12:51:00.341275] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:23:01.403 [2024-04-16 12:51:00.341357] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:23:13.602 00:23:13.602 filename0: (groupid=0, jobs=1): err= 0: pid=1282450: Tue Apr 16 12:51:10 2024 00:23:13.602 read: IOPS=95, BW=382KiB/s (391kB/s)(3824KiB/10013msec) 00:23:13.602 slat (nsec): min=5173, max=62442, avg=11474.41, stdev=5528.82 00:23:13.602 clat (usec): min=40860, max=43064, avg=41856.42, stdev=402.80 00:23:13.602 lat (usec): min=40868, max=43092, avg=41867.90, stdev=403.46 00:23:13.602 clat percentiles (usec): 00:23:13.602 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41681], 00:23:13.602 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:23:13.602 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:23:13.602 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:23:13.602 | 99.99th=[43254] 00:23:13.602 bw ( KiB/s): min= 352, max= 384, per=33.73%, avg=380.80, stdev= 9.85, samples=20 00:23:13.602 iops : min= 88, max= 96, avg=95.20, stdev= 2.46, samples=20 00:23:13.602 lat (msec) : 50=100.00% 00:23:13.602 cpu : usr=94.88%, sys=4.80%, ctx=16, majf=0, minf=141 00:23:13.602 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:13.602 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.602 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.602 issued rwts: total=956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:13.602 latency : target=0, window=0, percentile=100.00%, depth=4 00:23:13.602 filename1: (groupid=0, jobs=1): err= 0: pid=1282451: Tue Apr 16 12:51:10 2024 00:23:13.602 read: IOPS=186, BW=745KiB/s (763kB/s)(7472KiB/10028msec) 00:23:13.602 slat (nsec): min=4847, max=40192, avg=9855.30, stdev=3911.53 00:23:13.602 clat (usec): min=678, max=42989, avg=21442.81, stdev=20593.52 00:23:13.602 lat (usec): min=685, max=43002, avg=21452.67, stdev=20593.45 00:23:13.602 clat percentiles (usec): 00:23:13.602 | 1.00th=[ 693], 5.00th=[ 725], 10.00th=[ 766], 20.00th=[ 799], 00:23:13.602 | 30.00th=[ 816], 40.00th=[ 865], 50.00th=[41157], 60.00th=[42206], 00:23:13.602 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:23:13.602 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:23:13.602 | 99.99th=[42730] 00:23:13.602 bw ( KiB/s): min= 704, max= 768, per=66.14%, avg=745.60, stdev=29.55, samples=20 00:23:13.602 iops : min= 176, max= 192, avg=186.40, stdev= 7.39, samples=20 00:23:13.602 lat (usec) : 750=8.03%, 1000=41.06% 00:23:13.602 lat (msec) : 2=0.80%, 50=50.11% 00:23:13.602 cpu : usr=94.58%, sys=5.10%, ctx=13, majf=0, minf=168 00:23:13.602 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:13.602 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.603 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.603 issued rwts: total=1868,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:13.603 latency : target=0, window=0, percentile=100.00%, depth=4 00:23:13.603 00:23:13.603 Run status group 0 (all jobs): 00:23:13.603 READ: bw=1126KiB/s (1153kB/s), 382KiB/s-745KiB/s (391kB/s-763kB/s), io=11.0MiB (11.6MB), run=10013-10028msec 00:23:13.603 12:51:10 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:23:13.603 12:51:10 -- target/dif.sh@43 -- # local sub 00:23:13.603 12:51:10 -- target/dif.sh@45 -- # for sub in "$@" 00:23:13.603 12:51:10 -- target/dif.sh@46 -- # destroy_subsystem 0 
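Note that none of these fio invocations writes a config to disk: create_json_sub_conf streams the jq-assembled bdev_nvme_attach_controller JSON shown above to fio on /dev/fd/62, gen_fio_conf streams the job file on /dev/fd/61, and LD_PRELOAD pulls in SPDK's fio bdev plugin. A minimal file-based sketch of the same invocation, assuming local copies bdev.json and job.fio of those two streams (filenames hypothetical; plugin and fio paths as in this run):

LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
/usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf bdev.json job.fio

# job.fio: shape of the generated job, with parameters taken from the
# filename0/filename1 banners above; "Nvme0n1" is assumed from SPDK's
# Nvme<ctrlr>n<ns> naming for the namespace that attaching controller
# "Nvme0" exposes, and thread=1 is required by the SPDK fio plugin
#
#   [global]
#   thread=1
#   [filename0]
#   rw=randread
#   bs=4096
#   iodepth=4
#   filename=Nvme0n1

Feeding both inputs over process-substitution descriptors, as the harness does, keeps each test run self-contained and leaves nothing behind in the workspace.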
00:23:13.603 12:51:10 -- target/dif.sh@36 -- # local sub_id=0 00:23:13.603 12:51:10 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:13.603 12:51:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:13.603 12:51:10 -- common/autotest_common.sh@10 -- # set +x 00:23:13.603 12:51:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:13.603 12:51:10 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:13.603 12:51:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:13.603 12:51:10 -- common/autotest_common.sh@10 -- # set +x 00:23:13.603 12:51:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:13.603 12:51:10 -- target/dif.sh@45 -- # for sub in "$@" 00:23:13.603 12:51:10 -- target/dif.sh@46 -- # destroy_subsystem 1 00:23:13.603 12:51:10 -- target/dif.sh@36 -- # local sub_id=1 00:23:13.603 12:51:10 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:13.603 12:51:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:13.603 12:51:10 -- common/autotest_common.sh@10 -- # set +x 00:23:13.603 12:51:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:13.603 12:51:10 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:23:13.603 12:51:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:13.603 12:51:10 -- common/autotest_common.sh@10 -- # set +x 00:23:13.603 12:51:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:13.603 00:23:13.603 real 0m11.383s 00:23:13.603 user 0m20.336s 00:23:13.603 sys 0m1.291s 00:23:13.603 12:51:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:13.603 12:51:10 -- common/autotest_common.sh@10 -- # set +x 00:23:13.603 ************************************ 00:23:13.603 END TEST fio_dif_1_multi_subsystems 00:23:13.603 ************************************ 00:23:13.603 12:51:10 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:23:13.603 12:51:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:23:13.603 12:51:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:13.603 12:51:10 -- common/autotest_common.sh@10 -- # set +x 00:23:13.603 ************************************ 00:23:13.603 START TEST fio_dif_rand_params 00:23:13.603 ************************************ 00:23:13.603 12:51:10 -- common/autotest_common.sh@1111 -- # fio_dif_rand_params 00:23:13.603 12:51:10 -- target/dif.sh@100 -- # local NULL_DIF 00:23:13.603 12:51:10 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:23:13.603 12:51:10 -- target/dif.sh@103 -- # NULL_DIF=3 00:23:13.603 12:51:10 -- target/dif.sh@103 -- # bs=128k 00:23:13.603 12:51:10 -- target/dif.sh@103 -- # numjobs=3 00:23:13.603 12:51:10 -- target/dif.sh@103 -- # iodepth=3 00:23:13.603 12:51:10 -- target/dif.sh@103 -- # runtime=5 00:23:13.603 12:51:10 -- target/dif.sh@105 -- # create_subsystems 0 00:23:13.603 12:51:10 -- target/dif.sh@28 -- # local sub 00:23:13.603 12:51:10 -- target/dif.sh@30 -- # for sub in "$@" 00:23:13.603 12:51:10 -- target/dif.sh@31 -- # create_subsystem 0 00:23:13.603 12:51:10 -- target/dif.sh@18 -- # local sub_id=0 00:23:13.603 12:51:10 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:23:13.603 12:51:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:13.603 12:51:10 -- common/autotest_common.sh@10 -- # set +x 00:23:13.603 bdev_null0 00:23:13.603 12:51:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:13.603 12:51:10 -- 
target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:13.603 12:51:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:13.603 12:51:10 -- common/autotest_common.sh@10 -- # set +x 00:23:13.603 12:51:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:13.603 12:51:10 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:13.603 12:51:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:13.603 12:51:10 -- common/autotest_common.sh@10 -- # set +x 00:23:13.603 12:51:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:13.603 12:51:10 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:13.603 12:51:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:13.603 12:51:10 -- common/autotest_common.sh@10 -- # set +x 00:23:13.603 [2024-04-16 12:51:11.004214] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:13.603 12:51:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:13.603 12:51:11 -- target/dif.sh@106 -- # fio /dev/fd/62 00:23:13.603 12:51:11 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:23:13.603 12:51:11 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:13.603 12:51:11 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:13.603 12:51:11 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:23:13.603 12:51:11 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:23:13.603 12:51:11 -- target/dif.sh@82 -- # gen_fio_conf 00:23:13.603 12:51:11 -- nvmf/common.sh@521 -- # config=() 00:23:13.603 12:51:11 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:13.603 12:51:11 -- nvmf/common.sh@521 -- # local subsystem config 00:23:13.603 12:51:11 -- target/dif.sh@54 -- # local file 00:23:13.603 12:51:11 -- common/autotest_common.sh@1325 -- # local sanitizers 00:23:13.603 12:51:11 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:23:13.603 12:51:11 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:23:13.603 12:51:11 -- target/dif.sh@56 -- # cat 00:23:13.603 12:51:11 -- common/autotest_common.sh@1327 -- # shift 00:23:13.603 12:51:11 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:23:13.603 { 00:23:13.603 "params": { 00:23:13.603 "name": "Nvme$subsystem", 00:23:13.603 "trtype": "$TEST_TRANSPORT", 00:23:13.603 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:13.603 "adrfam": "ipv4", 00:23:13.603 "trsvcid": "$NVMF_PORT", 00:23:13.603 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:13.603 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:13.603 "hdgst": ${hdgst:-false}, 00:23:13.603 "ddgst": ${ddgst:-false} 00:23:13.603 }, 00:23:13.603 "method": "bdev_nvme_attach_controller" 00:23:13.603 } 00:23:13.603 EOF 00:23:13.603 )") 00:23:13.603 12:51:11 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:23:13.603 12:51:11 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:23:13.603 12:51:11 -- nvmf/common.sh@543 -- # cat 00:23:13.603 12:51:11 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:23:13.603 12:51:11 -- 
common/autotest_common.sh@1331 -- # grep libasan 00:23:13.603 12:51:11 -- target/dif.sh@72 -- # (( file = 1 )) 00:23:13.603 12:51:11 -- target/dif.sh@72 -- # (( file <= files )) 00:23:13.603 12:51:11 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:23:13.603 12:51:11 -- nvmf/common.sh@545 -- # jq . 00:23:13.603 12:51:11 -- nvmf/common.sh@546 -- # IFS=, 00:23:13.603 12:51:11 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:23:13.603 "params": { 00:23:13.603 "name": "Nvme0", 00:23:13.603 "trtype": "tcp", 00:23:13.603 "traddr": "10.0.0.2", 00:23:13.603 "adrfam": "ipv4", 00:23:13.603 "trsvcid": "4420", 00:23:13.603 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:13.603 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:13.603 "hdgst": false, 00:23:13.603 "ddgst": false 00:23:13.603 }, 00:23:13.603 "method": "bdev_nvme_attach_controller" 00:23:13.603 }' 00:23:13.603 12:51:11 -- common/autotest_common.sh@1331 -- # asan_lib= 00:23:13.603 12:51:11 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:23:13.603 12:51:11 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:23:13.603 12:51:11 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:23:13.603 12:51:11 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:23:13.603 12:51:11 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:23:13.603 12:51:11 -- common/autotest_common.sh@1331 -- # asan_lib= 00:23:13.603 12:51:11 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:23:13.603 12:51:11 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:23:13.603 12:51:11 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:13.603 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:23:13.603 ... 00:23:13.603 fio-3.35 00:23:13.603 Starting 3 threads 00:23:13.603 EAL: No free 2048 kB hugepages reported on node 1 00:23:13.603 [2024-04-16 12:51:11.744022] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:23:13.603 [2024-04-16 12:51:11.744079] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:23:18.870 00:23:18.870 filename0: (groupid=0, jobs=1): err= 0: pid=1284450: Tue Apr 16 12:51:16 2024 00:23:18.870 read: IOPS=216, BW=27.0MiB/s (28.4MB/s)(136MiB/5043msec) 00:23:18.870 slat (nsec): min=6580, max=36945, avg=14099.32, stdev=3406.81 00:23:18.870 clat (usec): min=5387, max=56570, avg=13810.13, stdev=11123.04 00:23:18.870 lat (usec): min=5399, max=56586, avg=13824.23, stdev=11122.98 00:23:18.870 clat percentiles (usec): 00:23:18.870 | 1.00th=[ 5866], 5.00th=[ 6390], 10.00th=[ 6915], 20.00th=[ 8586], 00:23:18.870 | 30.00th=[ 9372], 40.00th=[10159], 50.00th=[11207], 60.00th=[12256], 00:23:18.870 | 70.00th=[12911], 80.00th=[13698], 90.00th=[14615], 95.00th=[50070], 00:23:18.870 | 99.00th=[54264], 99.50th=[54789], 99.90th=[56361], 99.95th=[56361], 00:23:18.870 | 99.99th=[56361] 00:23:18.870 bw ( KiB/s): min=24064, max=34560, per=38.12%, avg=27878.40, stdev=3338.81, samples=10 00:23:18.871 iops : min= 188, max= 270, avg=217.80, stdev=26.08, samples=10 00:23:18.871 lat (msec) : 10=39.23%, 20=53.16%, 50=2.47%, 100=5.13% 00:23:18.871 cpu : usr=91.67%, sys=7.10%, ctx=87, majf=0, minf=43 00:23:18.871 IO depths : 1=0.6%, 2=99.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:18.871 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:18.871 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:18.871 issued rwts: total=1091,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:18.871 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:18.871 filename0: (groupid=0, jobs=1): err= 0: pid=1284451: Tue Apr 16 12:51:16 2024 00:23:18.871 read: IOPS=201, BW=25.2MiB/s (26.5MB/s)(126MiB/5006msec) 00:23:18.871 slat (nsec): min=5190, max=34975, avg=13364.82, stdev=3631.89 00:23:18.871 clat (usec): min=5799, max=57770, avg=14835.35, stdev=11736.39 00:23:18.871 lat (usec): min=5812, max=57783, avg=14848.71, stdev=11736.35 00:23:18.871 clat percentiles (usec): 00:23:18.871 | 1.00th=[ 6194], 5.00th=[ 6915], 10.00th=[ 8291], 20.00th=[ 9241], 00:23:18.871 | 30.00th=[ 9765], 40.00th=[10552], 50.00th=[11863], 60.00th=[12780], 00:23:18.871 | 70.00th=[13304], 80.00th=[14222], 90.00th=[16319], 95.00th=[50594], 00:23:18.871 | 99.00th=[54264], 99.50th=[55313], 99.90th=[56886], 99.95th=[57934], 00:23:18.871 | 99.99th=[57934] 00:23:18.871 bw ( KiB/s): min=21760, max=29696, per=35.29%, avg=25804.80, stdev=2759.32, samples=10 00:23:18.871 iops : min= 170, max= 232, avg=201.60, stdev=21.56, samples=10 00:23:18.871 lat (msec) : 10=33.73%, 20=57.37%, 50=2.47%, 100=6.43% 00:23:18.871 cpu : usr=92.37%, sys=7.17%, ctx=13, majf=0, minf=73 00:23:18.871 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:18.871 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:18.871 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:18.871 issued rwts: total=1011,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:18.871 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:18.871 filename0: (groupid=0, jobs=1): err= 0: pid=1284452: Tue Apr 16 12:51:16 2024 00:23:18.871 read: IOPS=154, BW=19.3MiB/s (20.3MB/s)(97.4MiB/5036msec) 00:23:18.871 slat (nsec): min=5270, max=43862, avg=13984.35, stdev=3671.09 00:23:18.871 clat (usec): min=4934, max=59131, avg=19371.54, stdev=15244.08 00:23:18.871 lat (usec): min=4946, max=59144, avg=19385.53, stdev=15244.19 00:23:18.871 clat 
percentiles (usec): 00:23:18.871 | 1.00th=[ 6259], 5.00th=[ 7308], 10.00th=[ 8586], 20.00th=[10290], 00:23:18.871 | 30.00th=[11469], 40.00th=[13304], 50.00th=[14615], 60.00th=[15270], 00:23:18.871 | 70.00th=[15926], 80.00th=[17171], 90.00th=[53216], 95.00th=[55837], 00:23:18.871 | 99.00th=[57934], 99.50th=[58459], 99.90th=[58983], 99.95th=[58983], 00:23:18.871 | 99.99th=[58983] 00:23:18.871 bw ( KiB/s): min=15360, max=25344, per=27.17%, avg=19869.80, stdev=3229.02, samples=10 00:23:18.871 iops : min= 120, max= 198, avg=155.20, stdev=25.21, samples=10 00:23:18.871 lat (msec) : 10=17.33%, 20=66.50%, 50=2.05%, 100=14.12% 00:23:18.871 cpu : usr=88.62%, sys=8.34%, ctx=331, majf=0, minf=46 00:23:18.871 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:18.871 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:18.871 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:18.871 issued rwts: total=779,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:18.871 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:18.871 00:23:18.871 Run status group 0 (all jobs): 00:23:18.871 READ: bw=71.4MiB/s (74.9MB/s), 19.3MiB/s-27.0MiB/s (20.3MB/s-28.4MB/s), io=360MiB (378MB), run=5006-5043msec 00:23:18.871 12:51:17 -- target/dif.sh@107 -- # destroy_subsystems 0 00:23:18.871 12:51:17 -- target/dif.sh@43 -- # local sub 00:23:18.871 12:51:17 -- target/dif.sh@45 -- # for sub in "$@" 00:23:18.871 12:51:17 -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:18.871 12:51:17 -- target/dif.sh@36 -- # local sub_id=0 00:23:18.871 12:51:17 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:18.871 12:51:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:18.871 12:51:17 -- common/autotest_common.sh@10 -- # set +x 00:23:18.871 12:51:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:18.871 12:51:17 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:18.871 12:51:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:18.871 12:51:17 -- common/autotest_common.sh@10 -- # set +x 00:23:18.871 12:51:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:18.871 12:51:17 -- target/dif.sh@109 -- # NULL_DIF=2 00:23:18.871 12:51:17 -- target/dif.sh@109 -- # bs=4k 00:23:18.871 12:51:17 -- target/dif.sh@109 -- # numjobs=8 00:23:18.871 12:51:17 -- target/dif.sh@109 -- # iodepth=16 00:23:18.871 12:51:17 -- target/dif.sh@109 -- # runtime= 00:23:18.871 12:51:17 -- target/dif.sh@109 -- # files=2 00:23:18.871 12:51:17 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:23:18.871 12:51:17 -- target/dif.sh@28 -- # local sub 00:23:18.871 12:51:17 -- target/dif.sh@30 -- # for sub in "$@" 00:23:18.871 12:51:17 -- target/dif.sh@31 -- # create_subsystem 0 00:23:18.871 12:51:17 -- target/dif.sh@18 -- # local sub_id=0 00:23:18.871 12:51:17 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:23:18.871 12:51:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:18.871 12:51:17 -- common/autotest_common.sh@10 -- # set +x 00:23:18.871 bdev_null0 00:23:18.871 12:51:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:18.871 12:51:17 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:18.871 12:51:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:18.871 12:51:17 -- common/autotest_common.sh@10 -- # set +x 00:23:18.871 12:51:17 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:18.871 12:51:17 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:18.871 12:51:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:18.871 12:51:17 -- common/autotest_common.sh@10 -- # set +x 00:23:18.871 12:51:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:18.871 12:51:17 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:18.871 12:51:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:18.871 12:51:17 -- common/autotest_common.sh@10 -- # set +x 00:23:18.871 [2024-04-16 12:51:17.191595] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:18.871 12:51:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:18.871 12:51:17 -- target/dif.sh@30 -- # for sub in "$@" 00:23:18.871 12:51:17 -- target/dif.sh@31 -- # create_subsystem 1 00:23:18.871 12:51:17 -- target/dif.sh@18 -- # local sub_id=1 00:23:18.871 12:51:17 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:23:18.871 12:51:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:18.871 12:51:17 -- common/autotest_common.sh@10 -- # set +x 00:23:18.871 bdev_null1 00:23:18.871 12:51:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:18.871 12:51:17 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:23:18.871 12:51:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:18.871 12:51:17 -- common/autotest_common.sh@10 -- # set +x 00:23:18.871 12:51:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:18.871 12:51:17 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:23:18.871 12:51:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:18.871 12:51:17 -- common/autotest_common.sh@10 -- # set +x 00:23:18.871 12:51:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:18.871 12:51:17 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:18.871 12:51:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:18.871 12:51:17 -- common/autotest_common.sh@10 -- # set +x 00:23:18.871 12:51:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:18.871 12:51:17 -- target/dif.sh@30 -- # for sub in "$@" 00:23:18.871 12:51:17 -- target/dif.sh@31 -- # create_subsystem 2 00:23:18.871 12:51:17 -- target/dif.sh@18 -- # local sub_id=2 00:23:18.871 12:51:17 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:23:18.871 12:51:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:18.871 12:51:17 -- common/autotest_common.sh@10 -- # set +x 00:23:18.871 bdev_null2 00:23:18.871 12:51:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:18.871 12:51:17 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:23:18.871 12:51:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:18.871 12:51:17 -- common/autotest_common.sh@10 -- # set +x 00:23:18.871 12:51:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:18.871 12:51:17 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:23:18.871 12:51:17 -- common/autotest_common.sh@549 -- # xtrace_disable 
00:23:18.871 12:51:17 -- common/autotest_common.sh@10 -- # set +x 00:23:18.871 12:51:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:18.871 12:51:17 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:23:18.871 12:51:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:18.871 12:51:17 -- common/autotest_common.sh@10 -- # set +x 00:23:18.871 12:51:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:18.871 12:51:17 -- target/dif.sh@112 -- # fio /dev/fd/62 00:23:18.871 12:51:17 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:23:18.871 12:51:17 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:23:18.871 12:51:17 -- nvmf/common.sh@521 -- # config=() 00:23:18.871 12:51:17 -- nvmf/common.sh@521 -- # local subsystem config 00:23:18.871 12:51:17 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:23:18.871 12:51:17 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:18.871 12:51:17 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:23:18.871 { 00:23:18.871 "params": { 00:23:18.871 "name": "Nvme$subsystem", 00:23:18.871 "trtype": "$TEST_TRANSPORT", 00:23:18.871 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:18.871 "adrfam": "ipv4", 00:23:18.871 "trsvcid": "$NVMF_PORT", 00:23:18.871 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:18.871 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:18.871 "hdgst": ${hdgst:-false}, 00:23:18.871 "ddgst": ${ddgst:-false} 00:23:18.871 }, 00:23:18.871 "method": "bdev_nvme_attach_controller" 00:23:18.871 } 00:23:18.871 EOF 00:23:18.871 )") 00:23:18.871 12:51:17 -- target/dif.sh@82 -- # gen_fio_conf 00:23:18.872 12:51:17 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:18.872 12:51:17 -- target/dif.sh@54 -- # local file 00:23:18.872 12:51:17 -- target/dif.sh@56 -- # cat 00:23:18.872 12:51:17 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:23:18.872 12:51:17 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:18.872 12:51:17 -- common/autotest_common.sh@1325 -- # local sanitizers 00:23:18.872 12:51:17 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:23:18.872 12:51:17 -- common/autotest_common.sh@1327 -- # shift 00:23:18.872 12:51:17 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:23:18.872 12:51:17 -- nvmf/common.sh@543 -- # cat 00:23:18.872 12:51:17 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:23:18.872 12:51:17 -- target/dif.sh@72 -- # (( file = 1 )) 00:23:18.872 12:51:17 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:23:18.872 12:51:17 -- target/dif.sh@72 -- # (( file <= files )) 00:23:18.872 12:51:17 -- target/dif.sh@73 -- # cat 00:23:18.872 12:51:17 -- common/autotest_common.sh@1331 -- # grep libasan 00:23:18.872 12:51:17 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:23:18.872 12:51:17 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:23:18.872 12:51:17 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:23:18.872 { 00:23:18.872 "params": { 00:23:18.872 "name": "Nvme$subsystem", 00:23:18.872 "trtype": "$TEST_TRANSPORT", 00:23:18.872 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:18.872 "adrfam": "ipv4", 
00:23:18.872 "trsvcid": "$NVMF_PORT", 00:23:18.872 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:18.872 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:18.872 "hdgst": ${hdgst:-false}, 00:23:18.872 "ddgst": ${ddgst:-false} 00:23:18.872 }, 00:23:18.872 "method": "bdev_nvme_attach_controller" 00:23:18.872 } 00:23:18.872 EOF 00:23:18.872 )") 00:23:18.872 12:51:17 -- nvmf/common.sh@543 -- # cat 00:23:18.872 12:51:17 -- target/dif.sh@72 -- # (( file++ )) 00:23:18.872 12:51:17 -- target/dif.sh@72 -- # (( file <= files )) 00:23:18.872 12:51:17 -- target/dif.sh@73 -- # cat 00:23:18.872 12:51:17 -- target/dif.sh@72 -- # (( file++ )) 00:23:18.872 12:51:17 -- target/dif.sh@72 -- # (( file <= files )) 00:23:18.872 12:51:17 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:23:18.872 12:51:17 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:23:18.872 { 00:23:18.872 "params": { 00:23:18.872 "name": "Nvme$subsystem", 00:23:18.872 "trtype": "$TEST_TRANSPORT", 00:23:18.872 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:18.872 "adrfam": "ipv4", 00:23:18.872 "trsvcid": "$NVMF_PORT", 00:23:18.872 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:18.872 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:18.872 "hdgst": ${hdgst:-false}, 00:23:18.872 "ddgst": ${ddgst:-false} 00:23:18.872 }, 00:23:18.872 "method": "bdev_nvme_attach_controller" 00:23:18.872 } 00:23:18.872 EOF 00:23:18.872 )") 00:23:18.872 12:51:17 -- nvmf/common.sh@543 -- # cat 00:23:18.872 12:51:17 -- nvmf/common.sh@545 -- # jq . 00:23:18.872 12:51:17 -- nvmf/common.sh@546 -- # IFS=, 00:23:18.872 12:51:17 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:23:18.872 "params": { 00:23:18.872 "name": "Nvme0", 00:23:18.872 "trtype": "tcp", 00:23:18.872 "traddr": "10.0.0.2", 00:23:18.872 "adrfam": "ipv4", 00:23:18.872 "trsvcid": "4420", 00:23:18.872 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:18.872 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:18.872 "hdgst": false, 00:23:18.872 "ddgst": false 00:23:18.872 }, 00:23:18.872 "method": "bdev_nvme_attach_controller" 00:23:18.872 },{ 00:23:18.872 "params": { 00:23:18.872 "name": "Nvme1", 00:23:18.872 "trtype": "tcp", 00:23:18.872 "traddr": "10.0.0.2", 00:23:18.872 "adrfam": "ipv4", 00:23:18.872 "trsvcid": "4420", 00:23:18.872 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:18.872 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:18.872 "hdgst": false, 00:23:18.872 "ddgst": false 00:23:18.872 }, 00:23:18.872 "method": "bdev_nvme_attach_controller" 00:23:18.872 },{ 00:23:18.872 "params": { 00:23:18.872 "name": "Nvme2", 00:23:18.872 "trtype": "tcp", 00:23:18.872 "traddr": "10.0.0.2", 00:23:18.872 "adrfam": "ipv4", 00:23:18.872 "trsvcid": "4420", 00:23:18.872 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:18.872 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:18.872 "hdgst": false, 00:23:18.872 "ddgst": false 00:23:18.872 }, 00:23:18.872 "method": "bdev_nvme_attach_controller" 00:23:18.872 }' 00:23:18.872 12:51:17 -- common/autotest_common.sh@1331 -- # asan_lib= 00:23:18.872 12:51:17 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:23:18.872 12:51:17 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:23:18.872 12:51:17 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:23:18.872 12:51:17 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:23:18.872 12:51:17 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:23:18.872 12:51:17 -- common/autotest_common.sh@1331 -- # asan_lib= 
00:23:18.872 12:51:17 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:23:18.872 12:51:17 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:23:18.872 12:51:17 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:18.872 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:23:18.872 ... 00:23:18.872 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:23:18.872 ... 00:23:18.872 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:23:18.872 ... 00:23:18.872 fio-3.35 00:23:18.872 Starting 24 threads 00:23:18.872 EAL: No free 2048 kB hugepages reported on node 1 00:23:19.438 [2024-04-16 12:51:18.468969] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:23:19.438 [2024-04-16 12:51:18.469035] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:23:31.647 00:23:31.647 filename0: (groupid=0, jobs=1): err= 0: pid=1285319: Tue Apr 16 12:51:28 2024 00:23:31.647 read: IOPS=61, BW=245KiB/s (251kB/s)(2480KiB/10106msec) 00:23:31.647 slat (usec): min=4, max=496, avg=13.24, stdev=20.34 00:23:31.647 clat (msec): min=168, max=414, avg=259.51, stdev=32.86 00:23:31.647 lat (msec): min=168, max=414, avg=259.52, stdev=32.86 00:23:31.647 clat percentiles (msec): 00:23:31.647 | 1.00th=[ 182], 5.00th=[ 209], 10.00th=[ 213], 20.00th=[ 234], 00:23:31.647 | 30.00th=[ 247], 40.00th=[ 257], 50.00th=[ 268], 60.00th=[ 271], 00:23:31.648 | 70.00th=[ 275], 80.00th=[ 275], 90.00th=[ 296], 95.00th=[ 305], 00:23:31.648 | 99.00th=[ 359], 99.50th=[ 414], 99.90th=[ 414], 99.95th=[ 414], 00:23:31.648 | 99.99th=[ 414] 00:23:31.648 bw ( KiB/s): min= 128, max= 304, per=4.43%, avg=241.60, stdev=39.50, samples=20 00:23:31.648 iops : min= 32, max= 76, avg=60.40, stdev= 9.88, samples=20 00:23:31.648 lat (msec) : 250=30.65%, 500=69.35% 00:23:31.648 cpu : usr=97.51%, sys=1.66%, ctx=38, majf=0, minf=39 00:23:31.648 IO depths : 1=0.5%, 2=1.6%, 4=9.5%, 8=76.3%, 16=12.1%, 32=0.0%, >=64=0.0% 00:23:31.648 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:31.648 complete : 0=0.0%, 4=89.7%, 8=4.9%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:31.648 issued rwts: total=620,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:31.648 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:31.648 filename0: (groupid=0, jobs=1): err= 0: pid=1285320: Tue Apr 16 12:51:28 2024 00:23:31.648 read: IOPS=68, BW=272KiB/s (279kB/s)(2752KiB/10110msec) 00:23:31.648 slat (nsec): min=4253, max=30918, avg=10306.47, stdev=2990.39 00:23:31.648 clat (msec): min=20, max=279, avg=234.56, stdev=53.60 00:23:31.648 lat (msec): min=20, max=279, avg=234.57, stdev=53.60 00:23:31.648 clat percentiles (msec): 00:23:31.648 | 1.00th=[ 21], 5.00th=[ 174], 10.00th=[ 186], 20.00th=[ 197], 00:23:31.648 | 30.00th=[ 222], 40.00th=[ 243], 50.00th=[ 253], 60.00th=[ 266], 00:23:31.648 | 70.00th=[ 271], 80.00th=[ 275], 90.00th=[ 275], 95.00th=[ 279], 00:23:31.648 | 99.00th=[ 279], 99.50th=[ 279], 99.90th=[ 279], 99.95th=[ 279], 00:23:31.648 | 99.99th=[ 279] 00:23:31.648 bw ( KiB/s): min= 144, max= 496, per=4.92%, avg=268.80, stdev=83.96, samples=20 00:23:31.648 iops : min= 36, max= 124, avg=67.20, stdev=20.99, samples=20 
00:23:31.648 lat (msec) : 50=2.33%, 100=2.33%, 250=44.04%, 500=51.31% 00:23:31.648 cpu : usr=98.42%, sys=1.20%, ctx=16, majf=0, minf=48 00:23:31.648 IO depths : 1=0.4%, 2=6.7%, 4=25.0%, 8=55.8%, 16=12.1%, 32=0.0%, >=64=0.0% 00:23:31.648 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:31.648 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:31.648 issued rwts: total=688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:31.648 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:31.648 filename0: (groupid=0, jobs=1): err= 0: pid=1285321: Tue Apr 16 12:51:28 2024 00:23:31.648 read: IOPS=57, BW=228KiB/s (234kB/s)(2304KiB/10093msec) 00:23:31.648 slat (usec): min=8, max=109, avg=22.49, stdev=21.30 00:23:31.648 clat (msec): min=201, max=438, avg=279.58, stdev=39.52 00:23:31.648 lat (msec): min=201, max=438, avg=279.60, stdev=39.52 00:23:31.648 clat percentiles (msec): 00:23:31.648 | 1.00th=[ 203], 5.00th=[ 232], 10.00th=[ 241], 20.00th=[ 253], 00:23:31.648 | 30.00th=[ 266], 40.00th=[ 271], 50.00th=[ 275], 60.00th=[ 275], 00:23:31.648 | 70.00th=[ 279], 80.00th=[ 292], 90.00th=[ 342], 95.00th=[ 368], 00:23:31.648 | 99.00th=[ 388], 99.50th=[ 414], 99.90th=[ 439], 99.95th=[ 439], 00:23:31.648 | 99.99th=[ 439] 00:23:31.648 bw ( KiB/s): min= 128, max= 256, per=4.10%, avg=224.00, stdev=56.87, samples=20 00:23:31.648 iops : min= 32, max= 64, avg=56.00, stdev=14.22, samples=20 00:23:31.648 lat (msec) : 250=17.36%, 500=82.64% 00:23:31.648 cpu : usr=98.37%, sys=1.24%, ctx=15, majf=0, minf=38 00:23:31.648 IO depths : 1=5.7%, 2=12.0%, 4=25.0%, 8=50.5%, 16=6.8%, 32=0.0%, >=64=0.0% 00:23:31.648 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:31.648 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:31.648 issued rwts: total=576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:31.648 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:31.648 filename0: (groupid=0, jobs=1): err= 0: pid=1285322: Tue Apr 16 12:51:28 2024 00:23:31.648 read: IOPS=53, BW=216KiB/s (221kB/s)(2176KiB/10077msec) 00:23:31.648 slat (nsec): min=8253, max=93516, avg=26187.15, stdev=26144.96 00:23:31.648 clat (msec): min=161, max=492, avg=296.12, stdev=51.78 00:23:31.648 lat (msec): min=161, max=492, avg=296.15, stdev=51.79 00:23:31.648 clat percentiles (msec): 00:23:31.648 | 1.00th=[ 161], 5.00th=[ 243], 10.00th=[ 257], 20.00th=[ 264], 00:23:31.648 | 30.00th=[ 271], 40.00th=[ 275], 50.00th=[ 279], 60.00th=[ 279], 00:23:31.648 | 70.00th=[ 317], 80.00th=[ 342], 90.00th=[ 380], 95.00th=[ 393], 00:23:31.648 | 99.00th=[ 418], 99.50th=[ 477], 99.90th=[ 493], 99.95th=[ 493], 00:23:31.648 | 99.99th=[ 493] 00:23:31.648 bw ( KiB/s): min= 128, max= 256, per=3.88%, avg=211.20, stdev=57.95, samples=20 00:23:31.648 iops : min= 32, max= 64, avg=52.80, stdev=14.49, samples=20 00:23:31.648 lat (msec) : 250=6.62%, 500=93.38% 00:23:31.648 cpu : usr=98.12%, sys=1.33%, ctx=25, majf=0, minf=34 00:23:31.648 IO depths : 1=4.8%, 2=11.0%, 4=25.0%, 8=51.5%, 16=7.7%, 32=0.0%, >=64=0.0% 00:23:31.648 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:31.648 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:31.648 issued rwts: total=544,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:31.648 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:31.648 filename0: (groupid=0, jobs=1): err= 0: pid=1285323: Tue Apr 16 12:51:28 2024 00:23:31.648 read: IOPS=64, BW=259KiB/s 
(265kB/s)(2616KiB/10110msec) 00:23:31.648 slat (nsec): min=7348, max=93748, avg=14386.74, stdev=14052.65 00:23:31.648 clat (msec): min=162, max=307, avg=246.96, stdev=34.31 00:23:31.648 lat (msec): min=162, max=307, avg=246.97, stdev=34.32 00:23:31.648 clat percentiles (msec): 00:23:31.648 | 1.00th=[ 163], 5.00th=[ 178], 10.00th=[ 203], 20.00th=[ 213], 00:23:31.648 | 30.00th=[ 234], 40.00th=[ 251], 50.00th=[ 257], 60.00th=[ 262], 00:23:31.648 | 70.00th=[ 271], 80.00th=[ 275], 90.00th=[ 279], 95.00th=[ 292], 00:23:31.648 | 99.00th=[ 309], 99.50th=[ 309], 99.90th=[ 309], 99.95th=[ 309], 00:23:31.648 | 99.99th=[ 309] 00:23:31.648 bw ( KiB/s): min= 128, max= 384, per=4.68%, avg=255.20, stdev=57.21, samples=20 00:23:31.648 iops : min= 32, max= 96, avg=63.80, stdev=14.30, samples=20 00:23:31.648 lat (msec) : 250=38.84%, 500=61.16% 00:23:31.648 cpu : usr=97.62%, sys=1.62%, ctx=120, majf=0, minf=65 00:23:31.648 IO depths : 1=2.1%, 2=8.4%, 4=25.1%, 8=54.1%, 16=10.2%, 32=0.0%, >=64=0.0% 00:23:31.648 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:31.648 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:31.648 issued rwts: total=654,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:31.648 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:31.648 filename0: (groupid=0, jobs=1): err= 0: pid=1285324: Tue Apr 16 12:51:28 2024 00:23:31.648 read: IOPS=55, BW=222KiB/s (228kB/s)(2240KiB/10082msec) 00:23:31.648 slat (nsec): min=6217, max=72278, avg=16961.53, stdev=10525.18 00:23:31.648 clat (msec): min=189, max=450, avg=287.32, stdev=40.17 00:23:31.648 lat (msec): min=189, max=450, avg=287.33, stdev=40.17 00:23:31.648 clat percentiles (msec): 00:23:31.648 | 1.00th=[ 203], 5.00th=[ 241], 10.00th=[ 255], 20.00th=[ 262], 00:23:31.648 | 30.00th=[ 268], 40.00th=[ 271], 50.00th=[ 275], 60.00th=[ 279], 00:23:31.648 | 70.00th=[ 296], 80.00th=[ 330], 90.00th=[ 351], 95.00th=[ 359], 00:23:31.648 | 99.00th=[ 451], 99.50th=[ 451], 99.90th=[ 451], 99.95th=[ 451], 00:23:31.648 | 99.99th=[ 451] 00:23:31.648 bw ( KiB/s): min= 128, max= 256, per=3.99%, avg=217.60, stdev=55.04, samples=20 00:23:31.648 iops : min= 32, max= 64, avg=54.40, stdev=13.76, samples=20 00:23:31.648 lat (msec) : 250=7.14%, 500=92.86% 00:23:31.648 cpu : usr=98.09%, sys=1.37%, ctx=18, majf=0, minf=40 00:23:31.648 IO depths : 1=2.7%, 2=8.0%, 4=22.3%, 8=57.1%, 16=9.8%, 32=0.0%, >=64=0.0% 00:23:31.648 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:31.648 complete : 0=0.0%, 4=93.5%, 8=0.8%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:31.648 issued rwts: total=560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:31.648 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:31.648 filename0: (groupid=0, jobs=1): err= 0: pid=1285325: Tue Apr 16 12:51:28 2024 00:23:31.648 read: IOPS=54, BW=218KiB/s (223kB/s)(2200KiB/10099msec) 00:23:31.648 slat (usec): min=6, max=112, avg=35.64, stdev=30.65 00:23:31.648 clat (msec): min=186, max=518, avg=293.02, stdev=42.55 00:23:31.648 lat (msec): min=186, max=518, avg=293.06, stdev=42.57 00:23:31.648 clat percentiles (msec): 00:23:31.648 | 1.00th=[ 192], 5.00th=[ 255], 10.00th=[ 264], 20.00th=[ 271], 00:23:31.648 | 30.00th=[ 271], 40.00th=[ 275], 50.00th=[ 275], 60.00th=[ 292], 00:23:31.648 | 70.00th=[ 305], 80.00th=[ 317], 90.00th=[ 342], 95.00th=[ 384], 00:23:31.648 | 99.00th=[ 414], 99.50th=[ 435], 99.90th=[ 518], 99.95th=[ 518], 00:23:31.648 | 99.99th=[ 518] 00:23:31.648 bw ( KiB/s): min= 128, max= 256, per=3.91%, avg=213.60, 
stdev=53.26, samples=20 00:23:31.648 iops : min= 32, max= 64, avg=53.40, stdev=13.32, samples=20 00:23:31.648 lat (msec) : 250=4.36%, 500=95.27%, 750=0.36% 00:23:31.648 cpu : usr=97.83%, sys=1.55%, ctx=26, majf=0, minf=33 00:23:31.648 IO depths : 1=1.8%, 2=4.9%, 4=15.5%, 8=67.1%, 16=10.7%, 32=0.0%, >=64=0.0% 00:23:31.648 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:31.648 complete : 0=0.0%, 4=91.3%, 8=3.1%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:31.648 issued rwts: total=550,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:31.648 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:31.648 filename0: (groupid=0, jobs=1): err= 0: pid=1285326: Tue Apr 16 12:51:28 2024 00:23:31.648 read: IOPS=58, BW=233KiB/s (238kB/s)(2352KiB/10099msec) 00:23:31.648 slat (nsec): min=8064, max=99868, avg=22875.67, stdev=24656.89 00:23:31.648 clat (msec): min=195, max=476, avg=273.82, stdev=33.09 00:23:31.648 lat (msec): min=195, max=476, avg=273.84, stdev=33.10 00:23:31.648 clat percentiles (msec): 00:23:31.648 | 1.00th=[ 197], 5.00th=[ 220], 10.00th=[ 243], 20.00th=[ 257], 00:23:31.648 | 30.00th=[ 264], 40.00th=[ 271], 50.00th=[ 271], 60.00th=[ 275], 00:23:31.648 | 70.00th=[ 279], 80.00th=[ 292], 90.00th=[ 309], 95.00th=[ 317], 00:23:31.648 | 99.00th=[ 430], 99.50th=[ 430], 99.90th=[ 477], 99.95th=[ 477], 00:23:31.648 | 99.99th=[ 477] 00:23:31.648 bw ( KiB/s): min= 128, max= 256, per=4.19%, avg=228.80, stdev=44.38, samples=20 00:23:31.648 iops : min= 32, max= 64, avg=57.20, stdev=11.10, samples=20 00:23:31.648 lat (msec) : 250=15.48%, 500=84.52% 00:23:31.648 cpu : usr=98.47%, sys=1.10%, ctx=17, majf=0, minf=35 00:23:31.648 IO depths : 1=1.9%, 2=4.3%, 4=13.3%, 8=69.9%, 16=10.7%, 32=0.0%, >=64=0.0% 00:23:31.648 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:31.648 complete : 0=0.0%, 4=90.6%, 8=3.8%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:31.648 issued rwts: total=588,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:31.648 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:31.648 filename1: (groupid=0, jobs=1): err= 0: pid=1285327: Tue Apr 16 12:51:28 2024 00:23:31.648 read: IOPS=44, BW=178KiB/s (182kB/s)(1792KiB/10077msec) 00:23:31.648 slat (nsec): min=7987, max=53011, avg=15571.66, stdev=8934.04 00:23:31.649 clat (msec): min=161, max=546, avg=359.74, stdev=80.93 00:23:31.649 lat (msec): min=161, max=546, avg=359.76, stdev=80.93 00:23:31.649 clat percentiles (msec): 00:23:31.649 | 1.00th=[ 163], 5.00th=[ 174], 10.00th=[ 271], 20.00th=[ 296], 00:23:31.649 | 30.00th=[ 334], 40.00th=[ 342], 50.00th=[ 380], 60.00th=[ 397], 00:23:31.649 | 70.00th=[ 414], 80.00th=[ 422], 90.00th=[ 430], 95.00th=[ 464], 00:23:31.649 | 99.00th=[ 542], 99.50th=[ 542], 99.90th=[ 550], 99.95th=[ 550], 00:23:31.649 | 99.99th=[ 550] 00:23:31.649 bw ( KiB/s): min= 128, max= 256, per=3.16%, avg=172.80, stdev=59.55, samples=20 00:23:31.649 iops : min= 32, max= 64, avg=43.20, stdev=14.89, samples=20 00:23:31.649 lat (msec) : 250=8.48%, 500=88.84%, 750=2.68% 00:23:31.649 cpu : usr=98.39%, sys=1.23%, ctx=17, majf=0, minf=43 00:23:31.649 IO depths : 1=3.3%, 2=9.6%, 4=25.0%, 8=52.9%, 16=9.2%, 32=0.0%, >=64=0.0% 00:23:31.649 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:31.649 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:31.649 issued rwts: total=448,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:31.649 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:31.649 filename1: (groupid=0, jobs=1): 
err= 0: pid=1285328: Tue Apr 16 12:51:28 2024 00:23:31.649 read: IOPS=65, BW=264KiB/s (270kB/s)(2664KiB/10109msec) 00:23:31.649 slat (usec): min=7, max=151, avg=19.05, stdev=23.51 00:23:31.649 clat (msec): min=44, max=513, avg=242.46, stdev=67.65 00:23:31.649 lat (msec): min=44, max=513, avg=242.48, stdev=67.67 00:23:31.649 clat percentiles (msec): 00:23:31.649 | 1.00th=[ 45], 5.00th=[ 163], 10.00th=[ 174], 20.00th=[ 194], 00:23:31.649 | 30.00th=[ 220], 40.00th=[ 243], 50.00th=[ 257], 60.00th=[ 264], 00:23:31.649 | 70.00th=[ 271], 80.00th=[ 279], 90.00th=[ 279], 95.00th=[ 380], 00:23:31.649 | 99.00th=[ 409], 99.50th=[ 418], 99.90th=[ 514], 99.95th=[ 514], 00:23:31.649 | 99.99th=[ 514] 00:23:31.649 bw ( KiB/s): min= 144, max= 496, per=4.76%, avg=260.00, stdev=75.56, samples=20 00:23:31.649 iops : min= 36, max= 124, avg=65.00, stdev=18.89, samples=20 00:23:31.649 lat (msec) : 50=2.40%, 100=2.10%, 250=42.64%, 500=52.55%, 750=0.30% 00:23:31.649 cpu : usr=98.45%, sys=1.14%, ctx=18, majf=0, minf=57 00:23:31.649 IO depths : 1=0.5%, 2=5.3%, 4=20.6%, 8=61.7%, 16=12.0%, 32=0.0%, >=64=0.0% 00:23:31.649 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:31.649 complete : 0=0.0%, 4=93.0%, 8=1.5%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:31.649 issued rwts: total=666,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:31.649 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:31.649 filename1: (groupid=0, jobs=1): err= 0: pid=1285329: Tue Apr 16 12:51:28 2024 00:23:31.649 read: IOPS=56, BW=227KiB/s (232kB/s)(2288KiB/10089msec) 00:23:31.649 slat (nsec): min=8278, max=95478, avg=21702.32, stdev=20834.06 00:23:31.649 clat (msec): min=183, max=423, avg=281.38, stdev=42.54 00:23:31.649 lat (msec): min=183, max=423, avg=281.40, stdev=42.55 00:23:31.649 clat percentiles (msec): 00:23:31.649 | 1.00th=[ 184], 5.00th=[ 222], 10.00th=[ 236], 20.00th=[ 257], 00:23:31.649 | 30.00th=[ 264], 40.00th=[ 271], 50.00th=[ 275], 60.00th=[ 275], 00:23:31.649 | 70.00th=[ 292], 80.00th=[ 309], 90.00th=[ 355], 95.00th=[ 380], 00:23:31.649 | 99.00th=[ 388], 99.50th=[ 426], 99.90th=[ 426], 99.95th=[ 426], 00:23:31.649 | 99.99th=[ 426] 00:23:31.649 bw ( KiB/s): min= 128, max= 256, per=4.08%, avg=222.40, stdev=50.57, samples=20 00:23:31.649 iops : min= 32, max= 64, avg=55.60, stdev=12.64, samples=20 00:23:31.649 lat (msec) : 250=15.38%, 500=84.62% 00:23:31.649 cpu : usr=98.25%, sys=1.33%, ctx=31, majf=0, minf=41 00:23:31.649 IO depths : 1=3.1%, 2=7.3%, 4=18.7%, 8=61.4%, 16=9.4%, 32=0.0%, >=64=0.0% 00:23:31.649 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:31.649 complete : 0=0.0%, 4=92.2%, 8=2.2%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:31.649 issued rwts: total=572,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:31.649 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:31.649 filename1: (groupid=0, jobs=1): err= 0: pid=1285330: Tue Apr 16 12:51:28 2024 00:23:31.649 read: IOPS=59, BW=237KiB/s (242kB/s)(2392KiB/10109msec) 00:23:31.649 slat (nsec): min=6526, max=45852, avg=20380.33, stdev=5346.39 00:23:31.649 clat (msec): min=177, max=474, avg=269.95, stdev=41.54 00:23:31.649 lat (msec): min=177, max=474, avg=269.97, stdev=41.54 00:23:31.649 clat percentiles (msec): 00:23:31.649 | 1.00th=[ 178], 5.00th=[ 194], 10.00th=[ 220], 20.00th=[ 249], 00:23:31.649 | 30.00th=[ 264], 40.00th=[ 268], 50.00th=[ 271], 60.00th=[ 275], 00:23:31.649 | 70.00th=[ 275], 80.00th=[ 296], 90.00th=[ 330], 95.00th=[ 351], 00:23:31.649 | 99.00th=[ 380], 99.50th=[ 380], 99.90th=[ 477], 
99.95th=[ 477], 00:23:31.649 | 99.99th=[ 477] 00:23:31.649 bw ( KiB/s): min= 128, max= 272, per=4.26%, avg=232.80, stdev=44.80, samples=20 00:23:31.649 iops : min= 32, max= 68, avg=58.20, stdev=11.20, samples=20 00:23:31.649 lat (msec) : 250=22.41%, 500=77.59% 00:23:31.649 cpu : usr=97.96%, sys=1.47%, ctx=15, majf=0, minf=50 00:23:31.649 IO depths : 1=1.2%, 2=4.5%, 4=16.2%, 8=66.7%, 16=11.4%, 32=0.0%, >=64=0.0% 00:23:31.649 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:31.649 complete : 0=0.0%, 4=91.6%, 8=2.8%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:31.649 issued rwts: total=598,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:31.649 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:31.649 filename1: (groupid=0, jobs=1): err= 0: pid=1285331: Tue Apr 16 12:51:28 2024 00:23:31.649 read: IOPS=56, BW=227KiB/s (232kB/s)(2288KiB/10098msec) 00:23:31.649 slat (usec): min=6, max=103, avg=26.71, stdev=25.70 00:23:31.649 clat (msec): min=161, max=454, avg=281.23, stdev=43.13 00:23:31.649 lat (msec): min=161, max=454, avg=281.26, stdev=43.14 00:23:31.649 clat percentiles (msec): 00:23:31.649 | 1.00th=[ 163], 5.00th=[ 222], 10.00th=[ 234], 20.00th=[ 259], 00:23:31.649 | 30.00th=[ 266], 40.00th=[ 271], 50.00th=[ 275], 60.00th=[ 279], 00:23:31.649 | 70.00th=[ 296], 80.00th=[ 309], 90.00th=[ 330], 95.00th=[ 368], 00:23:31.649 | 99.00th=[ 430], 99.50th=[ 456], 99.90th=[ 456], 99.95th=[ 456], 00:23:31.649 | 99.99th=[ 456] 00:23:31.649 bw ( KiB/s): min= 128, max= 256, per=4.08%, avg=222.40, stdev=48.39, samples=20 00:23:31.649 iops : min= 32, max= 64, avg=55.60, stdev=12.10, samples=20 00:23:31.649 lat (msec) : 250=15.38%, 500=84.62% 00:23:31.649 cpu : usr=98.37%, sys=1.24%, ctx=15, majf=0, minf=32 00:23:31.649 IO depths : 1=2.4%, 2=6.5%, 4=18.7%, 8=62.1%, 16=10.3%, 32=0.0%, >=64=0.0% 00:23:31.649 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:31.649 complete : 0=0.0%, 4=92.4%, 8=2.2%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:31.649 issued rwts: total=572,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:31.649 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:31.649 filename1: (groupid=0, jobs=1): err= 0: pid=1285332: Tue Apr 16 12:51:28 2024 00:23:31.649 read: IOPS=55, BW=222KiB/s (228kB/s)(2240KiB/10079msec) 00:23:31.649 slat (usec): min=8, max=102, avg=18.29, stdev=12.64 00:23:31.649 clat (msec): min=186, max=428, avg=287.25, stdev=40.02 00:23:31.649 lat (msec): min=186, max=428, avg=287.27, stdev=40.02 00:23:31.649 clat percentiles (msec): 00:23:31.649 | 1.00th=[ 203], 5.00th=[ 241], 10.00th=[ 251], 20.00th=[ 264], 00:23:31.649 | 30.00th=[ 268], 40.00th=[ 271], 50.00th=[ 275], 60.00th=[ 279], 00:23:31.649 | 70.00th=[ 292], 80.00th=[ 330], 90.00th=[ 347], 95.00th=[ 359], 00:23:31.649 | 99.00th=[ 414], 99.50th=[ 418], 99.90th=[ 430], 99.95th=[ 430], 00:23:31.649 | 99.99th=[ 430] 00:23:31.649 bw ( KiB/s): min= 128, max= 256, per=3.99%, avg=217.60, stdev=55.28, samples=20 00:23:31.649 iops : min= 32, max= 64, avg=54.40, stdev=13.82, samples=20 00:23:31.649 lat (msec) : 250=7.86%, 500=92.14% 00:23:31.649 cpu : usr=98.50%, sys=1.11%, ctx=17, majf=0, minf=33 00:23:31.649 IO depths : 1=1.8%, 2=8.0%, 4=25.0%, 8=54.5%, 16=10.7%, 32=0.0%, >=64=0.0% 00:23:31.649 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:31.649 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:31.649 issued rwts: total=560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:31.649 latency : target=0, window=0, 
percentile=100.00%, depth=16 00:23:31.649 filename1: (groupid=0, jobs=1): err= 0: pid=1285333: Tue Apr 16 12:51:28 2024 00:23:31.649 read: IOPS=42, BW=172KiB/s (176kB/s)(1728KiB/10054msec) 00:23:31.649 slat (usec): min=8, max=104, avg=48.72, stdev=31.18 00:23:31.649 clat (msec): min=220, max=544, avg=371.94, stdev=55.26 00:23:31.649 lat (msec): min=220, max=544, avg=371.98, stdev=55.24 00:23:31.649 clat percentiles (msec): 00:23:31.649 | 1.00th=[ 275], 5.00th=[ 275], 10.00th=[ 292], 20.00th=[ 309], 00:23:31.649 | 30.00th=[ 342], 40.00th=[ 359], 50.00th=[ 393], 60.00th=[ 401], 00:23:31.649 | 70.00th=[ 401], 80.00th=[ 414], 90.00th=[ 422], 95.00th=[ 464], 00:23:31.649 | 99.00th=[ 510], 99.50th=[ 542], 99.90th=[ 542], 99.95th=[ 542], 00:23:31.649 | 99.99th=[ 542] 00:23:31.649 bw ( KiB/s): min= 128, max= 256, per=3.05%, avg=166.40, stdev=55.28, samples=20 00:23:31.649 iops : min= 32, max= 64, avg=41.60, stdev=13.82, samples=20 00:23:31.649 lat (msec) : 250=0.46%, 500=97.69%, 750=1.85% 00:23:31.649 cpu : usr=98.58%, sys=1.02%, ctx=14, majf=0, minf=41 00:23:31.649 IO depths : 1=4.9%, 2=11.1%, 4=25.0%, 8=51.4%, 16=7.6%, 32=0.0%, >=64=0.0% 00:23:31.649 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:31.649 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:31.649 issued rwts: total=432,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:31.649 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:31.649 filename1: (groupid=0, jobs=1): err= 0: pid=1285334: Tue Apr 16 12:51:28 2024 00:23:31.649 read: IOPS=57, BW=231KiB/s (237kB/s)(2336KiB/10099msec) 00:23:31.649 slat (usec): min=8, max=110, avg=23.87, stdev=24.44 00:23:31.649 clat (msec): min=162, max=477, avg=275.45, stdev=44.49 00:23:31.649 lat (msec): min=162, max=477, avg=275.48, stdev=44.50 00:23:31.649 clat percentiles (msec): 00:23:31.649 | 1.00th=[ 163], 5.00th=[ 209], 10.00th=[ 215], 20.00th=[ 253], 00:23:31.649 | 30.00th=[ 264], 40.00th=[ 271], 50.00th=[ 271], 60.00th=[ 275], 00:23:31.649 | 70.00th=[ 279], 80.00th=[ 296], 90.00th=[ 338], 95.00th=[ 363], 00:23:31.649 | 99.00th=[ 418], 99.50th=[ 418], 99.90th=[ 477], 99.95th=[ 477], 00:23:31.649 | 99.99th=[ 477] 00:23:31.649 bw ( KiB/s): min= 128, max= 272, per=4.17%, avg=227.20, stdev=47.18, samples=20 00:23:31.649 iops : min= 32, max= 68, avg=56.80, stdev=11.79, samples=20 00:23:31.649 lat (msec) : 250=19.35%, 500=80.65% 00:23:31.649 cpu : usr=98.08%, sys=1.41%, ctx=33, majf=0, minf=55 00:23:31.649 IO depths : 1=1.2%, 2=3.6%, 4=13.2%, 8=70.5%, 16=11.5%, 32=0.0%, >=64=0.0% 00:23:31.649 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:31.649 complete : 0=0.0%, 4=90.6%, 8=4.0%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:31.649 issued rwts: total=584,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:31.650 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:31.650 filename2: (groupid=0, jobs=1): err= 0: pid=1285335: Tue Apr 16 12:51:28 2024 00:23:31.650 read: IOPS=66, BW=265KiB/s (271kB/s)(2680KiB/10110msec) 00:23:31.650 slat (nsec): min=7295, max=32706, avg=10346.56, stdev=2924.82 00:23:31.650 clat (msec): min=162, max=279, avg=241.08, stdev=33.89 00:23:31.650 lat (msec): min=162, max=279, avg=241.09, stdev=33.89 00:23:31.650 clat percentiles (msec): 00:23:31.650 | 1.00th=[ 163], 5.00th=[ 174], 10.00th=[ 199], 20.00th=[ 209], 00:23:31.650 | 30.00th=[ 218], 40.00th=[ 236], 50.00th=[ 257], 60.00th=[ 262], 00:23:31.650 | 70.00th=[ 266], 80.00th=[ 275], 90.00th=[ 279], 95.00th=[ 279], 00:23:31.650 | 
99.00th=[ 279], 99.50th=[ 279], 99.90th=[ 279], 99.95th=[ 279], 00:23:31.650 | 99.99th=[ 279] 00:23:31.650 bw ( KiB/s): min= 144, max= 384, per=4.79%, avg=261.60, stdev=48.77, samples=20 00:23:31.650 iops : min= 36, max= 96, avg=65.40, stdev=12.19, samples=20 00:23:31.650 lat (msec) : 250=45.07%, 500=54.93% 00:23:31.650 cpu : usr=98.37%, sys=1.23%, ctx=17, majf=0, minf=58 00:23:31.650 IO depths : 1=0.6%, 2=6.9%, 4=25.1%, 8=55.7%, 16=11.8%, 32=0.0%, >=64=0.0% 00:23:31.650 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:31.650 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:31.650 issued rwts: total=670,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:31.650 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:31.650 filename2: (groupid=0, jobs=1): err= 0: pid=1285336: Tue Apr 16 12:51:28 2024 00:23:31.650 read: IOPS=60, BW=244KiB/s (250kB/s)(2456KiB/10078msec) 00:23:31.650 slat (nsec): min=8046, max=94536, avg=15757.95, stdev=16437.28 00:23:31.650 clat (msec): min=180, max=347, avg=261.78, stdev=34.96 00:23:31.650 lat (msec): min=180, max=347, avg=261.80, stdev=34.97 00:23:31.650 clat percentiles (msec): 00:23:31.650 | 1.00th=[ 182], 5.00th=[ 203], 10.00th=[ 215], 20.00th=[ 236], 00:23:31.650 | 30.00th=[ 255], 40.00th=[ 262], 50.00th=[ 268], 60.00th=[ 271], 00:23:31.650 | 70.00th=[ 275], 80.00th=[ 279], 90.00th=[ 292], 95.00th=[ 342], 00:23:31.650 | 99.00th=[ 347], 99.50th=[ 347], 99.90th=[ 347], 99.95th=[ 347], 00:23:31.650 | 99.99th=[ 347] 00:23:31.650 bw ( KiB/s): min= 128, max= 336, per=4.43%, avg=241.60, stdev=42.78, samples=20 00:23:31.650 iops : min= 32, max= 84, avg=60.40, stdev=10.69, samples=20 00:23:31.650 lat (msec) : 250=26.38%, 500=73.62% 00:23:31.650 cpu : usr=98.46%, sys=1.15%, ctx=19, majf=0, minf=45 00:23:31.650 IO depths : 1=1.1%, 2=2.8%, 4=11.1%, 8=73.6%, 16=11.4%, 32=0.0%, >=64=0.0% 00:23:31.650 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:31.650 complete : 0=0.0%, 4=90.1%, 8=4.4%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:31.650 issued rwts: total=614,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:31.650 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:31.650 filename2: (groupid=0, jobs=1): err= 0: pid=1285337: Tue Apr 16 12:51:28 2024 00:23:31.650 read: IOPS=60, BW=241KiB/s (247kB/s)(2440KiB/10107msec) 00:23:31.650 slat (nsec): min=5713, max=47012, avg=12833.81, stdev=6794.07 00:23:31.650 clat (msec): min=177, max=377, avg=263.89, stdev=34.12 00:23:31.650 lat (msec): min=177, max=377, avg=263.90, stdev=34.12 00:23:31.650 clat percentiles (msec): 00:23:31.650 | 1.00th=[ 178], 5.00th=[ 194], 10.00th=[ 220], 20.00th=[ 239], 00:23:31.650 | 30.00th=[ 255], 40.00th=[ 268], 50.00th=[ 271], 60.00th=[ 271], 00:23:31.650 | 70.00th=[ 275], 80.00th=[ 279], 90.00th=[ 305], 95.00th=[ 309], 00:23:31.650 | 99.00th=[ 376], 99.50th=[ 376], 99.90th=[ 376], 99.95th=[ 376], 00:23:31.650 | 99.99th=[ 376] 00:23:31.650 bw ( KiB/s): min= 128, max= 256, per=4.35%, avg=237.60, stdev=36.81, samples=20 00:23:31.650 iops : min= 32, max= 64, avg=59.40, stdev= 9.20, samples=20 00:23:31.650 lat (msec) : 250=25.74%, 500=74.26% 00:23:31.650 cpu : usr=98.25%, sys=1.33%, ctx=21, majf=0, minf=79 00:23:31.650 IO depths : 1=1.0%, 2=3.0%, 4=12.0%, 8=72.5%, 16=11.6%, 32=0.0%, >=64=0.0% 00:23:31.650 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:31.650 complete : 0=0.0%, 4=90.3%, 8=4.3%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:31.650 issued rwts: total=610,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:23:31.650 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:31.650 filename2: (groupid=0, jobs=1): err= 0: pid=1285338: Tue Apr 16 12:51:28 2024 00:23:31.650 read: IOPS=59, BW=240KiB/s (246kB/s)(2424KiB/10106msec) 00:23:31.650 slat (nsec): min=5232, max=78034, avg=17249.25, stdev=9022.52 00:23:31.650 clat (msec): min=162, max=392, avg=266.51, stdev=38.88 00:23:31.650 lat (msec): min=162, max=392, avg=266.52, stdev=38.88 00:23:31.650 clat percentiles (msec): 00:23:31.650 | 1.00th=[ 163], 5.00th=[ 203], 10.00th=[ 209], 20.00th=[ 243], 00:23:31.650 | 30.00th=[ 257], 40.00th=[ 262], 50.00th=[ 271], 60.00th=[ 275], 00:23:31.650 | 70.00th=[ 279], 80.00th=[ 279], 90.00th=[ 309], 95.00th=[ 338], 00:23:31.650 | 99.00th=[ 368], 99.50th=[ 384], 99.90th=[ 393], 99.95th=[ 393], 00:23:31.650 | 99.99th=[ 393] 00:23:31.650 bw ( KiB/s): min= 128, max= 256, per=4.32%, avg=236.00, stdev=42.45, samples=20 00:23:31.650 iops : min= 32, max= 64, avg=59.00, stdev=10.61, samples=20 00:23:31.650 lat (msec) : 250=23.10%, 500=76.90% 00:23:31.650 cpu : usr=97.80%, sys=1.30%, ctx=38, majf=0, minf=46 00:23:31.650 IO depths : 1=2.1%, 2=8.4%, 4=25.1%, 8=54.1%, 16=10.2%, 32=0.0%, >=64=0.0% 00:23:31.650 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:31.650 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:31.650 issued rwts: total=606,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:31.650 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:31.650 filename2: (groupid=0, jobs=1): err= 0: pid=1285339: Tue Apr 16 12:51:28 2024 00:23:31.650 read: IOPS=60, BW=242KiB/s (247kB/s)(2440KiB/10099msec) 00:23:31.650 slat (nsec): min=7986, max=61791, avg=12736.10, stdev=6567.19 00:23:31.650 clat (msec): min=161, max=442, avg=264.35, stdev=41.34 00:23:31.650 lat (msec): min=161, max=442, avg=264.37, stdev=41.34 00:23:31.650 clat percentiles (msec): 00:23:31.650 | 1.00th=[ 163], 5.00th=[ 199], 10.00th=[ 209], 20.00th=[ 239], 00:23:31.650 | 30.00th=[ 255], 40.00th=[ 264], 50.00th=[ 268], 60.00th=[ 275], 00:23:31.650 | 70.00th=[ 279], 80.00th=[ 279], 90.00th=[ 305], 95.00th=[ 317], 00:23:31.650 | 99.00th=[ 435], 99.50th=[ 443], 99.90th=[ 443], 99.95th=[ 443], 00:23:31.650 | 99.99th=[ 443] 00:23:31.650 bw ( KiB/s): min= 128, max= 272, per=4.35%, avg=237.60, stdev=38.24, samples=20 00:23:31.650 iops : min= 32, max= 68, avg=59.40, stdev= 9.56, samples=20 00:23:31.650 lat (msec) : 250=28.20%, 500=71.80% 00:23:31.650 cpu : usr=98.45%, sys=1.13%, ctx=23, majf=0, minf=50 00:23:31.650 IO depths : 1=1.5%, 2=3.6%, 4=16.7%, 8=67.0%, 16=11.1%, 32=0.0%, >=64=0.0% 00:23:31.650 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:31.650 complete : 0=0.0%, 4=92.5%, 8=2.1%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:31.650 issued rwts: total=610,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:31.650 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:31.650 filename2: (groupid=0, jobs=1): err= 0: pid=1285340: Tue Apr 16 12:51:28 2024 00:23:31.650 read: IOPS=55, BW=222KiB/s (227kB/s)(2240KiB/10089msec) 00:23:31.650 slat (usec): min=5, max=137, avg=16.55, stdev=10.91 00:23:31.650 clat (msec): min=186, max=417, avg=287.53, stdev=38.94 00:23:31.650 lat (msec): min=186, max=417, avg=287.54, stdev=38.95 00:23:31.650 clat percentiles (msec): 00:23:31.650 | 1.00th=[ 205], 5.00th=[ 241], 10.00th=[ 249], 20.00th=[ 264], 00:23:31.650 | 30.00th=[ 271], 40.00th=[ 271], 50.00th=[ 275], 60.00th=[ 279], 00:23:31.650 | 70.00th=[ 
296], 80.00th=[ 317], 90.00th=[ 355], 95.00th=[ 359], 00:23:31.650 | 99.00th=[ 393], 99.50th=[ 414], 99.90th=[ 418], 99.95th=[ 418], 00:23:31.650 | 99.99th=[ 418] 00:23:31.650 bw ( KiB/s): min= 128, max= 256, per=3.99%, avg=217.60, stdev=53.29, samples=20 00:23:31.650 iops : min= 32, max= 64, avg=54.40, stdev=13.32, samples=20 00:23:31.650 lat (msec) : 250=10.00%, 500=90.00% 00:23:31.650 cpu : usr=97.67%, sys=1.48%, ctx=37, majf=0, minf=49 00:23:31.650 IO depths : 1=1.6%, 2=6.1%, 4=19.6%, 8=61.8%, 16=10.9%, 32=0.0%, >=64=0.0% 00:23:31.650 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:31.650 complete : 0=0.0%, 4=92.6%, 8=1.8%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:31.650 issued rwts: total=560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:31.650 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:31.650 filename2: (groupid=0, jobs=1): err= 0: pid=1285341: Tue Apr 16 12:51:28 2024 00:23:31.650 read: IOPS=42, BW=172KiB/s (176kB/s)(1728KiB/10063msec) 00:23:31.650 slat (nsec): min=5032, max=73144, avg=12559.46, stdev=7332.56 00:23:31.650 clat (msec): min=270, max=537, avg=372.56, stdev=52.16 00:23:31.650 lat (msec): min=270, max=537, avg=372.57, stdev=52.15 00:23:31.650 clat percentiles (msec): 00:23:31.650 | 1.00th=[ 275], 5.00th=[ 288], 10.00th=[ 292], 20.00th=[ 317], 00:23:31.650 | 30.00th=[ 342], 40.00th=[ 359], 50.00th=[ 393], 60.00th=[ 401], 00:23:31.650 | 70.00th=[ 405], 80.00th=[ 414], 90.00th=[ 422], 95.00th=[ 422], 00:23:31.650 | 99.00th=[ 472], 99.50th=[ 518], 99.90th=[ 542], 99.95th=[ 542], 00:23:31.650 | 99.99th=[ 542] 00:23:31.650 bw ( KiB/s): min= 128, max= 256, per=3.05%, avg=166.40, stdev=56.96, samples=20 00:23:31.650 iops : min= 32, max= 64, avg=41.60, stdev=14.24, samples=20 00:23:31.650 lat (msec) : 500=99.07%, 750=0.93% 00:23:31.650 cpu : usr=97.84%, sys=1.45%, ctx=51, majf=0, minf=38 00:23:31.650 IO depths : 1=4.2%, 2=10.4%, 4=25.0%, 8=52.1%, 16=8.3%, 32=0.0%, >=64=0.0% 00:23:31.650 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:31.650 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:31.650 issued rwts: total=432,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:31.650 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:31.650 filename2: (groupid=0, jobs=1): err= 0: pid=1285342: Tue Apr 16 12:51:28 2024 00:23:31.650 read: IOPS=44, BW=178KiB/s (182kB/s)(1792KiB/10085msec) 00:23:31.650 slat (nsec): min=4980, max=38850, avg=14494.92, stdev=7311.09 00:23:31.650 clat (msec): min=173, max=554, avg=360.03, stdev=70.59 00:23:31.650 lat (msec): min=173, max=554, avg=360.04, stdev=70.59 00:23:31.650 clat percentiles (msec): 00:23:31.650 | 1.00th=[ 174], 5.00th=[ 218], 10.00th=[ 275], 20.00th=[ 296], 00:23:31.650 | 30.00th=[ 334], 40.00th=[ 342], 50.00th=[ 384], 60.00th=[ 393], 00:23:31.650 | 70.00th=[ 401], 80.00th=[ 422], 90.00th=[ 426], 95.00th=[ 451], 00:23:31.650 | 99.00th=[ 472], 99.50th=[ 542], 99.90th=[ 558], 99.95th=[ 558], 00:23:31.650 | 99.99th=[ 558] 00:23:31.650 bw ( KiB/s): min= 112, max= 256, per=3.16%, avg=172.80, stdev=62.85, samples=20 00:23:31.650 iops : min= 28, max= 64, avg=43.20, stdev=15.71, samples=20 00:23:31.650 lat (msec) : 250=7.59%, 500=91.52%, 750=0.89% 00:23:31.650 cpu : usr=97.37%, sys=1.71%, ctx=41, majf=0, minf=37 00:23:31.651 IO depths : 1=4.9%, 2=11.2%, 4=25.0%, 8=51.3%, 16=7.6%, 32=0.0%, >=64=0.0% 00:23:31.651 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:31.651 complete : 0=0.0%, 4=94.2%, 8=0.0%, 
16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:31.651 issued rwts: total=448,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:31.651 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:31.651 00:23:31.651 Run status group 0 (all jobs): 00:23:31.651 READ: bw=5445KiB/s (5576kB/s), 172KiB/s-272KiB/s (176kB/s-279kB/s), io=53.8MiB (56.4MB), run=10054-10110msec 00:23:31.651 12:51:29 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:23:31.651 12:51:29 -- target/dif.sh@43 -- # local sub 00:23:31.651 12:51:29 -- target/dif.sh@45 -- # for sub in "$@" 00:23:31.651 12:51:29 -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:31.651 12:51:29 -- target/dif.sh@36 -- # local sub_id=0 00:23:31.651 12:51:29 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:31.651 12:51:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:31.651 12:51:29 -- common/autotest_common.sh@10 -- # set +x 00:23:31.651 12:51:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:31.651 12:51:29 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:31.651 12:51:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:31.651 12:51:29 -- common/autotest_common.sh@10 -- # set +x 00:23:31.651 12:51:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:31.651 12:51:29 -- target/dif.sh@45 -- # for sub in "$@" 00:23:31.651 12:51:29 -- target/dif.sh@46 -- # destroy_subsystem 1 00:23:31.651 12:51:29 -- target/dif.sh@36 -- # local sub_id=1 00:23:31.651 12:51:29 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:31.651 12:51:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:31.651 12:51:29 -- common/autotest_common.sh@10 -- # set +x 00:23:31.651 12:51:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:31.651 12:51:29 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:23:31.651 12:51:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:31.651 12:51:29 -- common/autotest_common.sh@10 -- # set +x 00:23:31.651 12:51:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:31.651 12:51:29 -- target/dif.sh@45 -- # for sub in "$@" 00:23:31.651 12:51:29 -- target/dif.sh@46 -- # destroy_subsystem 2 00:23:31.651 12:51:29 -- target/dif.sh@36 -- # local sub_id=2 00:23:31.651 12:51:29 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:23:31.651 12:51:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:31.651 12:51:29 -- common/autotest_common.sh@10 -- # set +x 00:23:31.651 12:51:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:31.651 12:51:29 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:23:31.651 12:51:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:31.651 12:51:29 -- common/autotest_common.sh@10 -- # set +x 00:23:31.651 12:51:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:31.651 12:51:29 -- target/dif.sh@115 -- # NULL_DIF=1 00:23:31.651 12:51:29 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:23:31.651 12:51:29 -- target/dif.sh@115 -- # numjobs=2 00:23:31.651 12:51:29 -- target/dif.sh@115 -- # iodepth=8 00:23:31.651 12:51:29 -- target/dif.sh@115 -- # runtime=5 00:23:31.651 12:51:29 -- target/dif.sh@115 -- # files=1 00:23:31.651 12:51:29 -- target/dif.sh@117 -- # create_subsystems 0 1 00:23:31.651 12:51:29 -- target/dif.sh@28 -- # local sub 00:23:31.651 12:51:29 -- target/dif.sh@30 -- # for sub in "$@" 00:23:31.651 12:51:29 -- target/dif.sh@31 -- # create_subsystem 0 00:23:31.651 12:51:29 -- 
target/dif.sh@18 -- # local sub_id=0 00:23:31.651 12:51:29 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:23:31.651 12:51:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:31.651 12:51:29 -- common/autotest_common.sh@10 -- # set +x 00:23:31.651 bdev_null0 00:23:31.651 12:51:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:31.651 12:51:29 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:31.651 12:51:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:31.651 12:51:29 -- common/autotest_common.sh@10 -- # set +x 00:23:31.651 12:51:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:31.651 12:51:29 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:31.651 12:51:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:31.651 12:51:29 -- common/autotest_common.sh@10 -- # set +x 00:23:31.651 12:51:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:31.651 12:51:29 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:31.651 12:51:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:31.651 12:51:29 -- common/autotest_common.sh@10 -- # set +x 00:23:31.651 [2024-04-16 12:51:29.094807] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:31.651 12:51:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:31.651 12:51:29 -- target/dif.sh@30 -- # for sub in "$@" 00:23:31.651 12:51:29 -- target/dif.sh@31 -- # create_subsystem 1 00:23:31.651 12:51:29 -- target/dif.sh@18 -- # local sub_id=1 00:23:31.651 12:51:29 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:23:31.651 12:51:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:31.651 12:51:29 -- common/autotest_common.sh@10 -- # set +x 00:23:31.651 bdev_null1 00:23:31.651 12:51:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:31.651 12:51:29 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:23:31.651 12:51:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:31.651 12:51:29 -- common/autotest_common.sh@10 -- # set +x 00:23:31.651 12:51:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:31.651 12:51:29 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:23:31.651 12:51:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:31.651 12:51:29 -- common/autotest_common.sh@10 -- # set +x 00:23:31.651 12:51:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:31.651 12:51:29 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:31.651 12:51:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:31.651 12:51:29 -- common/autotest_common.sh@10 -- # set +x 00:23:31.651 12:51:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:31.651 12:51:29 -- target/dif.sh@118 -- # fio /dev/fd/62 00:23:31.651 12:51:29 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:31.651 12:51:29 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:31.651 
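
The create_subsystem trace above maps one-to-one onto SPDK's scripts/rpc.py. As a hedged standalone sketch (assuming a running nvmf_tgt and rpc.py from the SPDK tree; every name, size, and flag is copied from the trace):

    #!/usr/bin/env bash
    # Sketch: rebuild the two null-bdev NVMe-oF TCP subsystems seen in the trace.
    for sub_id in 0 1; do
        # 64 MiB null bdev, 512 B blocks, 16 B metadata, DIF type 1
        rpc.py bdev_null_create "bdev_null$sub_id" 64 512 --md-size 16 --dif-type 1
        rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$sub_id" \
            --serial-number "53313233-$sub_id" --allow-any-host
        rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$sub_id" "bdev_null$sub_id"
        rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$sub_id" \
            -t tcp -a 10.0.0.2 -s 4420
    done
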
12:51:29 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:23:31.651 12:51:29 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:23:31.651 12:51:29 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:23:31.651 12:51:29 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:31.651 12:51:29 -- common/autotest_common.sh@1325 -- # local sanitizers 00:23:31.651 12:51:29 -- target/dif.sh@82 -- # gen_fio_conf 00:23:31.651 12:51:29 -- nvmf/common.sh@521 -- # config=() 00:23:31.651 12:51:29 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:23:31.651 12:51:29 -- common/autotest_common.sh@1327 -- # shift 00:23:31.651 12:51:29 -- target/dif.sh@54 -- # local file 00:23:31.651 12:51:29 -- nvmf/common.sh@521 -- # local subsystem config 00:23:31.651 12:51:29 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:23:31.651 12:51:29 -- target/dif.sh@56 -- # cat 00:23:31.651 12:51:29 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:23:31.651 12:51:29 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:23:31.651 12:51:29 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:23:31.651 { 00:23:31.651 "params": { 00:23:31.651 "name": "Nvme$subsystem", 00:23:31.651 "trtype": "$TEST_TRANSPORT", 00:23:31.651 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:31.651 "adrfam": "ipv4", 00:23:31.651 "trsvcid": "$NVMF_PORT", 00:23:31.651 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:31.651 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:31.651 "hdgst": ${hdgst:-false}, 00:23:31.651 "ddgst": ${ddgst:-false} 00:23:31.651 }, 00:23:31.651 "method": "bdev_nvme_attach_controller" 00:23:31.651 } 00:23:31.651 EOF 00:23:31.651 )") 00:23:31.651 12:51:29 -- nvmf/common.sh@543 -- # cat 00:23:31.651 12:51:29 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:23:31.651 12:51:29 -- target/dif.sh@72 -- # (( file = 1 )) 00:23:31.651 12:51:29 -- common/autotest_common.sh@1331 -- # grep libasan 00:23:31.651 12:51:29 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:23:31.651 12:51:29 -- target/dif.sh@72 -- # (( file <= files )) 00:23:31.651 12:51:29 -- target/dif.sh@73 -- # cat 00:23:31.651 12:51:29 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:23:31.651 12:51:29 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:23:31.651 { 00:23:31.651 "params": { 00:23:31.651 "name": "Nvme$subsystem", 00:23:31.651 "trtype": "$TEST_TRANSPORT", 00:23:31.651 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:31.651 "adrfam": "ipv4", 00:23:31.651 "trsvcid": "$NVMF_PORT", 00:23:31.651 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:31.651 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:31.651 "hdgst": ${hdgst:-false}, 00:23:31.651 "ddgst": ${ddgst:-false} 00:23:31.651 }, 00:23:31.651 "method": "bdev_nvme_attach_controller" 00:23:31.651 } 00:23:31.651 EOF 00:23:31.651 )") 00:23:31.651 12:51:29 -- target/dif.sh@72 -- # (( file++ )) 00:23:31.651 12:51:29 -- target/dif.sh@72 -- # (( file <= files )) 00:23:31.651 12:51:29 -- nvmf/common.sh@543 -- # cat 00:23:31.651 12:51:29 -- nvmf/common.sh@545 -- # jq . 
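
The heredoc above is gen_nvmf_target_json (nvmf/common.sh) building one bdev_nvme_attach_controller stanza per subsystem id and pretty-printing the result with jq, while gen_fio_conf writes the matching fio job sections; fio_bdev then hands fio the JSON on /dev/fd/62 and the job file on /dev/fd/61. Outside the harness the same wiring can be sketched with process substitution (the two generator names below are illustrative stand-ins, not harness functions):

    # Sketch: drive fio's spdk_bdev engine from generated config, as the
    # harness does with /dev/fd/62 (bdev JSON) and /dev/fd/61 (job file).
    LD_PRELOAD=./spdk/build/fio/spdk_bdev \
        /usr/src/fio/fio --ioengine=spdk_bdev \
        --spdk_json_conf <(emit_bdev_json) <(emit_job_file)
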
00:23:31.651 12:51:29 -- nvmf/common.sh@546 -- # IFS=, 00:23:31.651 12:51:29 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:23:31.651 "params": { 00:23:31.651 "name": "Nvme0", 00:23:31.651 "trtype": "tcp", 00:23:31.651 "traddr": "10.0.0.2", 00:23:31.651 "adrfam": "ipv4", 00:23:31.651 "trsvcid": "4420", 00:23:31.651 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:31.651 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:31.651 "hdgst": false, 00:23:31.651 "ddgst": false 00:23:31.651 }, 00:23:31.651 "method": "bdev_nvme_attach_controller" 00:23:31.651 },{ 00:23:31.651 "params": { 00:23:31.651 "name": "Nvme1", 00:23:31.651 "trtype": "tcp", 00:23:31.651 "traddr": "10.0.0.2", 00:23:31.651 "adrfam": "ipv4", 00:23:31.651 "trsvcid": "4420", 00:23:31.651 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:31.652 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:31.652 "hdgst": false, 00:23:31.652 "ddgst": false 00:23:31.652 }, 00:23:31.652 "method": "bdev_nvme_attach_controller" 00:23:31.652 }' 00:23:31.652 12:51:29 -- common/autotest_common.sh@1331 -- # asan_lib= 00:23:31.652 12:51:29 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:23:31.652 12:51:29 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:23:31.652 12:51:29 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:23:31.652 12:51:29 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:23:31.652 12:51:29 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:23:31.652 12:51:29 -- common/autotest_common.sh@1331 -- # asan_lib= 00:23:31.652 12:51:29 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:23:31.652 12:51:29 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:23:31.652 12:51:29 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:31.652 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:23:31.652 ... 00:23:31.652 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:23:31.652 ... 00:23:31.652 fio-3.35 00:23:31.652 Starting 4 threads 00:23:31.652 EAL: No free 2048 kB hugepages reported on node 1 00:23:31.652 [2024-04-16 12:51:30.034389] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
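
Given the parameters set for this pass (bs=8k,16k,128k, so reads/writes/trims use 8 KiB/16 KiB/128 KiB as the job banner confirms, plus numjobs=2, iodepth=8, runtime=5), the job file gen_fio_conf feeds fio plausibly looks like the sketch below; "Starting 4 threads" is consistent with 2 jobs over 2 filename sections. This is a reconstruction, not harness output, and the bdev names Nvme0n1/Nvme1n1 are the conventional SPDK names derived from the Nvme0/Nvme1 controllers in the JSON above, assumed here rather than shown in the trace:

    # Sketch: a plausible equivalent of the generated fio job file.
    cat > job.fio <<'EOF'
    [global]
    thread=1
    ioengine=spdk_bdev
    rw=randread
    bs=8k,16k,128k
    iodepth=8
    numjobs=2
    time_based=1
    runtime=5

    [filename0]
    filename=Nvme0n1

    [filename1]
    filename=Nvme1n1
    EOF
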
00:23:31.652 [2024-04-16 12:51:30.034471] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:23:36.916 00:23:36.916 filename0: (groupid=0, jobs=1): err= 0: pid=1286723: Tue Apr 16 12:51:35 2024 00:23:36.916 read: IOPS=1764, BW=13.8MiB/s (14.5MB/s)(69.0MiB/5002msec) 00:23:36.916 slat (nsec): min=6146, max=67371, avg=17406.01, stdev=9287.87 00:23:36.916 clat (usec): min=949, max=8150, avg=4473.20, stdev=586.69 00:23:36.916 lat (usec): min=979, max=8163, avg=4490.61, stdev=587.21 00:23:36.916 clat percentiles (usec): 00:23:36.916 | 1.00th=[ 2802], 5.00th=[ 3458], 10.00th=[ 3785], 20.00th=[ 4228], 00:23:36.916 | 30.00th=[ 4424], 40.00th=[ 4490], 50.00th=[ 4555], 60.00th=[ 4621], 00:23:36.916 | 70.00th=[ 4621], 80.00th=[ 4686], 90.00th=[ 4817], 95.00th=[ 5080], 00:23:36.916 | 99.00th=[ 6718], 99.50th=[ 7111], 99.90th=[ 7767], 99.95th=[ 7832], 00:23:36.916 | 99.99th=[ 8160] 00:23:36.916 bw ( KiB/s): min=13696, max=14720, per=25.54%, avg=14142.22, stdev=380.77, samples=9 00:23:36.916 iops : min= 1712, max= 1840, avg=1767.78, stdev=47.60, samples=9 00:23:36.916 lat (usec) : 1000=0.01% 00:23:36.916 lat (msec) : 2=0.20%, 4=13.86%, 10=85.93% 00:23:36.916 cpu : usr=94.74%, sys=4.64%, ctx=9, majf=0, minf=85 00:23:36.916 IO depths : 1=0.1%, 2=13.5%, 4=59.2%, 8=27.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:36.916 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:36.916 complete : 0=0.0%, 4=91.9%, 8=8.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:36.916 issued rwts: total=8827,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:36.916 latency : target=0, window=0, percentile=100.00%, depth=8 00:23:36.916 filename0: (groupid=0, jobs=1): err= 0: pid=1286724: Tue Apr 16 12:51:35 2024 00:23:36.916 read: IOPS=1746, BW=13.6MiB/s (14.3MB/s)(68.2MiB/5001msec) 00:23:36.916 slat (nsec): min=5051, max=62379, avg=18777.67, stdev=9291.82 00:23:36.916 clat (usec): min=828, max=8297, avg=4512.91, stdev=617.33 00:23:36.916 lat (usec): min=842, max=8311, avg=4531.68, stdev=617.98 00:23:36.916 clat percentiles (usec): 00:23:36.916 | 1.00th=[ 2704], 5.00th=[ 3523], 10.00th=[ 3851], 20.00th=[ 4293], 00:23:36.916 | 30.00th=[ 4424], 40.00th=[ 4490], 50.00th=[ 4555], 60.00th=[ 4621], 00:23:36.916 | 70.00th=[ 4621], 80.00th=[ 4686], 90.00th=[ 4883], 95.00th=[ 5407], 00:23:36.916 | 99.00th=[ 6849], 99.50th=[ 7373], 99.90th=[ 8029], 99.95th=[ 8160], 00:23:36.916 | 99.99th=[ 8291] 00:23:36.916 bw ( KiB/s): min=13600, max=14544, per=25.27%, avg=13991.11, stdev=276.91, samples=9 00:23:36.916 iops : min= 1700, max= 1818, avg=1748.89, stdev=34.61, samples=9 00:23:36.916 lat (usec) : 1000=0.01% 00:23:36.916 lat (msec) : 2=0.39%, 4=12.22%, 10=87.38% 00:23:36.916 cpu : usr=91.72%, sys=5.80%, ctx=360, majf=0, minf=38 00:23:36.916 IO depths : 1=0.1%, 2=14.6%, 4=58.1%, 8=27.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:36.916 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:36.916 complete : 0=0.0%, 4=91.9%, 8=8.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:36.916 issued rwts: total=8734,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:36.916 latency : target=0, window=0, percentile=100.00%, depth=8 00:23:36.916 filename1: (groupid=0, jobs=1): err= 0: pid=1286725: Tue Apr 16 12:51:35 2024 00:23:36.916 read: IOPS=1733, BW=13.5MiB/s (14.2MB/s)(67.8MiB/5002msec) 00:23:36.916 slat (nsec): min=3857, max=63925, avg=18361.24, stdev=8911.17 00:23:36.916 clat (usec): min=790, max=8638, avg=4548.87, stdev=639.73 00:23:36.916 lat (usec): min=803, max=8659, avg=4567.23, stdev=640.15 
00:23:36.916 clat percentiles (usec): 00:23:36.916 | 1.00th=[ 2704], 5.00th=[ 3654], 10.00th=[ 3982], 20.00th=[ 4293], 00:23:36.916 | 30.00th=[ 4424], 40.00th=[ 4490], 50.00th=[ 4555], 60.00th=[ 4621], 00:23:36.916 | 70.00th=[ 4621], 80.00th=[ 4686], 90.00th=[ 4883], 95.00th=[ 5604], 00:23:36.916 | 99.00th=[ 6980], 99.50th=[ 7439], 99.90th=[ 8160], 99.95th=[ 8225], 00:23:36.916 | 99.99th=[ 8586] 00:23:36.916 bw ( KiB/s): min=13312, max=14352, per=25.05%, avg=13868.44, stdev=345.00, samples=9 00:23:36.916 iops : min= 1664, max= 1794, avg=1733.56, stdev=43.13, samples=9 00:23:36.916 lat (usec) : 1000=0.07% 00:23:36.916 lat (msec) : 2=0.39%, 4=10.19%, 10=89.35% 00:23:36.916 cpu : usr=95.28%, sys=4.12%, ctx=14, majf=0, minf=34 00:23:36.916 IO depths : 1=0.1%, 2=13.6%, 4=59.1%, 8=27.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:36.916 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:36.916 complete : 0=0.0%, 4=91.9%, 8=8.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:36.916 issued rwts: total=8673,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:36.916 latency : target=0, window=0, percentile=100.00%, depth=8 00:23:36.916 filename1: (groupid=0, jobs=1): err= 0: pid=1286726: Tue Apr 16 12:51:35 2024 00:23:36.916 read: IOPS=1676, BW=13.1MiB/s (13.7MB/s)(65.5MiB/5001msec) 00:23:36.916 slat (usec): min=4, max=266, avg=19.75, stdev= 9.21 00:23:36.916 clat (usec): min=972, max=46920, avg=4711.23, stdev=1474.91 00:23:36.916 lat (usec): min=992, max=46935, avg=4730.98, stdev=1474.40 00:23:36.916 clat percentiles (usec): 00:23:36.916 | 1.00th=[ 3294], 5.00th=[ 3916], 10.00th=[ 4113], 20.00th=[ 4359], 00:23:36.916 | 30.00th=[ 4490], 40.00th=[ 4555], 50.00th=[ 4555], 60.00th=[ 4621], 00:23:36.916 | 70.00th=[ 4686], 80.00th=[ 4752], 90.00th=[ 5080], 95.00th=[ 6259], 00:23:36.916 | 99.00th=[ 7635], 99.50th=[ 7832], 99.90th=[ 8455], 99.95th=[46924], 00:23:36.916 | 99.99th=[46924] 00:23:36.916 bw ( KiB/s): min=11959, max=13872, per=24.13%, avg=13359.00, stdev=588.15, samples=9 00:23:36.916 iops : min= 1494, max= 1734, avg=1669.78, stdev=73.78, samples=9 00:23:36.916 lat (usec) : 1000=0.01% 00:23:36.916 lat (msec) : 2=0.12%, 4=6.36%, 10=93.42%, 50=0.10% 00:23:36.916 cpu : usr=93.86%, sys=5.02%, ctx=31, majf=0, minf=52 00:23:36.916 IO depths : 1=0.5%, 2=4.9%, 4=68.6%, 8=26.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:36.916 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:36.916 complete : 0=0.0%, 4=91.4%, 8=8.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:36.916 issued rwts: total=8383,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:36.916 latency : target=0, window=0, percentile=100.00%, depth=8 00:23:36.916 00:23:36.916 Run status group 0 (all jobs): 00:23:36.916 READ: bw=54.1MiB/s (56.7MB/s), 13.1MiB/s-13.8MiB/s (13.7MB/s-14.5MB/s), io=270MiB (284MB), run=5001-5002msec 00:23:36.916 12:51:35 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:23:36.916 12:51:35 -- target/dif.sh@43 -- # local sub 00:23:36.916 12:51:35 -- target/dif.sh@45 -- # for sub in "$@" 00:23:36.916 12:51:35 -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:36.916 12:51:35 -- target/dif.sh@36 -- # local sub_id=0 00:23:36.916 12:51:35 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:36.916 12:51:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:36.916 12:51:35 -- common/autotest_common.sh@10 -- # set +x 00:23:36.916 12:51:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:36.916 12:51:35 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 
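
The teardown under way here (dif.sh@43-46 driving dif.sh@36-39 in the trace) simply reverses setup for each subsystem id. A minimal sketch of the same helper, using only the rpc_cmd calls visible in the trace:

    # Sketch of dif.sh's destroy path as traced: drop the subsystem, then its bdev.
    destroy_subsystem() {
        local sub_id=$1
        rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$sub_id"
        rpc_cmd bdev_null_delete "bdev_null$sub_id"
    }

    for sub in 0 1; do destroy_subsystem "$sub"; done
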
00:23:36.916 12:51:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:36.916 12:51:35 -- common/autotest_common.sh@10 -- # set +x 00:23:36.916 12:51:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:36.916 12:51:35 -- target/dif.sh@45 -- # for sub in "$@" 00:23:36.916 12:51:35 -- target/dif.sh@46 -- # destroy_subsystem 1 00:23:36.916 12:51:35 -- target/dif.sh@36 -- # local sub_id=1 00:23:36.916 12:51:35 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:36.916 12:51:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:36.916 12:51:35 -- common/autotest_common.sh@10 -- # set +x 00:23:36.916 12:51:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:36.916 12:51:35 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:23:36.916 12:51:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:36.916 12:51:35 -- common/autotest_common.sh@10 -- # set +x 00:23:36.916 12:51:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:36.916 00:23:36.916 real 0m24.519s 00:23:36.916 user 4m34.954s 00:23:36.916 sys 0m6.304s 00:23:36.916 12:51:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:36.917 12:51:35 -- common/autotest_common.sh@10 -- # set +x 00:23:36.917 ************************************ 00:23:36.917 END TEST fio_dif_rand_params 00:23:36.917 ************************************ 00:23:36.917 12:51:35 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:23:36.917 12:51:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:23:36.917 12:51:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:36.917 12:51:35 -- common/autotest_common.sh@10 -- # set +x 00:23:36.917 ************************************ 00:23:36.917 START TEST fio_dif_digest 00:23:36.917 ************************************ 00:23:36.917 12:51:35 -- common/autotest_common.sh@1111 -- # fio_dif_digest 00:23:36.917 12:51:35 -- target/dif.sh@123 -- # local NULL_DIF 00:23:36.917 12:51:35 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:23:36.917 12:51:35 -- target/dif.sh@125 -- # local hdgst ddgst 00:23:36.917 12:51:35 -- target/dif.sh@127 -- # NULL_DIF=3 00:23:36.917 12:51:35 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:23:36.917 12:51:35 -- target/dif.sh@127 -- # numjobs=3 00:23:36.917 12:51:35 -- target/dif.sh@127 -- # iodepth=3 00:23:36.917 12:51:35 -- target/dif.sh@127 -- # runtime=10 00:23:36.917 12:51:35 -- target/dif.sh@128 -- # hdgst=true 00:23:36.917 12:51:35 -- target/dif.sh@128 -- # ddgst=true 00:23:36.917 12:51:35 -- target/dif.sh@130 -- # create_subsystems 0 00:23:36.917 12:51:35 -- target/dif.sh@28 -- # local sub 00:23:36.917 12:51:35 -- target/dif.sh@30 -- # for sub in "$@" 00:23:36.917 12:51:35 -- target/dif.sh@31 -- # create_subsystem 0 00:23:36.917 12:51:35 -- target/dif.sh@18 -- # local sub_id=0 00:23:36.917 12:51:35 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:23:36.917 12:51:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:36.917 12:51:35 -- common/autotest_common.sh@10 -- # set +x 00:23:36.917 bdev_null0 00:23:36.917 12:51:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:36.917 12:51:35 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:36.917 12:51:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:36.917 12:51:35 -- common/autotest_common.sh@10 -- # set +x 00:23:36.917 12:51:35 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:36.917 12:51:35 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:36.917 12:51:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:36.917 12:51:35 -- common/autotest_common.sh@10 -- # set +x 00:23:36.917 12:51:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:36.917 12:51:35 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:36.917 12:51:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:36.917 12:51:35 -- common/autotest_common.sh@10 -- # set +x 00:23:36.917 [2024-04-16 12:51:35.632445] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:36.917 12:51:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:36.917 12:51:35 -- target/dif.sh@131 -- # fio /dev/fd/62 00:23:36.917 12:51:35 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:23:36.917 12:51:35 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:23:36.917 12:51:35 -- nvmf/common.sh@521 -- # config=() 00:23:36.917 12:51:35 -- nvmf/common.sh@521 -- # local subsystem config 00:23:36.917 12:51:35 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:23:36.917 12:51:35 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:36.917 12:51:35 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:23:36.917 { 00:23:36.917 "params": { 00:23:36.917 "name": "Nvme$subsystem", 00:23:36.917 "trtype": "$TEST_TRANSPORT", 00:23:36.917 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:36.917 "adrfam": "ipv4", 00:23:36.917 "trsvcid": "$NVMF_PORT", 00:23:36.917 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:36.917 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:36.917 "hdgst": ${hdgst:-false}, 00:23:36.917 "ddgst": ${ddgst:-false} 00:23:36.917 }, 00:23:36.917 "method": "bdev_nvme_attach_controller" 00:23:36.917 } 00:23:36.917 EOF 00:23:36.917 )") 00:23:36.917 12:51:35 -- target/dif.sh@82 -- # gen_fio_conf 00:23:36.917 12:51:35 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:36.917 12:51:35 -- target/dif.sh@54 -- # local file 00:23:36.917 12:51:35 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:23:36.917 12:51:35 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:36.917 12:51:35 -- target/dif.sh@56 -- # cat 00:23:36.917 12:51:35 -- common/autotest_common.sh@1325 -- # local sanitizers 00:23:36.917 12:51:35 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:23:36.917 12:51:35 -- common/autotest_common.sh@1327 -- # shift 00:23:36.917 12:51:35 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:23:36.917 12:51:35 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:23:36.917 12:51:35 -- nvmf/common.sh@543 -- # cat 00:23:36.917 12:51:35 -- target/dif.sh@72 -- # (( file = 1 )) 00:23:36.917 12:51:35 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:23:36.917 12:51:35 -- target/dif.sh@72 -- # (( file <= files )) 00:23:36.917 12:51:35 -- common/autotest_common.sh@1331 -- # grep libasan 00:23:36.917 12:51:35 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:23:36.917 12:51:35 -- 
nvmf/common.sh@545 -- # jq . 00:23:36.917 12:51:35 -- nvmf/common.sh@546 -- # IFS=, 00:23:36.917 12:51:35 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:23:36.917 "params": { 00:23:36.917 "name": "Nvme0", 00:23:36.917 "trtype": "tcp", 00:23:36.917 "traddr": "10.0.0.2", 00:23:36.917 "adrfam": "ipv4", 00:23:36.917 "trsvcid": "4420", 00:23:36.917 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:36.917 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:36.917 "hdgst": true, 00:23:36.917 "ddgst": true 00:23:36.917 }, 00:23:36.917 "method": "bdev_nvme_attach_controller" 00:23:36.917 }' 00:23:36.917 12:51:35 -- common/autotest_common.sh@1331 -- # asan_lib= 00:23:36.917 12:51:35 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:23:36.917 12:51:35 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:23:36.917 12:51:35 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:23:36.917 12:51:35 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:23:36.917 12:51:35 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:23:36.917 12:51:35 -- common/autotest_common.sh@1331 -- # asan_lib= 00:23:36.917 12:51:35 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:23:36.917 12:51:35 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:23:36.917 12:51:35 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:36.917 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:23:36.917 ... 00:23:36.917 fio-3.35 00:23:36.917 Starting 3 threads 00:23:36.917 EAL: No free 2048 kB hugepages reported on node 1 00:23:37.488 [2024-04-16 12:51:36.392626] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:23:37.488 [2024-04-16 12:51:36.392698] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:23:49.682 00:23:49.682 filename0: (groupid=0, jobs=1): err= 0: pid=1287490: Tue Apr 16 12:51:46 2024 00:23:49.682 read: IOPS=131, BW=16.5MiB/s (17.3MB/s)(166MiB/10066msec) 00:23:49.682 slat (nsec): min=6289, max=75430, avg=17563.95, stdev=4660.81 00:23:49.682 clat (usec): min=8883, max=86424, avg=22716.76, stdev=16818.79 00:23:49.682 lat (usec): min=8902, max=86442, avg=22734.33, stdev=16818.85 00:23:49.682 clat percentiles (usec): 00:23:49.682 | 1.00th=[11207], 5.00th=[12518], 10.00th=[13042], 20.00th=[13435], 00:23:49.682 | 30.00th=[13829], 40.00th=[14222], 50.00th=[14615], 60.00th=[15008], 00:23:49.682 | 70.00th=[15664], 80.00th=[47973], 90.00th=[55313], 95.00th=[58459], 00:23:49.682 | 99.00th=[62653], 99.50th=[64226], 99.90th=[67634], 99.95th=[86508], 00:23:49.682 | 99.99th=[86508] 00:23:49.682 bw ( KiB/s): min= 6400, max=28160, per=32.73%, avg=16934.40, stdev=10171.82, samples=20 00:23:49.682 iops : min= 50, max= 220, avg=132.30, stdev=79.47, samples=20 00:23:49.682 lat (msec) : 10=0.68%, 20=78.81%, 50=1.43%, 100=19.08% 00:23:49.682 cpu : usr=95.50%, sys=4.04%, ctx=24, majf=0, minf=155 00:23:49.682 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:49.682 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:49.682 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:49.683 issued rwts: total=1326,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:49.683 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:49.683 filename0: (groupid=0, jobs=1): err= 0: pid=1287491: Tue Apr 16 12:51:46 2024 00:23:49.683 read: IOPS=137, BW=17.2MiB/s (18.0MB/s)(173MiB/10065msec) 00:23:49.683 slat (nsec): min=5782, max=43780, avg=16838.51, stdev=4059.99 00:23:49.683 clat (usec): min=10578, max=77829, avg=21774.64, stdev=16755.16 00:23:49.683 lat (usec): min=10595, max=77844, avg=21791.48, stdev=16755.46 00:23:49.683 clat percentiles (usec): 00:23:49.683 | 1.00th=[11207], 5.00th=[11863], 10.00th=[12256], 20.00th=[12649], 00:23:49.683 | 30.00th=[13042], 40.00th=[13435], 50.00th=[13698], 60.00th=[14222], 00:23:49.683 | 70.00th=[14746], 80.00th=[39060], 90.00th=[54789], 95.00th=[56886], 00:23:49.683 | 99.00th=[61604], 99.50th=[63701], 99.90th=[66847], 99.95th=[78119], 00:23:49.683 | 99.99th=[78119] 00:23:49.683 bw ( KiB/s): min= 6656, max=30208, per=34.14%, avg=17666.80, stdev=10857.73, samples=20 00:23:49.683 iops : min= 52, max= 236, avg=138.00, stdev=84.80, samples=20 00:23:49.683 lat (msec) : 20=79.97%, 50=1.37%, 100=18.66% 00:23:49.683 cpu : usr=95.50%, sys=4.04%, ctx=30, majf=0, minf=123 00:23:49.683 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:49.683 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:49.683 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:49.683 issued rwts: total=1383,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:49.683 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:49.683 filename0: (groupid=0, jobs=1): err= 0: pid=1287492: Tue Apr 16 12:51:46 2024 00:23:49.683 read: IOPS=135, BW=16.9MiB/s (17.7MB/s)(170MiB/10065msec) 00:23:49.683 slat (nsec): min=4201, max=48723, avg=20798.77, stdev=6735.48 00:23:49.683 clat (usec): min=9448, max=85366, avg=22136.23, stdev=16218.15 00:23:49.683 lat (usec): min=9461, max=85393, avg=22157.03, stdev=16221.78 00:23:49.683 clat 
percentiles (usec): 00:23:49.683 | 1.00th=[11076], 5.00th=[12125], 10.00th=[12649], 20.00th=[13304], 00:23:49.683 | 30.00th=[13698], 40.00th=[13960], 50.00th=[14222], 60.00th=[14615], 00:23:49.683 | 70.00th=[15270], 80.00th=[46924], 90.00th=[53216], 95.00th=[56361], 00:23:49.683 | 99.00th=[60556], 99.50th=[62129], 99.90th=[69731], 99.95th=[85459], 00:23:49.683 | 99.99th=[85459] 00:23:49.683 bw ( KiB/s): min= 6656, max=28416, per=33.57%, avg=17369.60, stdev=10330.30, samples=20 00:23:49.683 iops : min= 52, max= 222, avg=135.70, stdev=80.71, samples=20 00:23:49.683 lat (msec) : 10=0.51%, 20=78.82%, 50=3.97%, 100=16.69% 00:23:49.683 cpu : usr=94.39%, sys=5.06%, ctx=20, majf=0, minf=59 00:23:49.683 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:49.683 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:49.683 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:49.683 issued rwts: total=1360,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:49.683 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:49.683 00:23:49.683 Run status group 0 (all jobs): 00:23:49.683 READ: bw=50.5MiB/s (53.0MB/s), 16.5MiB/s-17.2MiB/s (17.3MB/s-18.0MB/s), io=509MiB (533MB), run=10065-10066msec 00:23:49.683 12:51:46 -- target/dif.sh@132 -- # destroy_subsystems 0 00:23:49.683 12:51:46 -- target/dif.sh@43 -- # local sub 00:23:49.683 12:51:46 -- target/dif.sh@45 -- # for sub in "$@" 00:23:49.683 12:51:46 -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:49.683 12:51:46 -- target/dif.sh@36 -- # local sub_id=0 00:23:49.683 12:51:46 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:49.683 12:51:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:49.683 12:51:46 -- common/autotest_common.sh@10 -- # set +x 00:23:49.683 12:51:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:49.683 12:51:46 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:49.683 12:51:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:49.683 12:51:46 -- common/autotest_common.sh@10 -- # set +x 00:23:49.683 12:51:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:49.683 00:23:49.683 real 0m11.261s 00:23:49.683 user 0m29.918s 00:23:49.683 sys 0m1.592s 00:23:49.683 12:51:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:49.683 12:51:46 -- common/autotest_common.sh@10 -- # set +x 00:23:49.683 ************************************ 00:23:49.683 END TEST fio_dif_digest 00:23:49.683 ************************************ 00:23:49.683 12:51:46 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:23:49.683 12:51:46 -- target/dif.sh@147 -- # nvmftestfini 00:23:49.683 12:51:46 -- nvmf/common.sh@477 -- # nvmfcleanup 00:23:49.683 12:51:46 -- nvmf/common.sh@117 -- # sync 00:23:49.683 12:51:46 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:49.683 12:51:46 -- nvmf/common.sh@120 -- # set +e 00:23:49.683 12:51:46 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:49.683 12:51:46 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:49.683 rmmod nvme_tcp 00:23:49.683 rmmod nvme_fabrics 00:23:49.683 rmmod nvme_keyring 00:23:49.683 12:51:46 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:49.683 12:51:46 -- nvmf/common.sh@124 -- # set -e 00:23:49.683 12:51:46 -- nvmf/common.sh@125 -- # return 0 00:23:49.683 12:51:46 -- nvmf/common.sh@478 -- # '[' -n 1280650 ']' 00:23:49.683 12:51:46 -- nvmf/common.sh@479 -- # killprocess 1280650 00:23:49.683 12:51:46 -- 
common/autotest_common.sh@936 -- # '[' -z 1280650 ']' 00:23:49.683 12:51:46 -- common/autotest_common.sh@940 -- # kill -0 1280650 00:23:49.683 12:51:46 -- common/autotest_common.sh@941 -- # uname 00:23:49.683 12:51:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:49.683 12:51:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1280650 00:23:49.683 12:51:46 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:49.683 12:51:46 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:49.683 12:51:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1280650' 00:23:49.683 killing process with pid 1280650 00:23:49.683 12:51:46 -- common/autotest_common.sh@955 -- # kill 1280650 00:23:49.683 12:51:46 -- common/autotest_common.sh@960 -- # wait 1280650 00:23:49.683 12:51:47 -- nvmf/common.sh@481 -- # '[' iso == iso ']' 00:23:49.683 12:51:47 -- nvmf/common.sh@482 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:23:49.683 Waiting for block devices as requested 00:23:49.683 0000:81:00.0 (8086 0a54): vfio-pci -> nvme 00:23:49.683 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:23:49.683 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:23:49.942 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:23:49.942 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:23:49.942 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:23:49.942 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:23:50.200 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:23:50.200 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:23:50.200 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:23:50.200 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:23:50.458 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:23:50.458 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:23:50.458 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:23:50.458 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:23:50.717 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:23:50.717 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:23:50.717 12:51:49 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:23:50.717 12:51:49 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:23:50.717 12:51:49 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:50.717 12:51:49 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:50.717 12:51:49 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:50.717 12:51:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:23:50.717 12:51:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:53.250 12:51:51 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:53.250 00:23:53.250 real 1m8.996s 00:23:53.250 user 6m34.287s 00:23:53.250 sys 0m18.027s 00:23:53.250 12:51:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:53.250 12:51:51 -- common/autotest_common.sh@10 -- # set +x 00:23:53.250 ************************************ 00:23:53.250 END TEST nvmf_dif 00:23:53.250 ************************************ 00:23:53.250 12:51:51 -- spdk/autotest.sh@291 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:23:53.250 12:51:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:23:53.250 12:51:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:53.250 12:51:51 -- common/autotest_common.sh@10 -- # set +x 00:23:53.250 ************************************ 00:23:53.250 START TEST nvmf_abort_qd_sizes 
00:23:53.250 ************************************ 00:23:53.250 12:51:51 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:23:53.250 * Looking for test storage... 00:23:53.250 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:53.250 12:51:51 -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:53.250 12:51:51 -- nvmf/common.sh@7 -- # uname -s 00:23:53.250 12:51:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:53.250 12:51:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:53.250 12:51:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:53.250 12:51:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:53.250 12:51:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:53.250 12:51:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:53.250 12:51:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:53.250 12:51:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:53.250 12:51:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:53.250 12:51:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:53.250 12:51:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:23:53.250 12:51:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:23:53.250 12:51:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:53.250 12:51:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:53.250 12:51:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:53.250 12:51:51 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:53.250 12:51:51 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:53.250 12:51:51 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:53.250 12:51:51 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:53.250 12:51:51 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:53.250 12:51:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:53.250 12:51:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:53.250 12:51:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:53.250 12:51:51 -- 
paths/export.sh@5 -- # export PATH 00:23:53.250 12:51:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:53.250 12:51:51 -- nvmf/common.sh@47 -- # : 0 00:23:53.250 12:51:51 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:53.250 12:51:51 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:53.250 12:51:51 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:53.250 12:51:51 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:53.250 12:51:51 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:53.250 12:51:51 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:53.250 12:51:51 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:53.250 12:51:51 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:53.250 12:51:51 -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:23:53.250 12:51:51 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:23:53.250 12:51:51 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:53.250 12:51:51 -- nvmf/common.sh@437 -- # prepare_net_devs 00:23:53.251 12:51:51 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:23:53.251 12:51:51 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:23:53.251 12:51:51 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:53.251 12:51:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:23:53.251 12:51:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:53.251 12:51:51 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:23:53.251 12:51:51 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:23:53.251 12:51:51 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:53.251 12:51:51 -- common/autotest_common.sh@10 -- # set +x 00:23:55.782 12:51:54 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:55.782 12:51:54 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:55.782 12:51:54 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:55.782 12:51:54 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:55.782 12:51:54 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:55.782 12:51:54 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:55.782 12:51:54 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:55.782 12:51:54 -- nvmf/common.sh@295 -- # net_devs=() 00:23:55.782 12:51:54 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:55.782 12:51:54 -- nvmf/common.sh@296 -- # e810=() 00:23:55.782 12:51:54 -- nvmf/common.sh@296 -- # local -ga e810 00:23:55.782 12:51:54 -- nvmf/common.sh@297 -- # x722=() 00:23:55.782 12:51:54 -- nvmf/common.sh@297 -- # local -ga x722 00:23:55.782 12:51:54 -- nvmf/common.sh@298 -- # mlx=() 00:23:55.782 12:51:54 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:55.782 12:51:54 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:55.782 12:51:54 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:55.782 12:51:54 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:55.782 12:51:54 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:55.782 12:51:54 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:55.782 12:51:54 -- nvmf/common.sh@310 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:55.782 12:51:54 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:55.782 12:51:54 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:55.782 12:51:54 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:55.782 12:51:54 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:55.782 12:51:54 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:55.782 12:51:54 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:55.782 12:51:54 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:55.782 12:51:54 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:55.782 12:51:54 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:55.782 12:51:54 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:55.782 12:51:54 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:55.782 12:51:54 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:55.782 12:51:54 -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.0 (0x8086 - 0x159b)' 00:23:55.782 Found 0000:82:00.0 (0x8086 - 0x159b) 00:23:55.782 12:51:54 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:55.782 12:51:54 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:55.782 12:51:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:55.782 12:51:54 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:55.782 12:51:54 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:55.782 12:51:54 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:55.782 12:51:54 -- nvmf/common.sh@341 -- # echo 'Found 0000:82:00.1 (0x8086 - 0x159b)' 00:23:55.782 Found 0000:82:00.1 (0x8086 - 0x159b) 00:23:55.782 12:51:54 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:55.782 12:51:54 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:55.782 12:51:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:55.782 12:51:54 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:55.782 12:51:54 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:55.782 12:51:54 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:55.782 12:51:54 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:55.782 12:51:54 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:55.782 12:51:54 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:55.782 12:51:54 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:55.782 12:51:54 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:55.782 12:51:54 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:55.782 12:51:54 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:82:00.0: cvl_0_0' 00:23:55.782 Found net devices under 0000:82:00.0: cvl_0_0 00:23:55.782 12:51:54 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:23:55.782 12:51:54 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:55.782 12:51:54 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:55.782 12:51:54 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:55.782 12:51:54 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:55.782 12:51:54 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:82:00.1: cvl_0_1' 00:23:55.782 Found net devices under 0000:82:00.1: cvl_0_1 00:23:55.782 12:51:54 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:23:55.782 12:51:54 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:23:55.782 12:51:54 -- nvmf/common.sh@403 -- # is_hw=yes 00:23:55.782 
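The `gather_supported_nvmf_pci_devs` trace above boils down to a sysfs walk: for every supported NIC PCI function, list the net interfaces the kernel exposes under that device and strip each path down to its interface name. A minimal sketch of that idea, using the addresses and names from this run (an illustration, not the SPDK helper itself):

```bash
#!/usr/bin/env bash
# Sketch only: mirrors the nvmf/common.sh@382-390 trace above.
shopt -s nullglob                           # empty glob -> empty array
pci_devs=("0000:82:00.0" "0000:82:00.1")    # the two E810 functions found here
net_devs=()
for pci in "${pci_devs[@]}"; do
    # Glob the interfaces sysfs exposes under the device, e.g. .../net/cvl_0_0
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    ((${#pci_net_devs[@]})) || continue     # port not bound to a netdev driver
    pci_net_devs=("${pci_net_devs[@]##*/}") # keep only the interface names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done
```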
12:51:54 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:23:55.782 12:51:54 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:23:55.782 12:51:54 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:23:55.782 12:51:54 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:55.782 12:51:54 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:55.782 12:51:54 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:55.782 12:51:54 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:55.782 12:51:54 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:55.782 12:51:54 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:55.782 12:51:54 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:55.782 12:51:54 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:55.782 12:51:54 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:55.782 12:51:54 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:55.782 12:51:54 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:55.782 12:51:54 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:55.782 12:51:54 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:55.782 12:51:54 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:55.782 12:51:54 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:55.782 12:51:54 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:55.782 12:51:54 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:55.782 12:51:54 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:55.782 12:51:54 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:55.782 12:51:54 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:55.782 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:55.782 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.299 ms 00:23:55.782 00:23:55.782 --- 10.0.0.2 ping statistics --- 00:23:55.782 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:55.782 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:23:55.782 12:51:54 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:55.782 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:55.782 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.185 ms 00:23:55.782 00:23:55.782 --- 10.0.0.1 ping statistics --- 00:23:55.782 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:55.782 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:23:55.782 12:51:54 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:55.782 12:51:54 -- nvmf/common.sh@411 -- # return 0 00:23:55.782 12:51:54 -- nvmf/common.sh@439 -- # '[' iso == iso ']' 00:23:55.782 12:51:54 -- nvmf/common.sh@440 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:23:56.717 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:23:56.717 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:23:56.717 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:23:56.717 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:23:56.717 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:23:56.975 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:23:56.975 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:23:56.975 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:23:56.975 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:23:56.975 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:23:56.975 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:23:56.975 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:23:56.975 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:23:56.975 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:23:56.975 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:23:56.975 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:23:58.878 0000:81:00.0 (8086 0a54): nvme -> vfio-pci 00:23:58.878 12:51:57 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:58.878 12:51:57 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:23:58.878 12:51:57 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:23:58.878 12:51:57 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:58.878 12:51:57 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:23:58.878 12:51:57 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:23:58.878 12:51:57 -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:23:58.878 12:51:57 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:23:58.878 12:51:57 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:58.878 12:51:57 -- common/autotest_common.sh@10 -- # set +x 00:23:58.878 12:51:57 -- nvmf/common.sh@470 -- # nvmfpid=1292916 00:23:58.878 12:51:57 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:23:58.878 12:51:57 -- nvmf/common.sh@471 -- # waitforlisten 1292916 00:23:58.878 12:51:57 -- common/autotest_common.sh@817 -- # '[' -z 1292916 ']' 00:23:58.878 12:51:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:58.878 12:51:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:58.878 12:51:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:58.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:58.878 12:51:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:58.878 12:51:57 -- common/autotest_common.sh@10 -- # set +x 00:23:58.878 [2024-04-16 12:51:57.926595] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 
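The `nvmf_tcp_init` block traced just before the pings is what lets a single dual-port NIC act as both initiator and target on one host: one port is moved into a private network namespace and each side gets an address on 10.0.0.0/24. Condensed from the commands visible in the trace (interface and namespace names are the ones this run used):

```bash
TARGET_NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add "$TARGET_NS"
ip link set cvl_0_0 netns "$TARGET_NS"          # target port into the netns
ip addr add 10.0.0.1/24 dev cvl_0_1             # initiator side stays in the root ns
ip netns exec "$TARGET_NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$TARGET_NS" ip link set cvl_0_0 up
ip netns exec "$TARGET_NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                              # sanity: initiator reaches target
ip netns exec "$TARGET_NS" ping -c 1 10.0.0.1   # and the reverse path
```

This is also why the target app below is launched as `ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt`: the listener at 10.0.0.2:4420 lives in the namespace and is reached over the physical link from the root namespace.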
00:23:58.878 [2024-04-16 12:51:57.926678] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:59.137 EAL: No free 2048 kB hugepages reported on node 1 00:23:59.137 [2024-04-16 12:51:58.001999] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:59.137 [2024-04-16 12:51:58.110640] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:59.137 [2024-04-16 12:51:58.110696] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:59.137 [2024-04-16 12:51:58.110712] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:59.137 [2024-04-16 12:51:58.110726] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:59.137 [2024-04-16 12:51:58.110738] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:59.137 [2024-04-16 12:51:58.110805] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:59.137 [2024-04-16 12:51:58.110881] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:59.137 [2024-04-16 12:51:58.110977] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:59.137 [2024-04-16 12:51:58.110980] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:00.104 12:51:58 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:00.104 12:51:58 -- common/autotest_common.sh@850 -- # return 0 00:24:00.104 12:51:58 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:24:00.104 12:51:58 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:00.104 12:51:58 -- common/autotest_common.sh@10 -- # set +x 00:24:00.104 12:51:58 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:00.104 12:51:58 -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:24:00.104 12:51:58 -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:24:00.104 12:51:58 -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:24:00.104 12:51:58 -- scripts/common.sh@309 -- # local bdf bdfs 00:24:00.104 12:51:58 -- scripts/common.sh@310 -- # local nvmes 00:24:00.104 12:51:58 -- scripts/common.sh@312 -- # [[ -n 0000:81:00.0 ]] 00:24:00.104 12:51:58 -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:24:00.104 12:51:58 -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:24:00.104 12:51:58 -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:81:00.0 ]] 00:24:00.104 12:51:58 -- scripts/common.sh@320 -- # uname -s 00:24:00.104 12:51:58 -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:24:00.104 12:51:58 -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:24:00.104 12:51:58 -- scripts/common.sh@325 -- # (( 1 )) 00:24:00.104 12:51:58 -- scripts/common.sh@326 -- # printf '%s\n' 0000:81:00.0 00:24:00.104 12:51:58 -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:24:00.104 12:51:58 -- target/abort_qd_sizes.sh@78 -- # nvme=0000:81:00.0 00:24:00.104 12:51:58 -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:24:00.104 12:51:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:24:00.104 12:51:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:00.104 12:51:58 -- 
common/autotest_common.sh@10 -- # set +x 00:24:00.104 ************************************ 00:24:00.104 START TEST spdk_target_abort 00:24:00.104 ************************************ 00:24:00.104 12:51:58 -- common/autotest_common.sh@1111 -- # spdk_target 00:24:00.104 12:51:58 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:24:00.104 12:51:58 -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:81:00.0 -b spdk_target 00:24:00.104 12:51:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:00.104 12:51:58 -- common/autotest_common.sh@10 -- # set +x 00:24:03.383 spdk_targetn1 00:24:03.383 12:52:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:03.383 12:52:01 -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:03.383 12:52:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:03.383 12:52:01 -- common/autotest_common.sh@10 -- # set +x 00:24:03.383 [2024-04-16 12:52:01.823955] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:03.383 12:52:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:03.383 12:52:01 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:24:03.383 12:52:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:03.383 12:52:01 -- common/autotest_common.sh@10 -- # set +x 00:24:03.383 12:52:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:03.383 12:52:01 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:24:03.383 12:52:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:03.383 12:52:01 -- common/autotest_common.sh@10 -- # set +x 00:24:03.383 12:52:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:03.383 12:52:01 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:24:03.383 12:52:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:03.383 12:52:01 -- common/autotest_common.sh@10 -- # set +x 00:24:03.383 [2024-04-16 12:52:01.856228] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:03.383 12:52:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:03.383 12:52:01 -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:24:03.383 12:52:01 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:24:03.383 12:52:01 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:24:03.383 12:52:01 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:24:03.383 12:52:01 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:24:03.383 12:52:01 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:24:03.383 12:52:01 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:24:03.383 12:52:01 -- target/abort_qd_sizes.sh@24 -- # local target r 00:24:03.383 12:52:01 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:24:03.383 12:52:01 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:03.383 12:52:01 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:24:03.383 12:52:01 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:03.383 12:52:01 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:24:03.383 12:52:01 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 
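The loop being traced on either side of this note (`abort_qd_sizes.sh@28-29`) folds the five transport parameters into one `-r` connection string, and the test then fires SPDK's abort example once per queue depth. Roughly, with `$SPDK` standing in for this run's workspace checkout:

```bash
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # this run's checkout
target="trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn"
for qd in 4 24 64; do
    # -q: queue depth under test; -w rw -M 50: mixed read/write; -o 4096: 4 KiB I/Os
    "$SPDK/build/examples/abort" -q "$qd" -w rw -M 50 -o 4096 -r "$target"
done
```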
00:24:03.383 12:52:01 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:24:03.383 12:52:01 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:03.383 12:52:01 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:03.383 12:52:01 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:03.383 12:52:01 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:03.383 12:52:01 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:03.383 12:52:01 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:03.383 EAL: No free 2048 kB hugepages reported on node 1 00:24:06.662 Initializing NVMe Controllers 00:24:06.662 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:24:06.662 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:06.662 Initialization complete. Launching workers. 00:24:06.662 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 9546, failed: 0 00:24:06.662 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1370, failed to submit 8176 00:24:06.662 success 799, unsuccess 571, failed 0 00:24:06.662 12:52:05 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:06.662 12:52:05 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:06.662 EAL: No free 2048 kB hugepages reported on node 1 00:24:09.943 Initializing NVMe Controllers 00:24:09.943 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:24:09.943 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:09.943 Initialization complete. Launching workers. 00:24:09.943 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8486, failed: 0 00:24:09.943 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1253, failed to submit 7233 00:24:09.943 success 335, unsuccess 918, failed 0 00:24:09.943 12:52:08 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:09.943 12:52:08 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:09.943 EAL: No free 2048 kB hugepages reported on node 1 00:24:13.224 Initializing NVMe Controllers 00:24:13.224 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:24:13.224 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:13.224 Initialization complete. Launching workers. 
00:24:13.224 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31202, failed: 0 00:24:13.224 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2786, failed to submit 28416 00:24:13.224 success 553, unsuccess 2233, failed 0 00:24:13.224 12:52:11 -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:24:13.224 12:52:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:13.224 12:52:11 -- common/autotest_common.sh@10 -- # set +x 00:24:13.224 12:52:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:13.224 12:52:11 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:24:13.224 12:52:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:13.224 12:52:11 -- common/autotest_common.sh@10 -- # set +x 00:24:15.157 12:52:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:15.157 12:52:14 -- target/abort_qd_sizes.sh@61 -- # killprocess 1292916 00:24:15.157 12:52:14 -- common/autotest_common.sh@936 -- # '[' -z 1292916 ']' 00:24:15.157 12:52:14 -- common/autotest_common.sh@940 -- # kill -0 1292916 00:24:15.157 12:52:14 -- common/autotest_common.sh@941 -- # uname 00:24:15.157 12:52:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:15.157 12:52:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1292916 00:24:15.157 12:52:14 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:15.157 12:52:14 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:15.157 12:52:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1292916' 00:24:15.157 killing process with pid 1292916 00:24:15.157 12:52:14 -- common/autotest_common.sh@955 -- # kill 1292916 00:24:15.157 12:52:14 -- common/autotest_common.sh@960 -- # wait 1292916 00:24:15.416 00:24:15.416 real 0m15.341s 00:24:15.416 user 1m0.674s 00:24:15.416 sys 0m2.962s 00:24:15.416 12:52:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:15.416 12:52:14 -- common/autotest_common.sh@10 -- # set +x 00:24:15.416 ************************************ 00:24:15.416 END TEST spdk_target_abort 00:24:15.416 ************************************ 00:24:15.416 12:52:14 -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:24:15.416 12:52:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:24:15.416 12:52:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:15.416 12:52:14 -- common/autotest_common.sh@10 -- # set +x 00:24:15.416 ************************************ 00:24:15.416 START TEST kernel_target_abort 00:24:15.416 ************************************ 00:24:15.416 12:52:14 -- common/autotest_common.sh@1111 -- # kernel_target 00:24:15.416 12:52:14 -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:24:15.416 12:52:14 -- nvmf/common.sh@717 -- # local ip 00:24:15.416 12:52:14 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:15.416 12:52:14 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:15.416 12:52:14 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:15.416 12:52:14 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:15.416 12:52:14 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:15.416 12:52:14 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:15.416 12:52:14 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:15.416 12:52:14 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:15.416 12:52:14 -- nvmf/common.sh@731 -- # 
echo 10.0.0.1 00:24:15.416 12:52:14 -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:24:15.416 12:52:14 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:24:15.416 12:52:14 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:24:15.416 12:52:14 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:15.416 12:52:14 -- nvmf/common.sh@625 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:15.416 12:52:14 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:24:15.416 12:52:14 -- nvmf/common.sh@628 -- # local block nvme 00:24:15.416 12:52:14 -- nvmf/common.sh@630 -- # [[ ! -e /sys/module/nvmet ]] 00:24:15.416 12:52:14 -- nvmf/common.sh@631 -- # modprobe nvmet 00:24:15.416 12:52:14 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:24:15.416 12:52:14 -- nvmf/common.sh@636 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:24:16.788 Waiting for block devices as requested 00:24:16.788 0000:81:00.0 (8086 0a54): vfio-pci -> nvme 00:24:17.045 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:24:17.045 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:24:17.302 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:24:17.302 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:24:17.302 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:24:17.302 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:24:17.560 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:24:17.560 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:24:17.560 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:24:17.560 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:24:17.818 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:24:17.819 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:24:17.819 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:24:17.819 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:24:18.077 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:24:18.077 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:24:18.077 12:52:17 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:24:18.077 12:52:17 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:24:18.077 12:52:17 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:24:18.077 12:52:17 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:24:18.077 12:52:17 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:24:18.077 12:52:17 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:24:18.077 12:52:17 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:24:18.077 12:52:17 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:24:18.077 12:52:17 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:24:18.334 No valid GPT data, bailing 00:24:18.334 12:52:17 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:24:18.334 12:52:17 -- scripts/common.sh@391 -- # pt= 00:24:18.334 12:52:17 -- scripts/common.sh@392 -- # return 1 00:24:18.334 12:52:17 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:24:18.334 12:52:17 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme0n1 ]] 00:24:18.334 12:52:17 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:18.334 12:52:17 -- nvmf/common.sh@648 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:18.334 12:52:17 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:24:18.334 12:52:17 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:24:18.334 12:52:17 -- nvmf/common.sh@656 -- # echo 1 00:24:18.334 12:52:17 -- nvmf/common.sh@657 -- # echo /dev/nvme0n1 00:24:18.334 12:52:17 -- nvmf/common.sh@658 -- # echo 1 00:24:18.334 12:52:17 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:24:18.335 12:52:17 -- nvmf/common.sh@661 -- # echo tcp 00:24:18.335 12:52:17 -- nvmf/common.sh@662 -- # echo 4420 00:24:18.335 12:52:17 -- nvmf/common.sh@663 -- # echo ipv4 00:24:18.335 12:52:17 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:24:18.335 12:52:17 -- nvmf/common.sh@669 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -a 10.0.0.1 -t tcp -s 4420 00:24:18.335 00:24:18.335 Discovery Log Number of Records 2, Generation counter 2 00:24:18.335 =====Discovery Log Entry 0====== 00:24:18.335 trtype: tcp 00:24:18.335 adrfam: ipv4 00:24:18.335 subtype: current discovery subsystem 00:24:18.335 treq: not specified, sq flow control disable supported 00:24:18.335 portid: 1 00:24:18.335 trsvcid: 4420 00:24:18.335 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:24:18.335 traddr: 10.0.0.1 00:24:18.335 eflags: none 00:24:18.335 sectype: none 00:24:18.335 =====Discovery Log Entry 1====== 00:24:18.335 trtype: tcp 00:24:18.335 adrfam: ipv4 00:24:18.335 subtype: nvme subsystem 00:24:18.335 treq: not specified, sq flow control disable supported 00:24:18.335 portid: 1 00:24:18.335 trsvcid: 4420 00:24:18.335 subnqn: nqn.2016-06.io.spdk:testnqn 00:24:18.335 traddr: 10.0.0.1 00:24:18.335 eflags: none 00:24:18.335 sectype: none 00:24:18.335 12:52:17 -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:24:18.335 12:52:17 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:24:18.335 12:52:17 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:24:18.335 12:52:17 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:24:18.335 12:52:17 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:24:18.335 12:52:17 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:24:18.335 12:52:17 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:24:18.335 12:52:17 -- target/abort_qd_sizes.sh@24 -- # local target r 00:24:18.335 12:52:17 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:24:18.335 12:52:17 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:18.335 12:52:17 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:24:18.335 12:52:17 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:18.335 12:52:17 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:24:18.335 12:52:17 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:18.335 12:52:17 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:24:18.335 12:52:17 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:18.335 12:52:17 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:24:18.335 12:52:17 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr 
trsvcid subnqn 00:24:18.335 12:52:17 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:18.335 12:52:17 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:18.335 12:52:17 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:18.335 EAL: No free 2048 kB hugepages reported on node 1 00:24:21.615 Initializing NVMe Controllers 00:24:21.615 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:24:21.615 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:21.615 Initialization complete. Launching workers. 00:24:21.615 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 35515, failed: 0 00:24:21.615 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 35515, failed to submit 0 00:24:21.615 success 0, unsuccess 35515, failed 0 00:24:21.615 12:52:20 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:21.615 12:52:20 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:21.615 EAL: No free 2048 kB hugepages reported on node 1 00:24:24.894 Initializing NVMe Controllers 00:24:24.894 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:24:24.894 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:24.894 Initialization complete. Launching workers. 00:24:24.894 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 70772, failed: 0 00:24:24.894 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 17858, failed to submit 52914 00:24:24.894 success 0, unsuccess 17858, failed 0 00:24:24.894 12:52:23 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:24.894 12:52:23 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:24.894 EAL: No free 2048 kB hugepages reported on node 1 00:24:28.180 Initializing NVMe Controllers 00:24:28.180 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:24:28.180 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:28.180 Initialization complete. Launching workers. 
00:24:28.180 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 64943, failed: 0 00:24:28.180 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 16210, failed to submit 48733 00:24:28.180 success 0, unsuccess 16210, failed 0 00:24:28.180 12:52:26 -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:24:28.180 12:52:26 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:24:28.180 12:52:26 -- nvmf/common.sh@675 -- # echo 0 00:24:28.180 12:52:26 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:28.180 12:52:26 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:28.180 12:52:26 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:24:28.180 12:52:26 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:28.180 12:52:26 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:24:28.180 12:52:26 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:24:28.180 12:52:26 -- nvmf/common.sh@687 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:24:29.116 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:24:29.116 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:24:29.116 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:24:29.116 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:24:29.116 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:24:29.116 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:24:29.116 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:24:29.116 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:24:29.116 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:24:29.116 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:24:29.116 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:24:29.116 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:24:29.116 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:24:29.116 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:24:29.116 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:24:29.116 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:24:31.024 0000:81:00.0 (8086 0a54): nvme -> vfio-pci 00:24:31.024 00:24:31.024 real 0m15.598s 00:24:31.024 user 0m5.467s 00:24:31.024 sys 0m3.654s 00:24:31.024 12:52:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:31.024 12:52:30 -- common/autotest_common.sh@10 -- # set +x 00:24:31.024 ************************************ 00:24:31.024 END TEST kernel_target_abort 00:24:31.024 ************************************ 00:24:31.024 12:52:30 -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:24:31.024 12:52:30 -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:24:31.024 12:52:30 -- nvmf/common.sh@477 -- # nvmfcleanup 00:24:31.024 12:52:30 -- nvmf/common.sh@117 -- # sync 00:24:31.024 12:52:30 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:31.024 12:52:30 -- nvmf/common.sh@120 -- # set +e 00:24:31.024 12:52:30 -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:31.024 12:52:30 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:31.024 rmmod nvme_tcp 00:24:31.024 rmmod nvme_fabrics 00:24:31.283 rmmod nvme_keyring 00:24:31.283 12:52:30 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:31.283 12:52:30 -- nvmf/common.sh@124 -- # set -e 00:24:31.283 12:52:30 -- nvmf/common.sh@125 -- # return 0 00:24:31.283 12:52:30 -- nvmf/common.sh@478 -- # '[' -n 1292916 ']' 
00:24:31.283 12:52:30 -- nvmf/common.sh@479 -- # killprocess 1292916 00:24:31.283 12:52:30 -- common/autotest_common.sh@936 -- # '[' -z 1292916 ']' 00:24:31.283 12:52:30 -- common/autotest_common.sh@940 -- # kill -0 1292916 00:24:31.283 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (1292916) - No such process 00:24:31.283 12:52:30 -- common/autotest_common.sh@963 -- # echo 'Process with pid 1292916 is not found' 00:24:31.283 Process with pid 1292916 is not found 00:24:31.283 12:52:30 -- nvmf/common.sh@481 -- # '[' iso == iso ']' 00:24:31.283 12:52:30 -- nvmf/common.sh@482 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:24:32.659 Waiting for block devices as requested 00:24:32.659 0000:81:00.0 (8086 0a54): vfio-pci -> nvme 00:24:32.659 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:24:32.659 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:24:32.659 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:24:32.918 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:24:32.918 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:24:32.918 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:24:32.918 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:24:33.176 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:24:33.176 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:24:33.176 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:24:33.176 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:24:33.434 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:24:33.434 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:24:33.434 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:24:33.434 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:24:33.692 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:24:33.692 12:52:32 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:24:33.692 12:52:32 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:24:33.692 12:52:32 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:33.692 12:52:32 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:33.692 12:52:32 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:33.692 12:52:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:24:33.692 12:52:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:36.225 12:52:34 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:36.225 00:24:36.225 real 0m42.795s 00:24:36.225 user 1m8.713s 00:24:36.225 sys 0m10.542s 00:24:36.225 12:52:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:36.225 12:52:34 -- common/autotest_common.sh@10 -- # set +x 00:24:36.225 ************************************ 00:24:36.225 END TEST nvmf_abort_qd_sizes 00:24:36.225 ************************************ 00:24:36.225 12:52:34 -- spdk/autotest.sh@293 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:24:36.225 12:52:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:24:36.225 12:52:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:36.225 12:52:34 -- common/autotest_common.sh@10 -- # set +x 00:24:36.225 ************************************ 00:24:36.225 START TEST keyring_file 00:24:36.225 ************************************ 00:24:36.225 12:52:34 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:24:36.225 * Looking for test storage... 
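`killprocess` (autotest_common.sh@936-963) shows up three times in this section, including the benign failure just above where the nvmf_tgt pid is already gone by teardown time. A sketch reconstructed from the xtrace lines; the real helper may differ in small ways:

```bash
killprocess() {
    local pid=$1
    [[ -n $pid ]] || return 1                 # the '[' -z ... ']' guard in the trace
    if ! kill -0 "$pid" 2>/dev/null; then     # signal 0: probe only, pid may have exited
        echo "Process with pid $pid is not found"
        return 0
    fi
    local process_name=
    if [[ $(uname) == Linux ]]; then
        process_name=$(ps --no-headers -o comm= "$pid")
    fi
    [[ $process_name != sudo ]] || return 1   # never signal our own sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                               # reap, so the test observes the exit
}
```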
00:24:36.225 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:24:36.225 12:52:34 -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:24:36.225 12:52:34 -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:36.225 12:52:34 -- nvmf/common.sh@7 -- # uname -s 00:24:36.225 12:52:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:36.225 12:52:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:36.225 12:52:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:36.225 12:52:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:36.225 12:52:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:36.225 12:52:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:36.225 12:52:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:36.225 12:52:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:36.225 12:52:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:36.225 12:52:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:36.225 12:52:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:24:36.225 12:52:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:24:36.225 12:52:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:36.225 12:52:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:36.225 12:52:34 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:36.225 12:52:34 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:36.225 12:52:34 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:36.225 12:52:34 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:36.225 12:52:34 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:36.225 12:52:34 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:36.225 12:52:34 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.225 12:52:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.225 12:52:34 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.225 12:52:34 -- paths/export.sh@5 -- # export PATH 00:24:36.225 12:52:34 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.225 12:52:34 -- nvmf/common.sh@47 -- # : 0 00:24:36.225 12:52:34 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:36.225 12:52:34 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:36.225 12:52:34 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:36.225 12:52:34 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:36.225 12:52:34 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:36.225 12:52:34 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:36.225 12:52:34 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:36.225 12:52:34 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:36.225 12:52:34 -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:24:36.225 12:52:34 -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:24:36.225 12:52:34 -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:24:36.225 12:52:34 -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:24:36.225 12:52:34 -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:24:36.225 12:52:34 -- keyring/file.sh@24 -- # trap cleanup EXIT 00:24:36.225 12:52:34 -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:24:36.225 12:52:34 -- keyring/common.sh@15 -- # local name key digest path 00:24:36.225 12:52:34 -- keyring/common.sh@17 -- # name=key0 00:24:36.225 12:52:34 -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:24:36.225 12:52:34 -- keyring/common.sh@17 -- # digest=0 00:24:36.225 12:52:34 -- keyring/common.sh@18 -- # mktemp 00:24:36.225 12:52:34 -- keyring/common.sh@18 -- # path=/tmp/tmp.fnPHm5T9ZS 00:24:36.225 12:52:34 -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:24:36.225 12:52:34 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:24:36.225 12:52:34 -- nvmf/common.sh@691 -- # local prefix key digest 00:24:36.225 12:52:34 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:24:36.225 12:52:34 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:24:36.225 12:52:34 -- nvmf/common.sh@693 -- # digest=0 00:24:36.225 12:52:34 -- nvmf/common.sh@694 -- # python - 00:24:36.225 12:52:34 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.fnPHm5T9ZS 00:24:36.225 12:52:34 -- keyring/common.sh@23 -- # echo /tmp/tmp.fnPHm5T9ZS 00:24:36.225 12:52:34 -- keyring/file.sh@26 -- # key0path=/tmp/tmp.fnPHm5T9ZS 00:24:36.225 12:52:34 -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:24:36.225 12:52:34 -- keyring/common.sh@15 -- # local name key digest path 00:24:36.225 12:52:34 -- keyring/common.sh@17 -- # name=key1 00:24:36.225 12:52:34 -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:24:36.225 12:52:34 -- keyring/common.sh@17 -- # digest=0 00:24:36.225 12:52:34 -- keyring/common.sh@18 -- # mktemp 00:24:36.225 12:52:34 -- keyring/common.sh@18 -- # path=/tmp/tmp.KF7IxFtwZj 00:24:36.226 12:52:34 -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:24:36.226 12:52:34 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 
112233445566778899aabbccddeeff00 0 00:24:36.226 12:52:34 -- nvmf/common.sh@691 -- # local prefix key digest 00:24:36.226 12:52:34 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:24:36.226 12:52:34 -- nvmf/common.sh@693 -- # key=112233445566778899aabbccddeeff00 00:24:36.226 12:52:34 -- nvmf/common.sh@693 -- # digest=0 00:24:36.226 12:52:34 -- nvmf/common.sh@694 -- # python - 00:24:36.226 12:52:34 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.KF7IxFtwZj 00:24:36.226 12:52:34 -- keyring/common.sh@23 -- # echo /tmp/tmp.KF7IxFtwZj 00:24:36.226 12:52:34 -- keyring/file.sh@27 -- # key1path=/tmp/tmp.KF7IxFtwZj 00:24:36.226 12:52:34 -- keyring/file.sh@30 -- # tgtpid=1299273 00:24:36.226 12:52:34 -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:24:36.226 12:52:34 -- keyring/file.sh@32 -- # waitforlisten 1299273 00:24:36.226 12:52:34 -- common/autotest_common.sh@817 -- # '[' -z 1299273 ']' 00:24:36.226 12:52:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:36.226 12:52:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:36.226 12:52:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:36.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:36.226 12:52:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:36.226 12:52:34 -- common/autotest_common.sh@10 -- # set +x 00:24:36.226 [2024-04-16 12:52:35.026990] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 00:24:36.226 [2024-04-16 12:52:35.027070] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1299273 ] 00:24:36.226 EAL: No free 2048 kB hugepages reported on node 1 00:24:36.226 [2024-04-16 12:52:35.098657] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:36.226 [2024-04-16 12:52:35.213533] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:37.159 12:52:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:37.159 12:52:35 -- common/autotest_common.sh@850 -- # return 0 00:24:37.159 12:52:35 -- keyring/file.sh@33 -- # rpc_cmd 00:24:37.159 12:52:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:37.159 12:52:35 -- common/autotest_common.sh@10 -- # set +x 00:24:37.159 [2024-04-16 12:52:35.964017] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:37.159 null0 00:24:37.159 [2024-04-16 12:52:35.996081] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:37.159 [2024-04-16 12:52:35.996969] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:24:37.159 [2024-04-16 12:52:36.004093] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:24:37.159 12:52:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:37.159 12:52:36 -- keyring/file.sh@43 -- # bperfpid=1299408 00:24:37.159 12:52:36 -- keyring/file.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:24:37.159 12:52:36 -- keyring/file.sh@45 -- # waitforlisten 1299408 /var/tmp/bperf.sock 00:24:37.159 12:52:36 -- common/autotest_common.sh@817 -- # '[' 
-z 1299408 ']' 00:24:37.159 12:52:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:37.159 12:52:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:37.159 12:52:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:37.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:37.159 12:52:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:37.159 12:52:36 -- common/autotest_common.sh@10 -- # set +x 00:24:37.159 [2024-04-16 12:52:36.049052] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 00:24:37.159 [2024-04-16 12:52:36.049119] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1299408 ] 00:24:37.159 EAL: No free 2048 kB hugepages reported on node 1 00:24:37.159 [2024-04-16 12:52:36.118775] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:37.420 [2024-04-16 12:52:36.236980] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:37.985 12:52:37 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:37.985 12:52:37 -- common/autotest_common.sh@850 -- # return 0 00:24:37.985 12:52:37 -- keyring/file.sh@46 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.fnPHm5T9ZS 00:24:37.985 12:52:37 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.fnPHm5T9ZS 00:24:38.243 12:52:37 -- keyring/file.sh@47 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.KF7IxFtwZj 00:24:38.243 12:52:37 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.KF7IxFtwZj 00:24:38.501 12:52:37 -- keyring/file.sh@48 -- # get_key key0 00:24:38.501 12:52:37 -- keyring/file.sh@48 -- # jq -r .path 00:24:38.501 12:52:37 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:38.501 12:52:37 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:38.501 12:52:37 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:38.759 12:52:37 -- keyring/file.sh@48 -- # [[ /tmp/tmp.fnPHm5T9ZS == \/\t\m\p\/\t\m\p\.\f\n\P\H\m\5\T\9\Z\S ]] 00:24:38.759 12:52:37 -- keyring/file.sh@49 -- # get_key key1 00:24:38.759 12:52:37 -- keyring/file.sh@49 -- # jq -r .path 00:24:38.759 12:52:37 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:38.759 12:52:37 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:38.759 12:52:37 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:24:39.017 12:52:37 -- keyring/file.sh@49 -- # [[ /tmp/tmp.KF7IxFtwZj == \/\t\m\p\/\t\m\p\.\K\F\7\I\x\F\t\w\Z\j ]] 00:24:39.017 12:52:37 -- keyring/file.sh@50 -- # get_refcnt key0 00:24:39.017 12:52:37 -- keyring/common.sh@12 -- # get_key key0 00:24:39.017 12:52:37 -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:39.017 12:52:37 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:39.017 12:52:37 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:39.017 12:52:37 -- 
keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:39.275 12:52:38 -- keyring/file.sh@50 -- # (( 1 == 1 )) 00:24:39.275 12:52:38 -- keyring/file.sh@51 -- # get_refcnt key1 00:24:39.275 12:52:38 -- keyring/common.sh@12 -- # get_key key1 00:24:39.275 12:52:38 -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:39.275 12:52:38 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:39.275 12:52:38 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:39.275 12:52:38 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:24:39.533 12:52:38 -- keyring/file.sh@51 -- # (( 1 == 1 )) 00:24:39.533 12:52:38 -- keyring/file.sh@54 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:39.533 12:52:38 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:39.791 [2024-04-16 12:52:38.685499] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:39.791 nvme0n1 00:24:39.791 12:52:38 -- keyring/file.sh@56 -- # get_refcnt key0 00:24:39.791 12:52:38 -- keyring/common.sh@12 -- # get_key key0 00:24:39.791 12:52:38 -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:39.791 12:52:38 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:39.791 12:52:38 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:39.791 12:52:38 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:40.050 12:52:39 -- keyring/file.sh@56 -- # (( 2 == 2 )) 00:24:40.050 12:52:39 -- keyring/file.sh@57 -- # get_refcnt key1 00:24:40.050 12:52:39 -- keyring/common.sh@12 -- # get_key key1 00:24:40.050 12:52:39 -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:40.050 12:52:39 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:40.050 12:52:39 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:40.050 12:52:39 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:24:40.308 12:52:39 -- keyring/file.sh@57 -- # (( 1 == 1 )) 00:24:40.308 12:52:39 -- keyring/file.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:40.566 Running I/O for 1 seconds... 
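The keyring setup traced above reduces to four steps: write a PSK file in the NVMe TLS interchange format, restrict it to mode 0600, register it under a name with keyring_file_add_key, and reference that name via --psk when attaching the controller. A minimal sketch, with rpc.py invoked relative to an SPDK checkout; the interchange encoding (the hex string's ASCII bytes as the PSK plus a little-endian CRC-32, base64-wrapped) is inferred from the traced format_interchange_psk helper and is an assumption:

    # All RPCs go to the bdevperf app's socket, as in the trace.
    rpc() { ./scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }

    key_hex=00112233445566778899aabbccddeeff
    key_path=$(mktemp)           # e.g. /tmp/tmp.fnPHm5T9ZS above

    # Wrap the key as "NVMeTLSkey-1:00:<base64(PSK || crc32)>:"; digest 00 = none.
    python3 -c 'import base64,sys,zlib;k=sys.argv[1].encode();print("NVMeTLSkey-1:00:%s:"%base64.b64encode(k+zlib.crc32(k).to_bytes(4,"little")).decode())' "$key_hex" > "$key_path"

    chmod 0600 "$key_path"       # the keyring rejects wider permissions

    rpc keyring_file_add_key key0 "$key_path"
    rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 \
        --psk key0

A successful attach takes a second reference on key0, which the test checks by piping keyring_get_keys through jq. The run that follows also proves the negative case, attaching with the wrong key (key1) fails the TLS handshake and surfaces as the -32602 JSON-RPC error shown below, and later a save_config dump is fed back to a fresh bdevperf via -c /dev/fd/63 to show that registered keys survive a config round-trip.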
00:24:41.500
00:24:41.500 Latency(us)
00:24:41.500 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:41.500 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096)
00:24:41.500 nvme0n1 : 1.01 5493.73 21.46 0.00 0.00 23205.93 5121.52 36117.62
00:24:41.500 ===================================================================================================================
00:24:41.500 Total : 5493.73 21.46 0.00 0.00 23205.93 5121.52 36117.62
00:24:41.500 0
00:24:41.500 12:52:40 -- keyring/file.sh@61 -- # bperf_cmd bdev_nvme_detach_controller nvme0
00:24:41.500 12:52:40 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0
00:24:41.758 12:52:40 -- keyring/file.sh@62 -- # get_refcnt key0
00:24:41.758 12:52:40 -- keyring/common.sh@12 -- # get_key key0
00:24:41.758 12:52:40 -- keyring/common.sh@12 -- # jq -r .refcnt
00:24:41.758 12:52:40 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:24:41.758 12:52:40 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:24:41.758 12:52:40 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:24:42.016 12:52:40 -- keyring/file.sh@62 -- # (( 1 == 1 ))
00:24:42.016 12:52:40 -- keyring/file.sh@63 -- # get_refcnt key1
00:24:42.016 12:52:40 -- keyring/common.sh@12 -- # get_key key1
00:24:42.016 12:52:40 -- keyring/common.sh@12 -- # jq -r .refcnt
00:24:42.016 12:52:40 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:24:42.016 12:52:40 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:24:42.016 12:52:40 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")'
00:24:42.274 12:52:41 -- keyring/file.sh@63 -- # (( 1 == 1 ))
00:24:42.274 12:52:41 -- keyring/file.sh@66 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1
00:24:42.274 12:52:41 -- common/autotest_common.sh@638 -- # local es=0
00:24:42.274 12:52:41 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1
00:24:42.274 12:52:41 -- common/autotest_common.sh@626 -- # local arg=bperf_cmd
00:24:42.274 12:52:41 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in
00:24:42.274 12:52:41 -- common/autotest_common.sh@630 -- # type -t bperf_cmd
00:24:42.274 12:52:41 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in
00:24:42.274 12:52:41 -- common/autotest_common.sh@641 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1
00:24:42.274 12:52:41 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1
00:24:42.531 [2024-04-16 12:52:41.387341] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected
00:24:42.531 [2024-04-16 12:52:41.387865] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d58a40 (107): Transport endpoint is not connected
00:24:42.531 [2024-04-16 12:52:41.388849] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d58a40 (9): Bad file descriptor
00:24:42.531 [2024-04-16 12:52:41.389844] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
00:24:42.531 [2024-04-16 12:52:41.389885] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1
00:24:42.531 [2024-04-16 12:52:41.389901] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
00:24:42.531 request:
00:24:42.531 {
00:24:42.531 "name": "nvme0",
00:24:42.531 "trtype": "tcp",
00:24:42.531 "traddr": "127.0.0.1",
00:24:42.531 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:24:42.531 "adrfam": "ipv4",
00:24:42.531 "trsvcid": "4420",
00:24:42.531 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:24:42.531 "psk": "key1",
00:24:42.531 "method": "bdev_nvme_attach_controller",
00:24:42.531 "req_id": 1
00:24:42.531 }
00:24:42.531 Got JSON-RPC error response
00:24:42.531 response:
00:24:42.531 {
00:24:42.531 "code": -32602,
00:24:42.531 "message": "Invalid parameters"
00:24:42.531 }
00:24:42.531 12:52:41 -- common/autotest_common.sh@641 -- # es=1
00:24:42.531 12:52:41 -- common/autotest_common.sh@649 -- # (( es > 128 ))
00:24:42.531 12:52:41 -- common/autotest_common.sh@660 -- # [[ -n '' ]]
00:24:42.531 12:52:41 -- common/autotest_common.sh@665 -- # (( !es == 0 ))
00:24:42.531 12:52:41 -- keyring/file.sh@68 -- # get_refcnt key0
00:24:42.531 12:52:41 -- keyring/common.sh@12 -- # get_key key0
00:24:42.531 12:52:41 -- keyring/common.sh@12 -- # jq -r .refcnt
00:24:42.531 12:52:41 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:24:42.531 12:52:41 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:24:42.531 12:52:41 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:24:42.788 12:52:41 -- keyring/file.sh@68 -- # (( 1 == 1 ))
00:24:42.788 12:52:41 -- keyring/file.sh@69 -- # get_refcnt key1
00:24:42.788 12:52:41 -- keyring/common.sh@12 -- # get_key key1
00:24:42.788 12:52:41 -- keyring/common.sh@12 -- # jq -r .refcnt
00:24:42.788 12:52:41 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:24:42.788 12:52:41 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")'
00:24:42.788 12:52:41 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:24:43.045 12:52:41 -- keyring/file.sh@69 -- # (( 1 == 1 ))
00:24:43.045 12:52:41 -- keyring/file.sh@72 -- # bperf_cmd keyring_file_remove_key key0
00:24:43.045 12:52:41 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0
00:24:43.303 12:52:42 -- keyring/file.sh@73 -- # bperf_cmd keyring_file_remove_key key1
00:24:43.303 12:52:42 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1
00:24:43.561 12:52:42 -- keyring/file.sh@74 -- # bperf_cmd keyring_get_keys
00:24:43.561 12:52:42 -- keyring/file.sh@74 -- # jq length
00:24:43.561 12:52:42 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:24:43.818 12:52:42
-- keyring/file.sh@74 -- # (( 0 == 0 )) 00:24:43.818 12:52:42 -- keyring/file.sh@77 -- # chmod 0660 /tmp/tmp.fnPHm5T9ZS 00:24:43.818 12:52:42 -- keyring/file.sh@78 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.fnPHm5T9ZS 00:24:43.818 12:52:42 -- common/autotest_common.sh@638 -- # local es=0 00:24:43.818 12:52:42 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.fnPHm5T9ZS 00:24:43.818 12:52:42 -- common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:24:43.818 12:52:42 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:43.818 12:52:42 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:24:43.818 12:52:42 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:43.818 12:52:42 -- common/autotest_common.sh@641 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.fnPHm5T9ZS 00:24:43.818 12:52:42 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.fnPHm5T9ZS 00:24:43.818 [2024-04-16 12:52:42.851695] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.fnPHm5T9ZS': 0100660 00:24:43.818 [2024-04-16 12:52:42.851730] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:24:43.818 request: 00:24:43.818 { 00:24:43.818 "name": "key0", 00:24:43.818 "path": "/tmp/tmp.fnPHm5T9ZS", 00:24:43.818 "method": "keyring_file_add_key", 00:24:43.819 "req_id": 1 00:24:43.819 } 00:24:43.819 Got JSON-RPC error response 00:24:43.819 response: 00:24:43.819 { 00:24:43.819 "code": -1, 00:24:43.819 "message": "Operation not permitted" 00:24:43.819 } 00:24:43.819 12:52:42 -- common/autotest_common.sh@641 -- # es=1 00:24:43.819 12:52:42 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:24:43.819 12:52:42 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:24:43.819 12:52:42 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:24:43.819 12:52:42 -- keyring/file.sh@81 -- # chmod 0600 /tmp/tmp.fnPHm5T9ZS 00:24:43.819 12:52:42 -- keyring/file.sh@82 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.fnPHm5T9ZS 00:24:43.819 12:52:42 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.fnPHm5T9ZS 00:24:44.077 12:52:43 -- keyring/file.sh@83 -- # rm -f /tmp/tmp.fnPHm5T9ZS 00:24:44.077 12:52:43 -- keyring/file.sh@85 -- # get_refcnt key0 00:24:44.077 12:52:43 -- keyring/common.sh@12 -- # get_key key0 00:24:44.077 12:52:43 -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:44.077 12:52:43 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:44.077 12:52:43 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:44.077 12:52:43 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:44.335 12:52:43 -- keyring/file.sh@85 -- # (( 1 == 1 )) 00:24:44.335 12:52:43 -- keyring/file.sh@87 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:44.335 12:52:43 -- common/autotest_common.sh@638 -- # local es=0 00:24:44.335 12:52:43 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:44.335 12:52:43 -- 
common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:24:44.335 12:52:43 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:44.335 12:52:43 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:24:44.335 12:52:43 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:44.335 12:52:43 -- common/autotest_common.sh@641 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:44.335 12:52:43 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:44.593 [2024-04-16 12:52:43.585689] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.fnPHm5T9ZS': No such file or directory 00:24:44.593 [2024-04-16 12:52:43.585722] nvme_tcp.c:2570:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:24:44.593 [2024-04-16 12:52:43.585748] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:24:44.593 [2024-04-16 12:52:43.585759] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:44.593 [2024-04-16 12:52:43.585770] bdev_nvme.c:6191:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:24:44.593 request: 00:24:44.593 { 00:24:44.593 "name": "nvme0", 00:24:44.593 "trtype": "tcp", 00:24:44.593 "traddr": "127.0.0.1", 00:24:44.593 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:44.593 "adrfam": "ipv4", 00:24:44.593 "trsvcid": "4420", 00:24:44.593 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:44.593 "psk": "key0", 00:24:44.593 "method": "bdev_nvme_attach_controller", 00:24:44.593 "req_id": 1 00:24:44.593 } 00:24:44.593 Got JSON-RPC error response 00:24:44.593 response: 00:24:44.593 { 00:24:44.593 "code": -19, 00:24:44.593 "message": "No such device" 00:24:44.593 } 00:24:44.593 12:52:43 -- common/autotest_common.sh@641 -- # es=1 00:24:44.593 12:52:43 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:24:44.593 12:52:43 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:24:44.593 12:52:43 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:24:44.593 12:52:43 -- keyring/file.sh@89 -- # bperf_cmd keyring_file_remove_key key0 00:24:44.593 12:52:43 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:24:44.851 12:52:43 -- keyring/file.sh@92 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:24:44.851 12:52:43 -- keyring/common.sh@15 -- # local name key digest path 00:24:44.851 12:52:43 -- keyring/common.sh@17 -- # name=key0 00:24:44.851 12:52:43 -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:24:44.851 12:52:43 -- keyring/common.sh@17 -- # digest=0 00:24:44.851 12:52:43 -- keyring/common.sh@18 -- # mktemp 00:24:44.851 12:52:43 -- keyring/common.sh@18 -- # path=/tmp/tmp.PbCcNgRYHO 00:24:44.851 12:52:43 -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:24:44.851 12:52:43 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:24:44.851 12:52:43 -- nvmf/common.sh@691 -- # local prefix key digest 00:24:44.851 12:52:43 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:24:44.851 12:52:43 -- 
nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:24:44.851 12:52:43 -- nvmf/common.sh@693 -- # digest=0 00:24:44.851 12:52:43 -- nvmf/common.sh@694 -- # python - 00:24:44.851 12:52:43 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.PbCcNgRYHO 00:24:44.851 12:52:43 -- keyring/common.sh@23 -- # echo /tmp/tmp.PbCcNgRYHO 00:24:44.851 12:52:43 -- keyring/file.sh@92 -- # key0path=/tmp/tmp.PbCcNgRYHO 00:24:44.851 12:52:43 -- keyring/file.sh@93 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.PbCcNgRYHO 00:24:44.851 12:52:43 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.PbCcNgRYHO 00:24:45.109 12:52:44 -- keyring/file.sh@94 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:45.109 12:52:44 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:45.376 nvme0n1 00:24:45.376 12:52:44 -- keyring/file.sh@96 -- # get_refcnt key0 00:24:45.376 12:52:44 -- keyring/common.sh@12 -- # get_key key0 00:24:45.376 12:52:44 -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:45.376 12:52:44 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:45.376 12:52:44 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:45.376 12:52:44 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:45.657 12:52:44 -- keyring/file.sh@96 -- # (( 2 == 2 )) 00:24:45.657 12:52:44 -- keyring/file.sh@97 -- # bperf_cmd keyring_file_remove_key key0 00:24:45.657 12:52:44 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:24:45.915 12:52:44 -- keyring/file.sh@98 -- # get_key key0 00:24:45.915 12:52:44 -- keyring/file.sh@98 -- # jq -r .removed 00:24:45.915 12:52:44 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:45.915 12:52:44 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:45.915 12:52:44 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:46.172 12:52:45 -- keyring/file.sh@98 -- # [[ true == \t\r\u\e ]] 00:24:46.172 12:52:45 -- keyring/file.sh@99 -- # get_refcnt key0 00:24:46.172 12:52:45 -- keyring/common.sh@12 -- # get_key key0 00:24:46.172 12:52:45 -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:46.172 12:52:45 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:46.172 12:52:45 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:46.172 12:52:45 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:46.430 12:52:45 -- keyring/file.sh@99 -- # (( 1 == 1 )) 00:24:46.430 12:52:45 -- keyring/file.sh@100 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:24:46.430 12:52:45 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:24:46.688 12:52:45 -- keyring/file.sh@101 -- # bperf_cmd keyring_get_keys 00:24:46.688 12:52:45 -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:46.688 12:52:45 -- keyring/file.sh@101 -- # jq length 00:24:46.945 12:52:45 -- keyring/file.sh@101 -- # (( 0 == 0 )) 00:24:46.945 12:52:45 -- keyring/file.sh@104 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.PbCcNgRYHO 00:24:46.946 12:52:45 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.PbCcNgRYHO 00:24:47.203 12:52:46 -- keyring/file.sh@105 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.KF7IxFtwZj 00:24:47.203 12:52:46 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.KF7IxFtwZj 00:24:47.461 12:52:46 -- keyring/file.sh@106 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:47.461 12:52:46 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:47.718 nvme0n1 00:24:47.718 12:52:46 -- keyring/file.sh@109 -- # bperf_cmd save_config 00:24:47.719 12:52:46 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:24:47.976 12:52:46 -- keyring/file.sh@109 -- # config='{ 00:24:47.976 "subsystems": [ 00:24:47.976 { 00:24:47.976 "subsystem": "keyring", 00:24:47.976 "config": [ 00:24:47.976 { 00:24:47.976 "method": "keyring_file_add_key", 00:24:47.976 "params": { 00:24:47.976 "name": "key0", 00:24:47.976 "path": "/tmp/tmp.PbCcNgRYHO" 00:24:47.976 } 00:24:47.976 }, 00:24:47.976 { 00:24:47.976 "method": "keyring_file_add_key", 00:24:47.976 "params": { 00:24:47.976 "name": "key1", 00:24:47.976 "path": "/tmp/tmp.KF7IxFtwZj" 00:24:47.976 } 00:24:47.976 } 00:24:47.976 ] 00:24:47.976 }, 00:24:47.976 { 00:24:47.976 "subsystem": "iobuf", 00:24:47.976 "config": [ 00:24:47.976 { 00:24:47.976 "method": "iobuf_set_options", 00:24:47.976 "params": { 00:24:47.976 "small_pool_count": 8192, 00:24:47.976 "large_pool_count": 1024, 00:24:47.976 "small_bufsize": 8192, 00:24:47.976 "large_bufsize": 135168 00:24:47.976 } 00:24:47.976 } 00:24:47.976 ] 00:24:47.976 }, 00:24:47.976 { 00:24:47.976 "subsystem": "sock", 00:24:47.976 "config": [ 00:24:47.976 { 00:24:47.976 "method": "sock_impl_set_options", 00:24:47.976 "params": { 00:24:47.976 "impl_name": "posix", 00:24:47.976 "recv_buf_size": 2097152, 00:24:47.976 "send_buf_size": 2097152, 00:24:47.976 "enable_recv_pipe": true, 00:24:47.976 "enable_quickack": false, 00:24:47.976 "enable_placement_id": 0, 00:24:47.976 "enable_zerocopy_send_server": true, 00:24:47.976 "enable_zerocopy_send_client": false, 00:24:47.976 "zerocopy_threshold": 0, 00:24:47.976 "tls_version": 0, 00:24:47.976 "enable_ktls": false 00:24:47.976 } 00:24:47.976 }, 00:24:47.976 { 00:24:47.976 "method": "sock_impl_set_options", 00:24:47.976 "params": { 00:24:47.976 "impl_name": "ssl", 00:24:47.976 "recv_buf_size": 4096, 00:24:47.976 "send_buf_size": 4096, 00:24:47.976 "enable_recv_pipe": true, 00:24:47.976 "enable_quickack": false, 00:24:47.976 "enable_placement_id": 0, 00:24:47.976 "enable_zerocopy_send_server": true, 00:24:47.976 "enable_zerocopy_send_client": false, 00:24:47.976 "zerocopy_threshold": 
0, 00:24:47.976 "tls_version": 0, 00:24:47.976 "enable_ktls": false 00:24:47.976 } 00:24:47.976 } 00:24:47.976 ] 00:24:47.976 }, 00:24:47.976 { 00:24:47.976 "subsystem": "vmd", 00:24:47.976 "config": [] 00:24:47.976 }, 00:24:47.976 { 00:24:47.976 "subsystem": "accel", 00:24:47.976 "config": [ 00:24:47.976 { 00:24:47.976 "method": "accel_set_options", 00:24:47.976 "params": { 00:24:47.976 "small_cache_size": 128, 00:24:47.976 "large_cache_size": 16, 00:24:47.976 "task_count": 2048, 00:24:47.976 "sequence_count": 2048, 00:24:47.976 "buf_count": 2048 00:24:47.976 } 00:24:47.976 } 00:24:47.976 ] 00:24:47.976 }, 00:24:47.976 { 00:24:47.976 "subsystem": "bdev", 00:24:47.976 "config": [ 00:24:47.976 { 00:24:47.976 "method": "bdev_set_options", 00:24:47.976 "params": { 00:24:47.976 "bdev_io_pool_size": 65535, 00:24:47.976 "bdev_io_cache_size": 256, 00:24:47.976 "bdev_auto_examine": true, 00:24:47.976 "iobuf_small_cache_size": 128, 00:24:47.976 "iobuf_large_cache_size": 16 00:24:47.976 } 00:24:47.976 }, 00:24:47.976 { 00:24:47.976 "method": "bdev_raid_set_options", 00:24:47.976 "params": { 00:24:47.976 "process_window_size_kb": 1024 00:24:47.976 } 00:24:47.976 }, 00:24:47.976 { 00:24:47.976 "method": "bdev_iscsi_set_options", 00:24:47.976 "params": { 00:24:47.976 "timeout_sec": 30 00:24:47.976 } 00:24:47.976 }, 00:24:47.976 { 00:24:47.976 "method": "bdev_nvme_set_options", 00:24:47.976 "params": { 00:24:47.976 "action_on_timeout": "none", 00:24:47.976 "timeout_us": 0, 00:24:47.976 "timeout_admin_us": 0, 00:24:47.976 "keep_alive_timeout_ms": 10000, 00:24:47.976 "arbitration_burst": 0, 00:24:47.976 "low_priority_weight": 0, 00:24:47.976 "medium_priority_weight": 0, 00:24:47.976 "high_priority_weight": 0, 00:24:47.976 "nvme_adminq_poll_period_us": 10000, 00:24:47.976 "nvme_ioq_poll_period_us": 0, 00:24:47.976 "io_queue_requests": 512, 00:24:47.976 "delay_cmd_submit": true, 00:24:47.976 "transport_retry_count": 4, 00:24:47.976 "bdev_retry_count": 3, 00:24:47.976 "transport_ack_timeout": 0, 00:24:47.976 "ctrlr_loss_timeout_sec": 0, 00:24:47.976 "reconnect_delay_sec": 0, 00:24:47.976 "fast_io_fail_timeout_sec": 0, 00:24:47.976 "disable_auto_failback": false, 00:24:47.976 "generate_uuids": false, 00:24:47.976 "transport_tos": 0, 00:24:47.976 "nvme_error_stat": false, 00:24:47.976 "rdma_srq_size": 0, 00:24:47.976 "io_path_stat": false, 00:24:47.976 "allow_accel_sequence": false, 00:24:47.976 "rdma_max_cq_size": 0, 00:24:47.976 "rdma_cm_event_timeout_ms": 0, 00:24:47.976 "dhchap_digests": [ 00:24:47.976 "sha256", 00:24:47.976 "sha384", 00:24:47.976 "sha512" 00:24:47.976 ], 00:24:47.976 "dhchap_dhgroups": [ 00:24:47.976 "null", 00:24:47.976 "ffdhe2048", 00:24:47.976 "ffdhe3072", 00:24:47.976 "ffdhe4096", 00:24:47.976 "ffdhe6144", 00:24:47.976 "ffdhe8192" 00:24:47.976 ] 00:24:47.976 } 00:24:47.976 }, 00:24:47.976 { 00:24:47.976 "method": "bdev_nvme_attach_controller", 00:24:47.976 "params": { 00:24:47.976 "name": "nvme0", 00:24:47.976 "trtype": "TCP", 00:24:47.976 "adrfam": "IPv4", 00:24:47.976 "traddr": "127.0.0.1", 00:24:47.976 "trsvcid": "4420", 00:24:47.976 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:47.976 "prchk_reftag": false, 00:24:47.976 "prchk_guard": false, 00:24:47.976 "ctrlr_loss_timeout_sec": 0, 00:24:47.976 "reconnect_delay_sec": 0, 00:24:47.976 "fast_io_fail_timeout_sec": 0, 00:24:47.976 "psk": "key0", 00:24:47.976 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:47.976 "hdgst": false, 00:24:47.976 "ddgst": false 00:24:47.976 } 00:24:47.976 }, 00:24:47.976 { 00:24:47.976 "method": 
"bdev_nvme_set_hotplug", 00:24:47.976 "params": { 00:24:47.976 "period_us": 100000, 00:24:47.976 "enable": false 00:24:47.976 } 00:24:47.976 }, 00:24:47.976 { 00:24:47.976 "method": "bdev_wait_for_examine" 00:24:47.976 } 00:24:47.976 ] 00:24:47.976 }, 00:24:47.976 { 00:24:47.976 "subsystem": "nbd", 00:24:47.976 "config": [] 00:24:47.976 } 00:24:47.976 ] 00:24:47.976 }' 00:24:47.977 12:52:46 -- keyring/file.sh@111 -- # killprocess 1299408 00:24:47.977 12:52:46 -- common/autotest_common.sh@936 -- # '[' -z 1299408 ']' 00:24:47.977 12:52:46 -- common/autotest_common.sh@940 -- # kill -0 1299408 00:24:47.977 12:52:46 -- common/autotest_common.sh@941 -- # uname 00:24:47.977 12:52:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:47.977 12:52:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1299408 00:24:47.977 12:52:47 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:24:47.977 12:52:47 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:24:47.977 12:52:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1299408' 00:24:47.977 killing process with pid 1299408 00:24:47.977 12:52:47 -- common/autotest_common.sh@955 -- # kill 1299408 00:24:47.977 Received shutdown signal, test time was about 1.000000 seconds 00:24:47.977 00:24:47.977 Latency(us) 00:24:47.977 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:47.977 =================================================================================================================== 00:24:47.977 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:47.977 12:52:47 -- common/autotest_common.sh@960 -- # wait 1299408 00:24:48.234 12:52:47 -- keyring/file.sh@114 -- # bperfpid=1300877 00:24:48.234 12:52:47 -- keyring/file.sh@116 -- # waitforlisten 1300877 /var/tmp/bperf.sock 00:24:48.234 12:52:47 -- common/autotest_common.sh@817 -- # '[' -z 1300877 ']' 00:24:48.234 12:52:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:48.234 12:52:47 -- keyring/file.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:24:48.234 12:52:47 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:48.234 12:52:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:48.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:24:48.234 12:52:47 -- keyring/file.sh@112 -- # echo '{ 00:24:48.234 "subsystems": [ 00:24:48.234 { 00:24:48.234 "subsystem": "keyring", 00:24:48.234 "config": [ 00:24:48.234 { 00:24:48.234 "method": "keyring_file_add_key", 00:24:48.234 "params": { 00:24:48.234 "name": "key0", 00:24:48.234 "path": "/tmp/tmp.PbCcNgRYHO" 00:24:48.234 } 00:24:48.234 }, 00:24:48.234 { 00:24:48.234 "method": "keyring_file_add_key", 00:24:48.234 "params": { 00:24:48.234 "name": "key1", 00:24:48.234 "path": "/tmp/tmp.KF7IxFtwZj" 00:24:48.234 } 00:24:48.234 } 00:24:48.234 ] 00:24:48.234 }, 00:24:48.234 { 00:24:48.234 "subsystem": "iobuf", 00:24:48.234 "config": [ 00:24:48.234 { 00:24:48.234 "method": "iobuf_set_options", 00:24:48.234 "params": { 00:24:48.234 "small_pool_count": 8192, 00:24:48.234 "large_pool_count": 1024, 00:24:48.234 "small_bufsize": 8192, 00:24:48.234 "large_bufsize": 135168 00:24:48.234 } 00:24:48.234 } 00:24:48.234 ] 00:24:48.234 }, 00:24:48.234 { 00:24:48.234 "subsystem": "sock", 00:24:48.234 "config": [ 00:24:48.234 { 00:24:48.234 "method": "sock_impl_set_options", 00:24:48.234 "params": { 00:24:48.234 "impl_name": "posix", 00:24:48.234 "recv_buf_size": 2097152, 00:24:48.234 "send_buf_size": 2097152, 00:24:48.234 "enable_recv_pipe": true, 00:24:48.234 "enable_quickack": false, 00:24:48.234 "enable_placement_id": 0, 00:24:48.234 "enable_zerocopy_send_server": true, 00:24:48.234 "enable_zerocopy_send_client": false, 00:24:48.234 "zerocopy_threshold": 0, 00:24:48.234 "tls_version": 0, 00:24:48.234 "enable_ktls": false 00:24:48.234 } 00:24:48.234 }, 00:24:48.234 { 00:24:48.234 "method": "sock_impl_set_options", 00:24:48.234 "params": { 00:24:48.234 "impl_name": "ssl", 00:24:48.234 "recv_buf_size": 4096, 00:24:48.234 "send_buf_size": 4096, 00:24:48.234 "enable_recv_pipe": true, 00:24:48.234 "enable_quickack": false, 00:24:48.235 "enable_placement_id": 0, 00:24:48.235 "enable_zerocopy_send_server": true, 00:24:48.235 "enable_zerocopy_send_client": false, 00:24:48.235 "zerocopy_threshold": 0, 00:24:48.235 "tls_version": 0, 00:24:48.235 "enable_ktls": false 00:24:48.235 } 00:24:48.235 } 00:24:48.235 ] 00:24:48.235 }, 00:24:48.235 { 00:24:48.235 "subsystem": "vmd", 00:24:48.235 "config": [] 00:24:48.235 }, 00:24:48.235 { 00:24:48.235 "subsystem": "accel", 00:24:48.235 "config": [ 00:24:48.235 { 00:24:48.235 "method": "accel_set_options", 00:24:48.235 "params": { 00:24:48.235 "small_cache_size": 128, 00:24:48.235 "large_cache_size": 16, 00:24:48.235 "task_count": 2048, 00:24:48.235 "sequence_count": 2048, 00:24:48.235 "buf_count": 2048 00:24:48.235 } 00:24:48.235 } 00:24:48.235 ] 00:24:48.235 }, 00:24:48.235 { 00:24:48.235 "subsystem": "bdev", 00:24:48.235 "config": [ 00:24:48.235 { 00:24:48.235 "method": "bdev_set_options", 00:24:48.235 "params": { 00:24:48.235 "bdev_io_pool_size": 65535, 00:24:48.235 "bdev_io_cache_size": 256, 00:24:48.235 "bdev_auto_examine": true, 00:24:48.235 "iobuf_small_cache_size": 128, 00:24:48.235 "iobuf_large_cache_size": 16 00:24:48.235 } 00:24:48.235 }, 00:24:48.235 { 00:24:48.235 "method": "bdev_raid_set_options", 00:24:48.235 "params": { 00:24:48.235 "process_window_size_kb": 1024 00:24:48.235 } 00:24:48.235 }, 00:24:48.235 { 00:24:48.235 "method": "bdev_iscsi_set_options", 00:24:48.235 "params": { 00:24:48.235 "timeout_sec": 30 00:24:48.235 } 00:24:48.235 }, 00:24:48.235 { 00:24:48.235 "method": "bdev_nvme_set_options", 00:24:48.235 "params": { 00:24:48.235 "action_on_timeout": "none", 00:24:48.235 "timeout_us": 0, 00:24:48.235 "timeout_admin_us": 0, 00:24:48.235 
"keep_alive_timeout_ms": 10000, 00:24:48.235 "arbitration_burst": 0, 00:24:48.235 "low_priority_weight": 0, 00:24:48.235 "medium_priority_weight": 0, 00:24:48.235 "high_priority_weight": 0, 00:24:48.235 "nvme_adminq_poll_period_us": 10000, 00:24:48.235 "nvme_ioq_poll_period_us": 0, 00:24:48.235 "io_queue_requests": 512, 00:24:48.235 "delay_cmd_submit": true, 00:24:48.235 "transport_retry_count": 4, 00:24:48.235 "bdev_retry_count": 3, 00:24:48.235 "transport_ack_timeout": 0, 00:24:48.235 "ctrlr_loss_timeout_sec": 0, 00:24:48.235 "reconnect_delay_sec": 0, 00:24:48.235 "fast_io_fail_timeout_sec": 0, 00:24:48.235 "disable_auto_failback": false, 00:24:48.235 "generate_uuids": false, 00:24:48.235 "transport_tos": 0, 00:24:48.235 "nvme_error_stat": false, 00:24:48.235 "rdma_srq_size": 0, 00:24:48.235 "io_path_stat": false, 00:24:48.235 "allow_accel_sequence": false, 00:24:48.235 "rdma_max_cq_size": 0, 00:24:48.235 "rdma_cm_event_timeout_ms": 0, 00:24:48.235 "dhchap_digests": [ 00:24:48.235 "sha256", 00:24:48.235 "sha384", 00:24:48.235 "sha512" 00:24:48.235 ], 00:24:48.235 "dhchap_dhgroups": [ 00:24:48.235 "null", 00:24:48.235 "ffdhe2048", 00:24:48.235 "ffdhe3072", 00:24:48.235 "ffdhe4096", 00:24:48.235 "ffdhe6144", 00:24:48.235 "ffdhe8192" 00:24:48.235 ] 00:24:48.235 } 00:24:48.235 }, 00:24:48.235 { 00:24:48.235 "method": "bdev_nvme_attach_controller", 00:24:48.235 "params": { 00:24:48.235 "name": "nvme0", 00:24:48.235 "trtype": "TCP", 00:24:48.235 "adrfam": "IPv4", 00:24:48.235 "traddr": "127.0.0.1", 00:24:48.235 "trsvcid": "4420", 00:24:48.235 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:48.235 "prchk_reftag": false, 00:24:48.235 "prchk_guard": false, 00:24:48.235 "ctrlr_loss_timeout_sec": 0, 00:24:48.235 "reconnect_delay_sec": 0, 00:24:48.235 "fast_io_fail_timeout_sec": 0, 00:24:48.235 "psk": "key0", 00:24:48.235 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:48.235 "hdgst": false, 00:24:48.235 "ddgst": false 00:24:48.235 } 00:24:48.235 }, 00:24:48.235 { 00:24:48.235 "method": "bdev_nvme_set_hotplug", 00:24:48.235 "params": { 00:24:48.235 "period_us": 100000, 00:24:48.235 "enable": false 00:24:48.235 } 00:24:48.235 }, 00:24:48.235 { 00:24:48.235 "method": "bdev_wait_for_examine" 00:24:48.235 } 00:24:48.235 ] 00:24:48.235 }, 00:24:48.235 { 00:24:48.235 "subsystem": "nbd", 00:24:48.235 "config": [] 00:24:48.235 } 00:24:48.235 ] 00:24:48.235 }' 00:24:48.235 12:52:47 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:48.235 12:52:47 -- common/autotest_common.sh@10 -- # set +x 00:24:48.494 [2024-04-16 12:52:47.342583] Starting SPDK v24.05-pre git sha1 1b4773b8f / DPDK 24.03.0 initialization... 
00:24:48.494 [2024-04-16 12:52:47.342672] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1300877 ] 00:24:48.494 EAL: No free 2048 kB hugepages reported on node 1 00:24:48.494 [2024-04-16 12:52:47.416178] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:48.494 [2024-04-16 12:52:47.527591] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:48.752 [2024-04-16 12:52:47.708408] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:49.318 12:52:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:49.318 12:52:48 -- common/autotest_common.sh@850 -- # return 0 00:24:49.318 12:52:48 -- keyring/file.sh@117 -- # bperf_cmd keyring_get_keys 00:24:49.318 12:52:48 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:49.318 12:52:48 -- keyring/file.sh@117 -- # jq length 00:24:49.577 12:52:48 -- keyring/file.sh@117 -- # (( 2 == 2 )) 00:24:49.577 12:52:48 -- keyring/file.sh@118 -- # get_refcnt key0 00:24:49.577 12:52:48 -- keyring/common.sh@12 -- # get_key key0 00:24:49.577 12:52:48 -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:49.577 12:52:48 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:49.577 12:52:48 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:49.577 12:52:48 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:49.835 12:52:48 -- keyring/file.sh@118 -- # (( 2 == 2 )) 00:24:49.835 12:52:48 -- keyring/file.sh@119 -- # get_refcnt key1 00:24:49.835 12:52:48 -- keyring/common.sh@12 -- # get_key key1 00:24:49.835 12:52:48 -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:49.835 12:52:48 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:49.835 12:52:48 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:49.835 12:52:48 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:24:50.092 12:52:48 -- keyring/file.sh@119 -- # (( 1 == 1 )) 00:24:50.092 12:52:48 -- keyring/file.sh@120 -- # bperf_cmd bdev_nvme_get_controllers 00:24:50.092 12:52:48 -- keyring/file.sh@120 -- # jq -r '.[].name' 00:24:50.092 12:52:48 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:24:50.350 12:52:49 -- keyring/file.sh@120 -- # [[ nvme0 == nvme0 ]] 00:24:50.350 12:52:49 -- keyring/file.sh@1 -- # cleanup 00:24:50.350 12:52:49 -- keyring/file.sh@19 -- # rm -f /tmp/tmp.PbCcNgRYHO /tmp/tmp.KF7IxFtwZj 00:24:50.350 12:52:49 -- keyring/file.sh@20 -- # killprocess 1300877 00:24:50.350 12:52:49 -- common/autotest_common.sh@936 -- # '[' -z 1300877 ']' 00:24:50.350 12:52:49 -- common/autotest_common.sh@940 -- # kill -0 1300877 00:24:50.350 12:52:49 -- common/autotest_common.sh@941 -- # uname 00:24:50.350 12:52:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:50.350 12:52:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1300877 00:24:50.350 12:52:49 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:24:50.350 12:52:49 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:24:50.350 12:52:49 -- 
common/autotest_common.sh@954 -- # echo 'killing process with pid 1300877' 00:24:50.350 killing process with pid 1300877 00:24:50.350 12:52:49 -- common/autotest_common.sh@955 -- # kill 1300877 00:24:50.350 Received shutdown signal, test time was about 1.000000 seconds 00:24:50.350 00:24:50.350 Latency(us) 00:24:50.350 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:50.350 =================================================================================================================== 00:24:50.350 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:50.350 12:52:49 -- common/autotest_common.sh@960 -- # wait 1300877 00:24:50.608 12:52:49 -- keyring/file.sh@21 -- # killprocess 1299273 00:24:50.608 12:52:49 -- common/autotest_common.sh@936 -- # '[' -z 1299273 ']' 00:24:50.608 12:52:49 -- common/autotest_common.sh@940 -- # kill -0 1299273 00:24:50.608 12:52:49 -- common/autotest_common.sh@941 -- # uname 00:24:50.608 12:52:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:50.608 12:52:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1299273 00:24:50.608 12:52:49 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:50.608 12:52:49 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:50.608 12:52:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1299273' 00:24:50.608 killing process with pid 1299273 00:24:50.608 12:52:49 -- common/autotest_common.sh@955 -- # kill 1299273 00:24:50.608 [2024-04-16 12:52:49.588892] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:24:50.608 12:52:49 -- common/autotest_common.sh@960 -- # wait 1299273 00:24:51.174 00:24:51.174 real 0m15.215s 00:24:51.174 user 0m36.817s 00:24:51.174 sys 0m3.335s 00:24:51.174 12:52:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:51.174 12:52:50 -- common/autotest_common.sh@10 -- # set +x 00:24:51.174 ************************************ 00:24:51.174 END TEST keyring_file 00:24:51.174 ************************************ 00:24:51.174 12:52:50 -- spdk/autotest.sh@294 -- # [[ n == y ]] 00:24:51.174 12:52:50 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:24:51.174 12:52:50 -- spdk/autotest.sh@310 -- # '[' 0 -eq 1 ']' 00:24:51.174 12:52:50 -- spdk/autotest.sh@314 -- # '[' 0 -eq 1 ']' 00:24:51.174 12:52:50 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:24:51.174 12:52:50 -- spdk/autotest.sh@328 -- # '[' 0 -eq 1 ']' 00:24:51.174 12:52:50 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:24:51.174 12:52:50 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']' 00:24:51.174 12:52:50 -- spdk/autotest.sh@341 -- # '[' 0 -eq 1 ']' 00:24:51.174 12:52:50 -- spdk/autotest.sh@345 -- # '[' 0 -eq 1 ']' 00:24:51.174 12:52:50 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:24:51.174 12:52:50 -- spdk/autotest.sh@354 -- # '[' 0 -eq 1 ']' 00:24:51.174 12:52:50 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]] 00:24:51.174 12:52:50 -- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]] 00:24:51.174 12:52:50 -- spdk/autotest.sh@369 -- # [[ 0 -eq 1 ]] 00:24:51.174 12:52:50 -- spdk/autotest.sh@373 -- # [[ 0 -eq 1 ]] 00:24:51.174 12:52:50 -- spdk/autotest.sh@378 -- # trap - SIGINT SIGTERM EXIT 00:24:51.174 12:52:50 -- spdk/autotest.sh@380 -- # timing_enter post_cleanup 00:24:51.174 12:52:50 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:51.174 12:52:50 -- common/autotest_common.sh@10 -- # set +x 00:24:51.174 12:52:50 -- spdk/autotest.sh@381 -- # 
autotest_cleanup 00:24:51.174 12:52:50 -- common/autotest_common.sh@1378 -- # local autotest_es=0 00:24:51.174 12:52:50 -- common/autotest_common.sh@1379 -- # xtrace_disable 00:24:51.174 12:52:50 -- common/autotest_common.sh@10 -- # set +x 00:24:53.075 INFO: APP EXITING 00:24:53.075 INFO: killing all VMs 00:24:53.075 INFO: killing vhost app 00:24:53.075 WARN: no vhost pid file found 00:24:53.075 INFO: EXIT DONE 00:24:54.009 0000:81:00.0 (8086 0a54): Already using the nvme driver 00:24:54.272 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:24:54.272 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:24:54.272 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:24:54.272 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:24:54.272 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:24:54.272 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:24:54.272 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:24:54.272 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:24:54.272 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:24:54.272 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:24:54.272 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:24:54.272 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:24:54.272 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:24:54.272 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:24:54.272 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:24:54.272 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:24:55.654 Cleaning 00:24:55.654 Removing: /var/run/dpdk/spdk0/config 00:24:55.654 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:24:55.654 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:24:55.654 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:24:55.654 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:24:55.654 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:24:55.654 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:24:55.654 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:24:55.654 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:24:55.912 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:24:55.912 Removing: /var/run/dpdk/spdk0/hugepage_info 00:24:55.912 Removing: /var/run/dpdk/spdk1/config 00:24:55.912 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:24:55.912 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:24:55.912 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:24:55.912 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:24:55.912 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:24:55.912 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:24:55.912 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:24:55.912 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:24:55.912 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:24:55.912 Removing: /var/run/dpdk/spdk1/hugepage_info 00:24:55.912 Removing: /var/run/dpdk/spdk1/mp_socket 00:24:55.912 Removing: /var/run/dpdk/spdk2/config 00:24:55.912 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:24:55.912 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:24:55.912 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:24:55.912 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:24:55.912 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:24:55.912 Removing: 
/var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:24:55.912 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:24:55.912 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:24:55.912 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:24:55.912 Removing: /var/run/dpdk/spdk2/hugepage_info 00:24:55.912 Removing: /var/run/dpdk/spdk3/config 00:24:55.912 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:24:55.912 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:24:55.912 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:24:55.912 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:24:55.912 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:24:55.912 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:24:55.912 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:24:55.912 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:24:55.912 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:24:55.912 Removing: /var/run/dpdk/spdk3/hugepage_info 00:24:55.912 Removing: /var/run/dpdk/spdk4/config 00:24:55.912 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:24:55.912 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:24:55.912 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:24:55.912 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:24:55.912 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:24:55.912 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:24:55.912 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:24:55.912 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:24:55.912 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:24:55.912 Removing: /var/run/dpdk/spdk4/hugepage_info 00:24:55.912 Removing: /dev/shm/bdev_svc_trace.1 00:24:55.912 Removing: /dev/shm/nvmf_trace.0 00:24:55.912 Removing: /dev/shm/spdk_tgt_trace.pid1050766 00:24:55.912 Removing: /var/run/dpdk/spdk0 00:24:55.912 Removing: /var/run/dpdk/spdk1 00:24:55.912 Removing: /var/run/dpdk/spdk2 00:24:55.912 Removing: /var/run/dpdk/spdk3 00:24:55.912 Removing: /var/run/dpdk/spdk4 00:24:55.912 Removing: /var/run/dpdk/spdk_pid1048795 00:24:55.912 Removing: /var/run/dpdk/spdk_pid1049675 00:24:55.912 Removing: /var/run/dpdk/spdk_pid1050766 00:24:55.912 Removing: /var/run/dpdk/spdk_pid1051256 00:24:55.912 Removing: /var/run/dpdk/spdk_pid1051946 00:24:55.912 Removing: /var/run/dpdk/spdk_pid1052092 00:24:55.912 Removing: /var/run/dpdk/spdk_pid1052925 00:24:55.912 Removing: /var/run/dpdk/spdk_pid1054245 00:24:55.912 Removing: /var/run/dpdk/spdk_pid1055416 00:24:55.912 Removing: /var/run/dpdk/spdk_pid1055620 00:24:55.912 Removing: /var/run/dpdk/spdk_pid1055815 00:24:55.912 Removing: /var/run/dpdk/spdk_pid1056143 00:24:55.912 Removing: /var/run/dpdk/spdk_pid1056354 00:24:55.912 Removing: /var/run/dpdk/spdk_pid1056518 00:24:55.912 Removing: /var/run/dpdk/spdk_pid1056688 00:24:55.912 Removing: /var/run/dpdk/spdk_pid1056991 00:24:55.912 Removing: /var/run/dpdk/spdk_pid1057593 00:24:55.912 Removing: /var/run/dpdk/spdk_pid1059960 00:24:55.912 Removing: /var/run/dpdk/spdk_pid1060131 00:24:55.912 Removing: /var/run/dpdk/spdk_pid1060299 00:24:55.912 Removing: /var/run/dpdk/spdk_pid1060432 00:24:55.912 Removing: /var/run/dpdk/spdk_pid1060746 00:24:55.912 Removing: /var/run/dpdk/spdk_pid1060873 00:24:55.912 Removing: /var/run/dpdk/spdk_pid1061216 00:24:55.912 Removing: /var/run/dpdk/spdk_pid1061325 00:24:55.912 Removing: /var/run/dpdk/spdk_pid1061625 00:24:55.912 Removing: /var/run/dpdk/spdk_pid1061788 00:24:55.912 Removing: 
/var/run/dpdk/spdk_pid1062032 00:24:55.912 Removing: /var/run/dpdk/spdk_pid1062113 00:24:55.912 Removing: /var/run/dpdk/spdk_pid1062557 00:24:55.912 Removing: /var/run/dpdk/spdk_pid1062728 00:24:56.171 Removing: /var/run/dpdk/spdk_pid1063291 00:24:56.171 Removing: /var/run/dpdk/spdk_pid1063727 00:24:56.171 Removing: /var/run/dpdk/spdk_pid1063771 00:24:56.171 Removing: /var/run/dpdk/spdk_pid1063975 00:24:56.171 Removing: /var/run/dpdk/spdk_pid1064150 00:24:56.171 Removing: /var/run/dpdk/spdk_pid1064423 00:24:56.171 Removing: /var/run/dpdk/spdk_pid1064590 00:24:56.171 Removing: /var/run/dpdk/spdk_pid1064855 00:24:56.171 Removing: /var/run/dpdk/spdk_pid1065038 00:24:56.171 Removing: /var/run/dpdk/spdk_pid1065210 00:24:56.171 Removing: /var/run/dpdk/spdk_pid1065486 00:24:56.171 Removing: /var/run/dpdk/spdk_pid1065651 00:24:56.171 Removing: /var/run/dpdk/spdk_pid1065935 00:24:56.171 Removing: /var/run/dpdk/spdk_pid1066098 00:24:56.171 Removing: /var/run/dpdk/spdk_pid1066370 00:24:56.171 Removing: /var/run/dpdk/spdk_pid1066549 00:24:56.171 Removing: /var/run/dpdk/spdk_pid1066712 00:24:56.171 Removing: /var/run/dpdk/spdk_pid1066997 00:24:56.171 Removing: /var/run/dpdk/spdk_pid1067161 00:24:56.171 Removing: /var/run/dpdk/spdk_pid1067446 00:24:56.171 Removing: /var/run/dpdk/spdk_pid1067614 00:24:56.171 Removing: /var/run/dpdk/spdk_pid1067884 00:24:56.171 Removing: /var/run/dpdk/spdk_pid1068061 00:24:56.171 Removing: /var/run/dpdk/spdk_pid1068232 00:24:56.171 Removing: /var/run/dpdk/spdk_pid1068426 00:24:56.171 Removing: /var/run/dpdk/spdk_pid1068779 00:24:56.171 Removing: /var/run/dpdk/spdk_pid1071278 00:24:56.171 Removing: /var/run/dpdk/spdk_pid1100513 00:24:56.171 Removing: /var/run/dpdk/spdk_pid1103672 00:24:56.171 Removing: /var/run/dpdk/spdk_pid1110413 00:24:56.171 Removing: /var/run/dpdk/spdk_pid1114079 00:24:56.171 Removing: /var/run/dpdk/spdk_pid1116740 00:24:56.171 Removing: /var/run/dpdk/spdk_pid1117257 00:24:56.171 Removing: /var/run/dpdk/spdk_pid1125382 00:24:56.171 Removing: /var/run/dpdk/spdk_pid1125393 00:24:56.171 Removing: /var/run/dpdk/spdk_pid1126042 00:24:56.171 Removing: /var/run/dpdk/spdk_pid1126584 00:24:56.171 Removing: /var/run/dpdk/spdk_pid1127241 00:24:56.171 Removing: /var/run/dpdk/spdk_pid1127771 00:24:56.171 Removing: /var/run/dpdk/spdk_pid1127778 00:24:56.171 Removing: /var/run/dpdk/spdk_pid1127915 00:24:56.171 Removing: /var/run/dpdk/spdk_pid1128051 00:24:56.171 Removing: /var/run/dpdk/spdk_pid1128054 00:24:56.171 Removing: /var/run/dpdk/spdk_pid1128712 00:24:56.171 Removing: /var/run/dpdk/spdk_pid1129369 00:24:56.171 Removing: /var/run/dpdk/spdk_pid1129913 00:24:56.171 Removing: /var/run/dpdk/spdk_pid1130315 00:24:56.171 Removing: /var/run/dpdk/spdk_pid1130432 00:24:56.171 Removing: /var/run/dpdk/spdk_pid1130576 00:24:56.171 Removing: /var/run/dpdk/spdk_pid1131465 00:24:56.172 Removing: /var/run/dpdk/spdk_pid1132323 00:24:56.172 Removing: /var/run/dpdk/spdk_pid1138757 00:24:56.172 Removing: /var/run/dpdk/spdk_pid1139034 00:24:56.172 Removing: /var/run/dpdk/spdk_pid1141989 00:24:56.172 Removing: /var/run/dpdk/spdk_pid1146102 00:24:56.172 Removing: /var/run/dpdk/spdk_pid1148161 00:24:56.172 Removing: /var/run/dpdk/spdk_pid1155276 00:24:56.172 Removing: /var/run/dpdk/spdk_pid1161194 00:24:56.172 Removing: /var/run/dpdk/spdk_pid1162387 00:24:56.172 Removing: /var/run/dpdk/spdk_pid1163055 00:24:56.172 Removing: /var/run/dpdk/spdk_pid1175152 00:24:56.172 Removing: /var/run/dpdk/spdk_pid1177792 00:24:56.172 Removing: /var/run/dpdk/spdk_pid1180881 00:24:56.172 Removing: 
/var/run/dpdk/spdk_pid1182065 00:24:56.172 Removing: /var/run/dpdk/spdk_pid1183379 00:24:56.172 Removing: /var/run/dpdk/spdk_pid1183519 00:24:56.172 Removing: /var/run/dpdk/spdk_pid1183657 00:24:56.172 Removing: /var/run/dpdk/spdk_pid1183678 00:24:56.172 Removing: /var/run/dpdk/spdk_pid1184113 00:24:56.172 Removing: /var/run/dpdk/spdk_pid1185431 00:24:56.172 Removing: /var/run/dpdk/spdk_pid1186289 00:24:56.172 Removing: /var/run/dpdk/spdk_pid1186604 00:24:56.172 Removing: /var/run/dpdk/spdk_pid1188345 00:24:56.172 Removing: /var/run/dpdk/spdk_pid1188912 00:24:56.172 Removing: /var/run/dpdk/spdk_pid1189461 00:24:56.172 Removing: /var/run/dpdk/spdk_pid1192288 00:24:56.172 Removing: /var/run/dpdk/spdk_pid1198913 00:24:56.172 Removing: /var/run/dpdk/spdk_pid1201801 00:24:56.172 Removing: /var/run/dpdk/spdk_pid1206034 00:24:56.172 Removing: /var/run/dpdk/spdk_pid1207418 00:24:56.172 Removing: /var/run/dpdk/spdk_pid1208828 00:24:56.172 Removing: /var/run/dpdk/spdk_pid1211924 00:24:56.172 Removing: /var/run/dpdk/spdk_pid1214591 00:24:56.172 Removing: /var/run/dpdk/spdk_pid1219675 00:24:56.172 Removing: /var/run/dpdk/spdk_pid1219787 00:24:56.172 Removing: /var/run/dpdk/spdk_pid1222866 00:24:56.172 Removing: /var/run/dpdk/spdk_pid1223115 00:24:56.172 Removing: /var/run/dpdk/spdk_pid1223255 00:24:56.172 Removing: /var/run/dpdk/spdk_pid1223524 00:24:56.172 Removing: /var/run/dpdk/spdk_pid1223530 00:24:56.430 Removing: /var/run/dpdk/spdk_pid1226590 00:24:56.430 Removing: /var/run/dpdk/spdk_pid1227032 00:24:56.430 Removing: /var/run/dpdk/spdk_pid1230003 00:24:56.430 Removing: /var/run/dpdk/spdk_pid1231984 00:24:56.430 Removing: /var/run/dpdk/spdk_pid1235834 00:24:56.430 Removing: /var/run/dpdk/spdk_pid1239438 00:24:56.430 Removing: /var/run/dpdk/spdk_pid1244301 00:24:56.430 Removing: /var/run/dpdk/spdk_pid1244304 00:24:56.430 Removing: /var/run/dpdk/spdk_pid1258639 00:24:56.430 Removing: /var/run/dpdk/spdk_pid1259175 00:24:56.430 Removing: /var/run/dpdk/spdk_pid1259718 00:24:56.430 Removing: /var/run/dpdk/spdk_pid1260253 00:24:56.430 Removing: /var/run/dpdk/spdk_pid1260978 00:24:56.430 Removing: /var/run/dpdk/spdk_pid1261388 00:24:56.430 Removing: /var/run/dpdk/spdk_pid1261919 00:24:56.430 Removing: /var/run/dpdk/spdk_pid1262454 00:24:56.430 Removing: /var/run/dpdk/spdk_pid1265377 00:24:56.430 Removing: /var/run/dpdk/spdk_pid1265527 00:24:56.430 Removing: /var/run/dpdk/spdk_pid1269749 00:24:56.430 Removing: /var/run/dpdk/spdk_pid1269928 00:24:56.430 Removing: /var/run/dpdk/spdk_pid1271611 00:24:56.430 Removing: /var/run/dpdk/spdk_pid1277269 00:24:56.430 Removing: /var/run/dpdk/spdk_pid1277279 00:24:56.430 Removing: /var/run/dpdk/spdk_pid1280833 00:24:56.430 Removing: /var/run/dpdk/spdk_pid1282245 00:24:56.430 Removing: /var/run/dpdk/spdk_pid1284275 00:24:56.430 Removing: /var/run/dpdk/spdk_pid1285136 00:24:56.430 Removing: /var/run/dpdk/spdk_pid1286549 00:24:56.430 Removing: /var/run/dpdk/spdk_pid1287429 00:24:56.430 Removing: /var/run/dpdk/spdk_pid1293354 00:24:56.430 Removing: /var/run/dpdk/spdk_pid1293745 00:24:56.430 Removing: /var/run/dpdk/spdk_pid1294137 00:24:56.430 Removing: /var/run/dpdk/spdk_pid1295935 00:24:56.430 Removing: /var/run/dpdk/spdk_pid1296222 00:24:56.430 Removing: /var/run/dpdk/spdk_pid1296613 00:24:56.430 Removing: /var/run/dpdk/spdk_pid1299273 00:24:56.430 Removing: /var/run/dpdk/spdk_pid1299408 00:24:56.430 Removing: /var/run/dpdk/spdk_pid1300877 00:24:56.430 Clean 00:24:56.430 12:52:55 -- common/autotest_common.sh@1437 -- # return 0 00:24:56.430 12:52:55 -- 
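The Clean step behind the long Removing: listing above sweeps per-process DPDK runtime state (config, the fbarray_* memory-map files, hugepage_info, mp_socket under /var/run/dpdk/spdk*) plus the trace files under /dev/shm. An illustrative equivalent, with globs derived from the listing (this loop is a sketch of the same effect, not the actual autotest cleanup code):

    # Illustrative sweep matching the "Removing:" lines above.
    for d in /var/run/dpdk/spdk[0-9]* /var/run/dpdk/spdk_pid*; do
        sudo rm -rf "$d"
    done
    sudo rm -f /dev/shm/bdev_svc_trace.* /dev/shm/nvmf_trace.* \
               /dev/shm/spdk_tgt_trace.pid*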
spdk/autotest.sh@382 -- # timing_exit post_cleanup 00:24:56.430 12:52:55 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:56.430 12:52:55 -- common/autotest_common.sh@10 -- # set +x 00:24:56.688 12:52:55 -- spdk/autotest.sh@384 -- # timing_exit autotest 00:24:56.688 12:52:55 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:56.688 12:52:55 -- common/autotest_common.sh@10 -- # set +x 00:24:56.688 12:52:55 -- spdk/autotest.sh@385 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:24:56.688 12:52:55 -- spdk/autotest.sh@387 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:24:56.688 12:52:55 -- spdk/autotest.sh@387 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:24:56.688 12:52:55 -- spdk/autotest.sh@389 -- # hash lcov 00:24:56.688 12:52:55 -- spdk/autotest.sh@389 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:24:56.688 12:52:55 -- spdk/autotest.sh@391 -- # hostname 00:24:56.688 12:52:55 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-12 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:24:56.688 geninfo: WARNING: invalid characters removed from testname! 00:25:23.295 12:53:21 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:25:27.486 12:53:25 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:25:30.022 12:53:28 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:25:32.560 12:53:31 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:25:35.853 12:53:34 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:25:38.401 12:53:37 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:25:40.937 12:53:39 -- spdk/autotest.sh@398 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:25:40.937 12:53:39 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:40.937 12:53:39 -- scripts/common.sh@502 -- $ [[ -e /bin/wpdk_common.sh ]] 00:25:40.937 12:53:39 -- scripts/common.sh@510 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:40.937 12:53:39 -- scripts/common.sh@511 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:40.937 12:53:39 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.937 12:53:39 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.937 12:53:39 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.937 12:53:39 -- paths/export.sh@5 -- $ export PATH 00:25:40.937 12:53:39 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.938 12:53:39 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:25:40.938 12:53:39 -- common/autobuild_common.sh@435 -- $ date +%s 00:25:40.938 12:53:39 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1713264819.XXXXXX 00:25:40.938 12:53:39 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1713264819.caSWws 00:25:40.938 12:53:39 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:25:40.938 12:53:39 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:25:40.938 12:53:39 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:25:40.938 12:53:39 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:25:40.938 12:53:39 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:25:40.938 12:53:39 -- common/autobuild_common.sh@451 -- $ get_config_params 00:25:40.938 12:53:39 -- common/autotest_common.sh@385 -- $ xtrace_disable 00:25:40.938 12:53:39 -- common/autotest_common.sh@10 -- $ set +x 00:25:40.938 12:53:39 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:25:40.938 12:53:39 -- common/autobuild_common.sh@453 -- $ start_monitor_resources 00:25:40.938 12:53:39 -- pm/common@17 -- $ local monitor 00:25:40.938 12:53:39 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:25:40.938 12:53:39 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=1310007 00:25:40.938 12:53:39 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:25:40.938 12:53:39 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=1310009 00:25:40.938 12:53:39 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:25:40.938 12:53:39 -- pm/common@21 -- $ date +%s 00:25:40.938 12:53:39 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=1310011 00:25:40.938 12:53:39 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:25:40.938 12:53:39 -- pm/common@21 -- $ date +%s 00:25:40.938 12:53:39 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=1310014 00:25:40.938 12:53:39 -- pm/common@21 -- $ date +%s 00:25:40.938 12:53:39 -- pm/common@26 -- $ sleep 1 00:25:40.938 12:53:39 -- pm/common@21 -- $ date +%s 00:25:40.938 12:53:39 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1713264819 00:25:40.938 12:53:39 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1713264819 00:25:40.938 12:53:39 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1713264819 00:25:40.938 12:53:39 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1713264819 00:25:40.938 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1713264819_collect-vmstat.pm.log 00:25:40.938 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1713264819_collect-bmc-pm.bmc.pm.log 00:25:40.938 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1713264819_collect-cpu-load.pm.log 00:25:40.938 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1713264819_collect-cpu-temp.pm.log 00:25:41.874 
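The lcov sequence a few steps back (autotest.sh@391 through @397) is the standard capture/merge/filter coverage flow: capture the test run with -c -d, fold it into the pre-test baseline with -a, then strip vendored and system paths with -r. A condensed sketch using the same flags, where LCOV_OPTS abbreviates the --rc option string repeated on every call and out/ abbreviates the spdk/../output directory:

    # Condensed form of the lcov calls above (same flags, shortened paths).
    LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 \
      --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 \
      --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q"
    # Capture the test run, then merge it with the pre-test baseline.
    lcov $LCOV_OPTS -c -d spdk -t "$(hostname)" -o out/cov_test.info
    lcov $LCOV_OPTS -a out/cov_base.info -a out/cov_test.info -o out/cov_total.info
    # Strip vendored DPDK, system headers, and the example/app sources.
    for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' \
               '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
        lcov $LCOV_OPTS -r out/cov_total.info "$pat" -o out/cov_total.info
    done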
12:53:40 -- common/autobuild_common.sh@454 -- $ trap stop_monitor_resources EXIT 00:25:41.874 12:53:40 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j48 00:25:41.874 12:53:40 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:25:41.874 12:53:40 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:25:41.874 12:53:40 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:25:41.874 12:53:40 -- spdk/autopackage.sh@19 -- $ timing_finish 00:25:41.874 12:53:40 -- common/autotest_common.sh@722 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:25:41.874 12:53:40 -- common/autotest_common.sh@723 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:25:41.874 12:53:40 -- common/autotest_common.sh@725 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:25:42.134 12:53:40 -- spdk/autopackage.sh@20 -- $ exit 0 00:25:42.134 12:53:40 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:25:42.134 12:53:40 -- pm/common@30 -- $ signal_monitor_resources TERM 00:25:42.134 12:53:40 -- pm/common@41 -- $ local monitor pid pids signal=TERM 00:25:42.134 12:53:40 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:25:42.134 12:53:40 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:25:42.134 12:53:40 -- pm/common@45 -- $ pid=1310029 00:25:42.134 12:53:40 -- pm/common@52 -- $ sudo kill -TERM 1310029 00:25:42.134 12:53:40 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:25:42.134 12:53:40 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:25:42.134 12:53:40 -- pm/common@45 -- $ pid=1310026 00:25:42.134 12:53:40 -- pm/common@52 -- $ sudo kill -TERM 1310026 00:25:42.134 12:53:41 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:25:42.134 12:53:41 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:25:42.134 12:53:41 -- pm/common@45 -- $ pid=1310028 00:25:42.134 12:53:41 -- pm/common@52 -- $ sudo kill -TERM 1310028 00:25:42.134 12:53:41 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:25:42.134 12:53:41 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:25:42.134 12:53:41 -- pm/common@45 -- $ pid=1310027 00:25:42.134 12:53:41 -- pm/common@52 -- $ sudo kill -TERM 1310027 00:25:42.134 + [[ -n 961862 ]] 00:25:42.134 + sudo kill 961862 00:25:42.147 [Pipeline] } 00:25:42.165 [Pipeline] // stage 00:25:42.170 [Pipeline] } 00:25:42.188 [Pipeline] // timeout 00:25:42.193 [Pipeline] } 00:25:42.210 [Pipeline] // catchError 00:25:42.215 [Pipeline] } 00:25:42.232 [Pipeline] // wrap 00:25:42.237 [Pipeline] } 00:25:42.252 [Pipeline] // catchError 00:25:42.260 [Pipeline] stage 00:25:42.262 [Pipeline] { (Epilogue) 00:25:42.277 [Pipeline] catchError 00:25:42.279 [Pipeline] { 00:25:42.293 [Pipeline] echo 00:25:42.295 Cleanup processes 00:25:42.298 [Pipeline] sh 00:25:42.577 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:25:42.577 1310150 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:25:42.577 1310294 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:25:42.591 [Pipeline] sh 00:25:42.871 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:25:42.871 
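The monitor teardown just above is pidfile-driven: each collect-* script started earlier recorded a pid under the power/ output directory, and stop_monitor_resources walks those files and sends SIGTERM. A hedged sketch of that stop path (directory and monitor names taken from this log; reading the pid back out of the pidfile is an assumption about the helper's internals):

    # Sketch of the stop path traced above: one pidfile per monitor under
    # the power/ output dir; TERM whatever is still recorded there.
    power_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power
    for mon in collect-cpu-load collect-vmstat collect-cpu-temp collect-bmc-pm; do
        pidfile="$power_dir/$mon.pid"
        [ -e "$pidfile" ] || continue      # monitor never started or already gone
        sudo kill -TERM "$(<"$pidfile")"   # assumed: pidfile holds the monitor pid
    done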
++ grep -v 'sudo pgrep' 00:25:42.871 ++ awk '{print $1}' 00:25:42.871 + sudo kill -9 1310150 00:25:42.883 [Pipeline] sh 00:25:43.166 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:25:51.302 [Pipeline] sh 00:25:51.586 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:25:51.586 Artifacts sizes are good 00:25:51.601 [Pipeline] archiveArtifacts 00:25:51.608 Archiving artifacts 00:25:51.793 [Pipeline] sh 00:25:52.080 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:25:52.096 [Pipeline] cleanWs 00:25:52.106 [WS-CLEANUP] Deleting project workspace... 00:25:52.106 [WS-CLEANUP] Deferred wipeout is used... 00:25:52.113 [WS-CLEANUP] done 00:25:52.115 [Pipeline] } 00:25:52.134 [Pipeline] // catchError 00:25:52.146 [Pipeline] sh 00:25:52.426 + logger -p user.info -t JENKINS-CI 00:25:52.435 [Pipeline] } 00:25:52.452 [Pipeline] // stage 00:25:52.457 [Pipeline] } 00:25:52.475 [Pipeline] // node 00:25:52.481 [Pipeline] End of Pipeline 00:25:52.522 Finished: SUCCESS
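Both cleanup steps in this pipeline use the same process-sweep idiom visible in the sh steps above: pgrep for anything still running out of the workspace, drop the pgrep line itself, and SIGKILL the survivors. As a standalone sketch (workspace path from the log; the || true guard is added here so the step does not fail when nothing is left running):

    # The pgrep / grep -v / awk / kill -9 idiom from the cleanup steps.
    ws=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    pids=$(sudo pgrep -af "$ws" | grep -v 'sudo pgrep' | awk '{print $1}')
    # $pids is left unquoted on purpose so multiple pids word-split.
    sudo kill -9 $pids || true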